diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 00000000..e69de29b diff --git a/404.html b/404.html new file mode 100644 index 00000000..33489cc5 --- /dev/null +++ b/404.html @@ -0,0 +1,2727 @@ + + + + + + + + + + + + + + + + + + + + + + ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ +
+ +
+ +
+ + + + + + +
+
+ + +
+
+
+ +
+
+
+ + + +
+
+ +

404 - Not found

+ + + + + + +
+
+
+
+ + + + +
+ + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/CNAME b/CNAME new file mode 100644 index 00000000..3925f8f3 --- /dev/null +++ b/CNAME @@ -0,0 +1 @@ +hpc-docs.uni.lu \ No newline at end of file diff --git a/accounts/collaboration_accounts/index.html b/accounts/collaboration_accounts/index.html new file mode 100644 index 00000000..b3ca94d5 --- /dev/null +++ b/accounts/collaboration_accounts/index.html @@ -0,0 +1,2816 @@ + + + + + + + + + + + + + + + + + + + + + + + + Collaboration Accounts - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ +
+ +
+ +
+ + + + + + +
+
+ + +
+
+
+ +
+
+
+ + + +
+
+ + + + + + + + + + +

Collaboration Accounts

+ +

All ULHPC login accounts are associated with specific individuals and +must not be shared. +In some HPC centers, you may be able to request Collaboration Accounts designed to handle the following use cases:

+
    +
  • Collaborative Data Management: + Large scale experimental and simulation data are typically read or written by multiple collaborators and are kept on disk for long periods.
  • +
  • Collaborative Software Management
  • +
  • Collaborative Job Management
  • +
+
+

Info

+

By default, we DO NOT provide Collaboration Accounts and instead encourage the use of shared research projects <name> stored in the Global project directory, which enable group members to manage project data through the appropriate use of unix groups and file permissions.

+

For dedicated job billing and accounting purposes, you should also request the creation of a project account (this will be done for all accepted funded projects).

+

For more details, see Project Accounts documentation.

+
+

We are nevertheless aware of a problem that often arises: files are owned by the collaborator who did the work, and if that collaborator changes roles, the default unix file permissions are usually such that the files cannot be managed (or deleted) by other members of the collaboration, so system administrators must be contacted. Consistent group ownership and permissions on the shared project directory avoid most of these situations, as sketched below. Similarly, for some use cases, Collaboration Accounts would enable members of the team to manage jobs submitted by other team members as necessary. Justified and argued use cases can be submitted to the HPC team, which will find the appropriate solution: open a ticket on the HPC Helpdesk Portal.
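As a minimal sketch of that practice (the directory path and group name below are illustrative placeholders, not ULHPC-specific values), the usual unix mechanisms look as follows:

# /!\ ILLUSTRATIVE -- adapt the path and group to your own project
# Give the whole tree to the project group and keep it group-writable
chgrp -R project_grp /path/to/project_dir
chmod -R g+rwX       /path/to/project_dir
# setgid on directories: new files inherit the project group automatically
find /path/to/project_dir -type d -exec chmod g+s {} +

With such settings in place, data created by any member remains manageable by the rest of the group even if its original owner changes roles.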

+ + + + +
+
+ + + Last update: November 13, 2024 + + +
+ + + + + + + + +
+
+
+
+ + + + +
+ + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/accounts/index.html b/accounts/index.html new file mode 100644 index 00000000..3e22b3c0 --- /dev/null +++ b/accounts/index.html @@ -0,0 +1,3162 @@ + + + + + + + + + + + + + + + + + + + + + + + + Get an Account - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ +
+ +
+ +
+ + + + + + +
+
+ + +
+
+
+ +
+
+
+ + + + + +
+
+ + + + + + + + + + +

Get an Account

+

In order to use the ULHPC facilities, you need to have a user account with an associated user login name (also called username) placed under an account hierarchy.

+

Conditions of acceptance

+

Acceptable Use Policy (AUP)

+ + +

There are a number of policies which apply to ULHPC users.

+

UL HPC Acceptable Use Policy (AUP) [pdf]

+
+

Important

+

All users of UL HPC resources and PIs must abide by the UL HPC Acceptable Use Policy (AUP). +You should read and keep a signed copy of this document before using the facility.

+

Access and/or usage of any ULHPC system implies tacit acceptance of this policy.

+
+ + +

Remember that you are expected to acknowledge ULHPC in your publications. +See Acceptable Use Policy for more details.

+
ULHPC Platforms are meant ONLY for R&D!

The ULHPC facility is meant for Research and Development; it is NOT a full production computing center -- for such needs, consider using the National HPC center.

+

In particular, we cannot make any guarantee of cluster availability or timely job completion, even though we target a minimum compute node availability above 95%, which is typically met -- for instance, KPI statistics for 2019 report a compute node availability above 97%.

+
+

Resource allocation policies

+

ULHPC Usage Charging and Resource allocation policy

+

UL internal R&D and training

+ + +

ULHPC resources are free of charge for UL staff for their internal work and training activities. Principal Investigators (PIs) nevertheless receive, on a regular basis, a usage report of their team's activities on the UL HPC platform. The corresponding accumulated price is provided, even though this amount is purely indicative and will not be charged back.

+

Any other activity will be reviewed with the rectorate and is a priori subject to billing.

+ + +

Research Projects

+ + + + +

Externals and private partners

+ + + + +
+

How to Get a New User account?

+

Account Request Form

+
    +
  1. University staff - you can submit a request for a new ULHPC account by using the ServiceNow portal (Research > HPC > User access & accounts > New HPC account request).
    +Students - submit your account request on the Student Service Portal.
    +Externals - a University staff member must request the account for you, using the section New HPC account for external. Enter the professional data (organization and institutional email address). Specify the line manager / project PI if needed.
  2. +
  3. If you need to access a specific project directory, ask the project directory owner to open a ticket using the section Add user within project.
  4. +
  5. Your account will undergo user checks, in accordance with ULHPC policies, to verify your identity and the information provided. Under some circumstances, there could be a delay while this vetting takes place.
  6. +
  7. After vetting has completed, you will receive a welcome email with your login information, and a unique link to a PrivateBin 1 holding a random temporary password. That link will expire if not used within 24 hours. +The PI and PI Proxies for the project will be notified when applicable.
  8. +
  9. Finally, you will need to log into the HPC IPA Portal to set up your initial password and Multi-Factor Authentication (MFA) for your account. +
  10. +
+
UL HPC ≠ University credentials

Be aware that the source of authentication for the HPC services, based on RedHat IdM/IPA, DIFFERS from the University credentials (based on the UL Active Directory).

+
    +
  • ULHPC credentials are maintained by the HPC team; associated portal: https://hpc-ipa.uni.lu/ipa/ui/
      +
    • authentication service for: UL HPC
    • +
    +
  • +
  • University credentials are maintained by the IT team of the University
      +
    • authentication service for Service Now and all other UL services
    • +
    +
  • +
+
+

Managing User Accounts

+

ULHPC user accounts are managed through the HPC IPA web portal.

+

Security Incidents

+

If you think there has been a computer security incident, you should contact the ULHPC Team and the University CISO team as soon as possible:

+
+

To: hpc-team@uni.lu,laurent.weber@uni.lu

+

Subject: Security Incident for HPC account '<login>' (ADAPT accordingly)

+
+

Please save any evidence of the break-in and include as many details as possible in your communication with us.

+
+

How to Get a New Project account?

+

Projects are defined for accounting purposes and are associated with a set of user accounts allowed by the project PI to access its data and to submit jobs on behalf of the project account. See Slurm Account Hierarchy.

+

You can request to be added (or be automatically added) to project accounts for accounting purposes. For more information, please see the Project Account documentation.

+
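To check which project accounts your login is already attached to (and can therefore submit jobs against), you can query the Slurm accounting database from a cluster frontend -- a minimal sketch using the standard sacctmgr client:

# List the Slurm associations (accounts and QOS) attached to your user
sacctmgr show user $USER withassoc format=user%20,account%25,qos%40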

FAQ

+

Can I share an account? – Account Security Policies

+
+

Danger

+

The sharing of passwords or login credentials is not allowed under UL HPC and University information security policies. Please bear in mind that this policy also protects the end-user.

+
+

Sharing credentials removes auditability and accountability for the account holder in case of account misuse. Accounts in violation of this policy may be disabled or otherwise limited; accounts knowingly skirting this policy may be banned.

+

If you need to share resources among multiple individuals, shared projects are the way to go. Remember that the University extends access to its HPC resources (i.e., the facility and expert HPC consultants) to the scientific staff of national public organizations and to external partners for the duration of joint research projects, under the conditions defined above.

+

When in doubt, please contact us and we will be happy to assist you with finding a safe and secure way to do so.

+
+
+
    +
  1. +

    PrivateBin is a minimalist, open source online pastebin where the server has zero knowledge of pasted data. Data is encrypted and decrypted in the browser using 256-bit AES in Galois Counter Mode. 

    +
  2. +
+
+ + + + +
+
+ + + Last update: November 13, 2024 + + +
+ + + + + + + + +
+
+
+
+ + + + +
+ + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/accounts/projects/index.html b/accounts/projects/index.html new file mode 100644 index 00000000..a911eba4 --- /dev/null +++ b/accounts/projects/index.html @@ -0,0 +1,2924 @@ + + + + + + + + + + + + + + + + + + + + + + + + Projects Accounts - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ +
+ +
+ +
+ + + + + + +
+
+ + +
+
+
+ +
+
+
+ + +
+
+
+ + +
+
+
+ + +
+
+ + + + + + + + + + +

Projects Accounts

+

Shared project in the Global project directory.

+

We can set up a dedicated project directory for you on the GPFS/SpectrumScale filesystem for sharing research data with your colleagues.

+

To create a new project directory, or to add or remove members of the group granted access to the project data, use the Service Now HPC Support Portal.

+

Service Now HPC Support Portal

+
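Once the directory has been created, you can verify its group ownership and your membership from any cluster node. A minimal sketch, assuming the usual layout where shared projects are hosted under /work/projects/<name> (adapt the path to what is communicated for your project):

# /!\ ADAPT project acronym/name <name> accordingly -- the path below is illustrative
ls -ld /work/projects/<name>        # owning group and permissions of the project directory
id -nG | tr ' ' '\n' | grep <name>  # confirm your login belongs to the project group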

Data Storage Charging

+ + + + +

Slurm Project Account

+

As explained in the Slurm Account Hierarchy, project accounts can be created at the L3 level of the association tree.

+

To quickly list a given project account and the users attached to it, you can use the sassoc helper function:

+
# /!\ ADAPT project acronym/name <name> accordingly
+sassoc project_<name>
+
+ +

Alternatively, you can rely on sacctmgr, typically coupled with the withassoc attribute:

+
# /!\ ADAPT project acronym/name <name> accordingly
+sacctmgr show account where name=project_<name> format="account%20,user%20,Share,QOS%50" withassoc
+
+ +

As per the HPC Resource Allocations for Research Projects policy, the creation of such project accounts is mandatory for funded research projects, since usage may be charged and detailed reports will be provided for auditing purposes.

+
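To keep an eye on the usage accumulated against a project account (the figures used in such reporting), the standard Slurm sshare utility can be queried -- a minimal sketch reusing the project_<name> naming above:

# /!\ ADAPT project acronym/name <name> accordingly
# Shares and accumulated raw usage of the project account and its members
sshare -a -A project_<name> --format=Account,User,RawShares,RawUsage,FairShare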

With the help of the University Research Support department, we will automatically create project accounts from the list of accepted projects which acknowledge the need for computing resources. Feel free nevertheless to use the Service Now HPC Support Portal to request the creation of a new project account, or to add/remove members of the group -- this may be pertinent for internal research projects or for specific collaborations with external partners requiring separate usage monitoring.

+
+

Important

+

A project account is the natural way to access higher-priority QOS not granted by default to your personal account on the ULHPC. For instance, the high QOS is automatically granted as soon as the project makes a contribution to the HPC budget line.

+
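To actually charge a job to the project account -- and thus benefit from any QOS granted to it -- pass the account explicitly at submission time. A minimal sketch using standard Slurm options (the launcher script name is illustrative):

# /!\ ADAPT project acronym/name <name> and the granted QOS accordingly
sbatch -A project_<name> --qos high launcher.sh            # batch job charged to the project
srun   -A project_<name> --qos high -N 1 -n 1 --pty bash -i # interactive job charged to the project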
+ + + + +
+
+ + + Last update: November 13, 2024 + + +
+ + + + + + + + +
+
+
+
+ + + + +
+ \ No newline at end of file
diff --git a/assets/images/favicon.png b/assets/images/favicon.png new file mode 100644 index 00000000..1cf13b9f Binary files /dev/null and b/assets/images/favicon.png differ
diff --git a/assets/javascripts/bundle.23546af0.min.js b/assets/javascripts/bundle.23546af0.min.js new file mode 100644 index 00000000..b9efe0a6 (generated, minified theme JavaScript bundle -- content omitted)
diff --git a/assets/javascripts/bundle.23546af0.min.js.map b/assets/javascripts/bundle.23546af0.min.js.map new file mode 100644 index 00000000..37ec9fbe (generated source map -- content omitted)
ACF8B,EAAGgB,QAEHhB,EAAGiB,OAYA,SAASC,EACdlB,GAEA,OAAO,OAAAmB,EAAA,GACL,OAAA9B,EAAA,GAAsBW,EAAI,SAC1B,OAAAX,EAAA,GAAsBW,EAAI,SAEzBT,KACC,OAAA6B,EAAA,IAAI,SAAC,GAAa,MAAS,UAApB,EAAAC,QACP,OAAAC,EAAA,GAAUtB,IAAOE,KACjB,OAAAqB,EAAA,GAAY,ICjBX,SAASC,EAAiBxB,GAC/B,MAAO,CACLyB,EAAGzB,EAAG0B,WACNC,EAAG3B,EAAG4B,WAaH,SAASC,EACd7B,GAEA,OAAO,OAAAmB,EAAA,GACL,OAAA9B,EAAA,GAAUW,EAAI,UACd,OAAAX,EAAA,GAAUN,OAAQ,WAEjBQ,KACC,OAAA6B,EAAA,IAAI,WAAM,OAAAI,EAAiBxB,MAC3B,OAAAsB,EAAA,GAAUE,EAAiBxB,IAC3B,OAAAuB,EAAA,GAAY,IC3CX,SAASO,EACd9B,GAEA,KAAIA,aAAc+B,kBAGhB,MAAM,IAAIC,MAAM,mBAFhBhC,EAAGiC,S,2BC8BA,SAASC,EACdlC,GAEA,OAAO,OAAAmC,EAAA,IAA8B,SAAAC,GACnC,IAAI,KAAe,SAAC,G,IAAGC,EAAH,iBAAG,GAAAA,YAAmB,OAAAD,EAAK,CAC7CE,MAAQC,KAAKC,MAAMH,EAAYC,OAC/BG,OAAQF,KAAKC,MAAMH,EAAYI,aAE9BC,QAAQ1C,MAEVT,KACC,OAAA+B,EAAA,GA3BC,SAAwBtB,GAC7B,MAAO,CACLsC,MAAQtC,EAAG2C,YACXF,OAAQzC,EAAG4C,cAwBCC,CAAe7C,IACzB,OAAAuB,EAAA,GAAY,I,qBC7BX,SAASuB,EAAwB9C,GACtC,OAAQA,EAAGU,SAGT,IAAK,QACL,IAAK,SACL,IAAK,WACH,OAAO,EAGT,QACE,OAAOV,EAAG+C,mBAWT,SAASC,IACd,OAAO,OAAA3D,EAAA,GAAyBN,OAAQ,WACrCQ,KACC,OAAA0D,EAAA,IAAO,SAAAC,GAAM,QAAEA,EAAGC,SAAWD,EAAGE,YAChC,OAAAhC,EAAA,IAAI,SAAA8B,GAAM,OACR7B,KAAM6B,EAAG1E,IACT6E,MAAK,WACHH,EAAGI,iBACHJ,EAAGK,uBAGP,OAAAC,EAAA,M,YClCC,SAASC,EAAYC,GAC1BC,SAASC,KAAOF,EAAIE,KAaf,SAASC,EACdH,EACAI,GAEA,YAFA,IAAAA,MAAA,UAEOJ,EAAIK,OAASD,EAAIC,MACjB,iCAAiCC,KAAKN,EAAIO,UAW5C,SAASC,EACdR,EACAI,GAEA,YAFA,IAAAA,MAAA,UAEOJ,EAAIO,WAAaH,EAAIG,UACrBP,EAAIS,KAAKvI,OAAS,EAUpB,SAASwI,IACd,OAAO,IAAIC,EAAA,EAtDJ,IAAIC,IAAIX,SAASC,O,aCInB,SAASW,EACdC,EAAc,GAEd,OAFgB,EAAAC,UAGblF,KACC,OAAAmF,EAAA,GAAK,GACL,OAAAtD,EAAA,IAAI,SAAC,G,IAAEwC,EAAA,EAAAA,KAAW,WAAIU,IAAIE,EAAMZ,GAC7Be,WACAC,QAAQ,MAAO,OAElB,OAAArD,EAAA,GAAY,ICjBX,SAASsD,IACd,OAAOlB,SAASQ,KAAKW,UAAU,GAa1B,SAASC,EAAgBZ,GAC9B,IAAMnE,EAAKS,EAAc,KACzBT,EAAG4D,KAAOO,EACVnE,EAAGgF,iBAAiB,SAAS,SAAA9B,GAAM,OAAAA,EAAGK,qBACtCvD,EAAGiF,QAUE,SAASC,IACd,OAAO,OAAA7F,EAAA,GAA2BN,OAAQ,cACvCQ,KACC,OAAA6B,EAAA,GAAIyD,GACJ,OAAAvD,EAAA,GAAUuD,KACV,OAAA5B,EAAA,IAAO,SAAAkB,GAAQ,OAAAA,EAAKvI,OAAS,KAC7B,OAAA4H,EAAA,MClCC,SAAS2B,EAAWC,GACzB,IAAMC,EAAQC,WAAWF,GACzB,OAAO,OAAAjD,EAAA,IAA0B,SAAAC,GAC/B,OAAAiD,EAAME,aAAY,WAAM,OAAAnD,EAAKiD,EAAMG,eAElCjG,KACC,OAAA+B,EAAA,GAAU+D,EAAMG,SAChB,OAAAjE,EAAA,GAAY,ICElB,IAAMkE,EAA4C,CAChDC,OAAQ3F,EAAkB,2BAC1B4F,OAAQ5F,EAAkB,4BAcrB,SAAS6F,EAAUnI,GACxB,OAAOgI,EAAQhI,GAAMoI,QAchB,SAASC,EAAUrI,EAAcS,GAClCuH,EAAQhI,GAAMoI,UAAY3H,GAC5BuH,EAAQhI,GAAMwH,QAYX,SAASc,EAAYtI,GAC1B,IAAMuC,EAAKyF,EAAQhI,GACnB,OAAO,OAAA4B,EAAA,GAAUW,EAAI,UAClBT,KACC,OAAA6B,EAAA,IAAI,WAAM,OAAApB,EAAG6F,WACb,OAAAvE,EAAA,GAAUtB,EAAG6F,U,oBC9CZ,SAASG,IACd,MAAO,CACLvE,EAAGc,KAAK0D,IAAI,EAAGC,aACfvE,EAAGY,KAAK0D,IAAI,EAAGE,cASZ,SAASC,EACd,G,IAAE3E,EAAA,EAAAA,EAAGE,EAAA,EAAAA,EAEL5C,OAAOsH,SAAS5E,GAAK,EAAGE,GAAK,GClBxB,SAAS2E,IACd,MAAO,CACLhE,MAAQiE,WACR9D,OAAQ+D,aCwBL,SAASC,IACd,OAAO,OAAAC,EAAA,GAAc,CFCd,OAAAvF,EAAA,GACL,OAAA9B,EAAA,GAAUN,OAAQ,SAAU,CAAE4H,SAAS,IACvC,OAAAtH,EAAA,GAAUN,OAAQ,SAAU,CAAE4H,SAAS,KAEtCpH,KACC,OAAA6B,EAAA,GAAI4E,GACJ,OAAA1E,EAAA,GAAU0E,MCpBP,OAAA3G,EAAA,GAAUN,OAAQ,SAAU,CAAE4H,SAAS,IAC3CpH,KACC,OAAA6B,EAAA,GAAIkF,GACJ,OAAAhF,EAAA,GAAUgF,QCcX/G,KACC,OAAA6B,EAAA,IAAI,SAAC,G,IAAA,mBAAmB,OAAGwF,OAArB,KAA6BC,KAArB,SACd,OAAAtF,EAAA,GAAY,IAYX,SAASuF,EACd9G,EAAiB,G,IAAE+G,EAAA,EAAAA,QAASC,EAAA,EAAAA,UAEtBC,EAAQD,EACXzH,KACC,OAAA2H,EAAA,GAAwB,SAItBC,EAAU,OAAAT,EAAA,GAAc,CAACO,EAAOF,IACnCxH,KACC,OAAA6B,EAAA,IAAI,WAAsB,OACxBK,EAAGzB,EAAGoH,WACNzF,EAAG3B,EAAGqH,eAKZ,OAAO,OAAAX,EAAA,GAAc,CAACK,EAASC,EAAWG,IACvC5H,KACC,OAAA6B,EAAA,IAAI,SAAC,G,IAAA,mBAAGqB,EAAA,KAAAA,OAAU,OAAEmE,EAAA,EAAAA,
OAAQC,EAAA,EAAAA,KAAQ,OAAEpF,EAAA,EAAAA,EAAGE,EAAA,EAAAA,EAAS,OAChDiF,OAAQ,CACNnF,EAAGmF,EAAOnF,EAAIA,EACdE,EAAGiF,EAAOjF,EAAIA,EAAIc,GAEpBoE,KAAI,MAEN,OAAAtF,EAAA,GAAY,I,uCClCX,SAAS+F,GACdC,EAAgB,G,IAAEC,EAAA,EAAAA,IAIZC,EAAM,OAAAtF,EAAA,IAA+B,SAAAC,GACzC,OAAAmF,EAAOvC,iBAAiB,UAAW5C,MAElC7C,KACC,OAAAmI,EAAA,GAAM,SAIV,OAAOF,EACJjI,KACC,OAAAoI,EAAA,IAAS,WAAM,OAAAF,IAAK,CAAEG,SAAS,EAAMC,UAAU,IAC/C,OAAAC,GAAA,IAAI,SAAAC,GAAW,OAAAR,EAAOS,YAAYD,MAClC,OAAAE,GAAA,GAAYR,GACZ,OAAAjE,EAAA,Q,+BCvCC,SAAS0E,EAASC,GACvB,MAAyB,iBAAXA,GACgB,iBAAhBA,EAAO3D,MACa,iBAApB2D,EAAOC,UACW,iBAAlBD,EAAOxC,O,iQCRvB,SAASlF,EAAcC,GACrB,OAAQA,GAGN,IAAK,MACL,IAAK,OACH,OAAOpB,SAAS+I,gBAAgB,6BAA8B3H,GAGhE,QACE,OAAOpB,SAASmB,cAAcC,IAWpC,SAAS4H,EACPtI,EAA8BvC,EAAcS,GAC5C,OAAQT,GAGN,IAAK,QACH,MAGF,IAAK,UACL,IAAK,IACkB,kBAAVS,EACT8B,EAAGuI,eAAe,KAAM9K,EAAMS,GACvBA,GACP8B,EAAGuI,eAAe,KAAM9K,EAAM,IAChC,MAGF,QACuB,kBAAVS,EACT8B,EAAGsI,aAAa7K,EAAMS,GACfA,GACP8B,EAAGsI,aAAa7K,EAAM,KAU9B,SAAS+K,EACPxI,EAA8ByI,G,QAI9B,GAAqB,iBAAVA,GAAuC,iBAAVA,EACtCzI,EAAG0I,WAAaD,EAAM9D,gBAGjB,GAAI8D,aAAiBE,KAC1B3I,EAAGwI,YAAYC,QAGV,GAAInI,MAAMsI,QAAQH,G,IACvB,IAAmB,kBAAAA,GAAK,+BACtBD,EAAYxI,EADC,U,kGAkBZ,SAAS6I,EACdnI,EAAiBoI,G,gBAA+B,oDAEhD,IAAM9I,EAAKS,EAAcC,GAGzB,GAAIoI,E,IACF,IAAmB,yBAAAC,EAAA,GAAKD,IAAW,+BAA9B,IAAME,EAAI,QACbV,EAAatI,EAAIgJ,EAAMF,EAAWE,K,qGAGtC,IAAoB,kBAAAC,GAAQ,+BAAvB,IAAMR,EAAK,QACdD,EAAYxI,EAAIyI,I,iGAGlB,OAAOzI,E,oBCrHF,SAASkJ,EACd1K,EAAa2K,GAEb,OAAO,OAAAC,EAAA,IAAM,WACX,IAAMhO,EAAOiO,eAAeC,QAAQ9K,GACpC,GAAIpD,EACF,OAAO,OAAAmO,EAAA,GAAGC,KAAKC,MAAMrO,IAIrB,IAAMsO,EAASP,IAUf,OATAO,EAAOjK,WAAU,SAAAvB,GACf,IACEmL,eAAeM,QAAQnL,EAAKgL,KAAKI,UAAU1L,IAC3C,MAAO2L,QAMJH,K,ICdTI,E,OAcG,SAASC,EAAUvL,EAAmBN,GAC3C,QAAoB,IAAT4L,EAAsB,CAC/B,IAAM9J,EAAK,YAAkB,WAC7B8J,EAAON,KAAKC,MAAMzJ,EAAGgK,aAEvB,QAAyB,IAAdF,EAAKtL,GACd,MAAM,IAAIyB,eAAe,wBAAwBzB,GAEnD,YAAwB,IAAVN,EACV4L,EAAKtL,GAAKoG,QAAQ,IAAK1G,GACvB4L,EAAKtL,GAgBJ,SAASyL,EAAS/L,EAAeQ,GACtC,IAAIhD,EAAIgD,EACR,GAAIR,EAAMtC,OAASF,EAAG,CACpB,KAAoB,MAAbwC,EAAMxC,MAAgBA,EAAI,IACjC,OAAUwC,EAAM4G,UAAU,EAAGpJ,GAAE,MAEjC,OAAOwC,EAmBF,SAASsE,EAAMtE,GACpB,OAAIA,EAAQ,MAEEA,EAAQ,MAAY,KAAMgM,WADpBhM,EAAQ,KAAO,IAAO,KACa,IAE9CA,EAAMyG,WAaV,SAASR,EAAKjG,GAEjB,IADA,IAAI2K,EAAI,EACCnN,EAAI,EAAGyO,EAAMjM,EAAMtC,OAAQF,EAAIyO,EAAKzO,IAC3CmN,GAAOA,GAAK,GAAKA,EAAK3K,EAAMkM,WAAW1O,GACvCmN,GAAK,EAEP,OAAOA,I,+BC1IX,o5B,2aCwDO,SAASwB,EACd,G,IAAElL,EAAA,EAAAA,UAAWmL,EAAA,EAAAA,QAEb,IAAK,gBACH,OAAO,IAGTnL,EAAUM,WAAU,WACH,YAAY,cACpB8K,SAAQ,SAACC,EAAOC,GACrB,IAAMC,EAASF,EAAMG,cACrBD,EAAOE,GAAK,UAAUH,EACtBC,EAAOG,aAAa,YAAsBH,EAAOE,IAAKJ,SAK1D,IAAMM,EAAa,OAAA3I,EAAA,IAAoC,SAAAC,GACrD,IAAI,EAAY,iBAAiB2I,GAAG,UAAW3I,MAE9C7C,KACC,OAAAiE,EAAA,MAYJ,OARAsH,EACGvL,KACC,OAAAuI,EAAA,IAAI,SAAA5E,GAAM,OAAAA,EAAG8H,oBACb,OAAAxL,EAAA,GAAM,YAAU,sBAEfC,UAAU6K,GAGRQ,E,4DClCF,SAASG,EACd,G,IAAEC,QAAA,YAAAA,SAEIZ,EAAU,IAAIa,EAAA,EAGdC,EAAS,YAAc,OA4B7B,OA3BAA,EAAOC,UAAUC,IAAI,YAAa,cAGlChB,EACG/K,KACC,OAAAgM,EAAA,IAAU,SAAAC,GAAQ,cAAAjC,EAAA,GAAGjK,SAASmM,MAC3BlM,KACC,OAAA6B,EAAA,IAAI,SAAAsK,GAAa,OAAAA,EAAUlD,YAAY4C,MACvC,OAAAO,EAAA,GAAUC,EAAA,GACV,OAAAC,EAAA,GAAM,GACN,OAAA/D,EAAA,IAAI,SAAA9H,GACFA,EAAG0I,UAAY8C,EACfxL,EAAGsI,aAAa,gBAAiB,WAEnC,OAAAuD,EAAA,GAAMX,GAAY,KAClB,OAAApD,EAAA,IAAI,SAAA9H,GAAM,OAAAA,EAAG8L,gBAAgB,oBAC7B,OAAAD,EAAA,GAAM,KACN,OAAA/D,EAAA,IAAI,SAAA9H,GACFA,EAAG0I,UAAY,GACf1I,EAAG+L,iBAKRtM,YAGE6K,E,wHCYF,SAAS0B,EACd,G,IAAE7M,EAAA,EAAAA,UAAW6H,EAAA,EAAAA,UAAWvC,EAAA,EAAAA,UAIpB,sBAAuBwH,UACzBA,QAAQC,kBAAoB,UAG9B,OAAA7M,EAAA,GAAUN,OAAQ,gBACfU,WAAU,WACTwM,QAAQC,kBAAoB,UAIhC,IAAMC,EAAU,YAA4B,kCACrB,IAAZA,IACTA,EAAQvI,KAAOuI
,EAAQvI,MAGzB,IAAMwI,EAAS,OAAA/M,EAAA,GAAsBC,SAASmM,KAAM,SACjDlM,KACC,OAAA0D,EAAA,IAAO,SAAAC,GAAM,QAAEA,EAAGC,SAAWD,EAAGE,YAChC,OAAAmI,EAAA,IAAU,SAAArI,GACR,GAAIA,EAAGrC,kBAAkBT,YAAa,CACpC,IAAMJ,EAAKkD,EAAGrC,OAAOwL,QAAQ,KAC7B,GAAIrM,IAAOA,EAAGa,QAAU,YAAgBb,GAGtC,OAFK,YAAiBA,IACpBkD,EAAGI,iBACE,OAAAiG,EAAA,GAAGvJ,GAGd,OAAO,OAET,OAAAoB,EAAA,IAAI,SAAApB,GAAM,OAAG0D,IAAK,IAAIY,IAAItE,EAAG4D,UAC7B,OAAAJ,EAAA,MAIJ4I,EAAO3M,WAAU,WACf,YAAU,UAAU,MAItB,IAAM6M,EAAQF,EACX7M,KACC,OAAA0D,EAAA,IAAO,SAAC,G,IAAES,EAAA,EAAAA,IAAU,OAAC,YAAiBA,MACtC,OAAAF,EAAA,MAIE+I,EAAO,OAAAlN,EAAA,GAAyBN,OAAQ,YAC3CQ,KACC,OAAA0D,EAAA,IAAO,SAAAC,GAAM,OAAa,OAAbA,EAAGsJ,SAChB,OAAApL,EAAA,IAAI,SAAA8B,GAAM,OACRQ,IAAK,IAAIY,IAAIX,SAASC,MACtBgD,OAAQ1D,EAAGsJ,UAEb,OAAAhJ,EAAA,MAIJ,OAAArC,EAAA,GAAMmL,EAAOC,GACVhN,KACC,OAAAkN,EAAA,IAAqB,SAACC,EAAMtK,GAAS,OAAAsK,EAAKhJ,IAAIE,OAASxB,EAAKsB,IAAIE,QAChE,OAAA8D,EAAA,GAAM,QAELjI,UAAUgF,GAGf,IAAMkI,EAAQlI,EACXlF,KACC,OAAA2H,EAAA,GAAwB,YACxB,OAAA0F,EAAA,GAAK,GACL,OAAArB,EAAA,IAAU,SAAA7H,GAAO,cAAAmJ,EAAA,GAAK,CACpBnJ,IAAKA,EAAIE,KACTkJ,aAAc,OACdC,iBAAiB,IAEhBxN,KACC,OAAAyN,EAAA,IAAW,WAET,OADA,YAAYtJ,GACL,YAOjB4I,EACG/M,KACC,OAAA0N,EAAA,GAAON,IAENlN,WAAU,SAAC,G,IAAEiE,EAAA,EAAAA,IACZuI,QAAQiB,UAAU,GAAI,GAAIxJ,EAAIiB,eAIpC,IAAMwI,EAAM,IAAIC,UAChBT,EACGpN,KACC,OAAA6B,EAAA,IAAI,SAAC,G,IAAEiM,EAAA,EAAAA,SAAe,OAAAF,EAAIG,gBAAgBD,EAAU,iBAEnD5N,UAAUN,GAGf,IAAMoO,EAAW,OAAApM,EAAA,GAAMmL,EAAOC,GAC3BhN,KACC,OAAA0N,EAAA,GAAO9N,IAIXoO,EAAS9N,WAAU,SAAC,G,IAAEiE,EAAA,EAAAA,IAAKkD,EAAA,EAAAA,OACrBlD,EAAIS,OAASyC,EACf,YAAgBlD,EAAIS,MAEpB,YAAkByC,GAAU,CAAEjF,EAAG,OAKrC4L,EACGhO,KACC,OAAAiO,EAAA,GAAerO,IAEdM,WAAU,SAAC,G,QAAG,EAAH,iBAAG,GAAEgO,EAAA,EAAAA,MAAOC,EAAA,EAAAA,KACtBpO,SAASqO,cAAc,IAAIC,YAAY,qBACvCtO,SAASmO,MAAQA,E,IAGjB,IAAuB,mBACrB,wBACA,sBACA,6BACD,8BAAE,CAJE,IAAM9N,EAAQ,QAKXyC,EAAO,YAAWzC,EAAU+N,GAC5BhB,EAAO,YAAW/M,EAAUL,SAASoO,WAEzB,IAATtL,QACS,IAATsK,GAEP,YAAeA,EAAMtK,I,qGAM/B4E,EACGzH,KACC,OAAAsO,EAAA,GAAa,KACb,OAAA3G,EAAA,GAAwB,WAEvBzH,WAAU,SAAC,G,IAAEmH,EAAA,EAAAA,OACZqF,QAAQ6B,aAAalH,EAAQ,OAInC,OAAAzF,EAAA,GAAMiL,EAAQG,GACXhN,KACC,OAAAwO,EAAA,GAAY,EAAG,GACf,OAAA9K,EAAA,IAAO,SAAC,G,IAAA,mBAACyJ,EAAA,KAAMtK,EAAA,KACb,OAAOsK,EAAKhJ,IAAIO,WAAa7B,EAAKsB,IAAIO,WAC9B,YAAiB7B,EAAKsB,QAEhC,OAAAtC,EAAA,IAAI,SAAC,GAAc,OAAd,iBAAG,OAEP3B,WAAU,SAAC,G,IAAEmH,EAAA,EAAAA,OACZ,YAAkBA,GAAU,CAAEjF,EAAG,O,WCrLlC,SAASqM,IACd,IAAMC,EAAY,cACf1O,KACC,OAAA6B,EAAA,IAAmB,SAAA5C,GAAO,OAAC,WAAD,CAAC,CACzBJ,KAAM,YAAU,UAAY,SAAW,UACpCI,MAEL,OAAAyE,EAAA,IAAO,SAAC,GACN,GAAa,WADL,EAAA7E,KACe,CACrB,IAAM8P,EAAS,cACf,QAAsB,IAAXA,EACT,OAAQ,YAAwBA,GAEpC,OAAO,KAET,OAAA1K,EAAA,MA4FJ,OAxFAyK,EACG1O,KACC,OAAA0D,EAAA,IAAO,SAAC,GAAa,MAAS,WAApB,EAAA7E,QACV,OAAAoP,EAAA,GACE,uBAAa,gBACb,uBAAa,mBAGd/N,WAAU,SAAC,G,IAAA,mBAACjB,EAAA,KAAK4G,EAAA,KAAO3I,EAAA,KACjByR,EAAS,cACf,OAAQ1P,EAAI6C,MAGV,IAAK,QACC6M,IAAW9I,GACb5G,EAAI6E,QACN,MAGF,IAAK,SACL,IAAK,MACH,YAAU,UAAU,GACpB,YAAgB+B,GAAO,GACvB,MAGF,IAAK,UACL,IAAK,YACH,QAAsB,IAAX8I,EACT,YAAgB9I,OACX,CACL,IAAM+I,EAAM,aAAC/I,GAAU,YAAY,SAAU3I,IACvCf,EAAI6G,KAAK0D,IAAI,GACjB1D,KAAK0D,IAAI,EAAGkI,EAAIC,QAAQF,IAAWC,EAAIvS,QACxB,YAAb4C,EAAI6C,MAAsB,EAAI,IAE9B8M,EAAIvS,QACR,YAAgBuS,EAAIzS,IAItB8C,EAAI6E,QACJ,MAGF,QACM+B,IAAU,eACZ,YAAgBA,OAK5B6I,EACG1O,KACC,OAAA0D,EAAA,IAAO,SAAC,GAAa,MAAS,WAApB,EAAA7E,QACV,OAAAoP,EAAA,GAAe,uBAAa,kBAE3B/N,WAAU,SAAC,G,IAAA,mBAACjB,EAAA,KAAK4G,EAAA,KAChB,OAAQ5G,EAAI6C,MAGV,IAAK,IACL,IAAK,IACL,IAAK,IACH,YAAgB+D,GAChB,YAAoBA,GACpB5G,EAAI6E,QACJ,MAGF,IAAK,IACL,IAAK,IACH,IAAMqJ,EAAO,YAAW,yBACJ,IAATA,GACTA,EAAKzH,QACP,MAGF,IAAK,IACL,IAAK,IACH,IAAM7C,EAAO,YAAW,yBACJ,IAATA,GACTA,EAAK6C
,YAMVgJ,E,+CClMT,wEAiFII,EAjFJ,qEAgGO,SAASC,EACdC,EAAoB,G,IAAEpP,EAAA,EAAAA,UAEtBkP,EAAclP,EACXI,KAGC,aAAI,SAAAD,GAAY,OAAAiP,EAAMC,QAAqB,SAACC,EAAYhR,G,MAChDuC,EAAK,YAAW,sBAAsBvC,EAAI,IAAK6B,GACrD,OAAO,2BACFmP,QACc,IAAPzO,IAAoB,MAAIvC,GAAOuC,EAAE,GAAK,MAEjD,OAGH,aAAK,SAAC0M,EAAMtK,G,YACV,IAAmB,kBAAAmM,GAAK,8BAAE,CAArB,IAAM9Q,EAAI,QACb,OAAQA,GAGN,IAAK,WACL,IAAK,eACL,IAAK,YACL,IAAK,OACCA,KAAQiP,QAA8B,IAAfA,EAAKjP,KAC9B,YAAeiP,EAAKjP,GAAQ2E,EAAK3E,IACjCiP,EAAKjP,GAAQ2E,EAAK3E,IAEpB,MAGF,aAC4B,IAAf2E,EAAK3E,GACdiP,EAAKjP,GAAQ,YAAW,sBAAsBA,EAAI,YAE3CiP,EAAKjP,K,iGAGpB,OAAOiP,KAIT,YAAY,IAsBX,SAASgC,EACdjR,GAEA,OAAO4Q,EACJ9O,KACC,aAAU,SAAAkP,GAAc,YACM,IAArBA,EAAWhR,GACd,YAAGgR,EAAWhR,IACd,OAEN,iB,+BC3IC,SAASkR,EACd3O,EAAiB9B,GAEjB8B,EAAGsI,aAAa,gBAAiBpK,EAAQ,OAAS,IAQ7C,SAAS0Q,EACd5O,GAEAA,EAAG8L,gBAAgB,iBAWd,SAAS+C,EACd7O,EAAiB9B,GAEjB8B,EAAGqL,UAAUyD,OAAO,uBAAwB5Q,GAQvC,SAAS6Q,EACd/O,GAEAA,EAAGqL,UAAUU,OAAO,wBAvEtB,yI,kCCAA,gW,gLC+BMiD,EACO,uBAuBN,SAASC,EACdrE,GAEA,OACE,WADK,CACL,UACEsE,MAAOF,EACPvB,MAAO,YAAU,kBAAiB,wBACX,IAAI7C,EAAE,WAE7B,mBAAKuE,MAAM,6BAA6BC,QAAQ,aAC9C,oBAAM5R,EAxBZ,iI,WCTI,EACK,yBADL,EAEK,yBAFL,EAGK,gEAHL,EAIK,4BAJL,EAKK,0BALL,EAMK,2BA4BJ,SAAS6R,EACd,G,IAAEC,EAAA,EAAAA,QAASC,EAAA,EAAAA,SAILC,EACJ,WADW,CACX,OAAKN,MAAM,kCACT,mBAAKC,MAAM,6BAA6BC,QAAQ,aAC9C,oBAAM5R,EA3BZ,+aAiCMyL,EAAW,aAACqG,GAAYC,GAAUnO,KAAI,SAAA9B,GAClC,IAAAqE,EAAA,EAAAA,SAAU8J,EAAA,EAAAA,MAAOjC,EAAA,EAAAA,KACzB,OACE,WADK,CACL,KAAG5H,KAAMD,EAAUuL,MAAO,EAAUO,UAAW,GAC7C,uBAASP,MAAO,WAAY5P,EAAW,EAAc,KAChD,WAAYA,IAAakQ,EAC5B,kBAAIN,MAAO,GAAYzB,GACtBjC,EAAK5P,OAAS,GAAK,iBAAGsT,MAAO,GAAa,YAAS1D,EAAM,WAOlE,OACE,WADK,CACL,MAAI0D,MAAO,GACRjG,GChEP,IAAM,EACG,mBADH,EAEG,kBAcF,SAASyG,EACdC,GAEA,IAAM1G,EAAW0G,EAAMvO,KAAI,SAAAwO,GAAQ,OACjC,WADiC,CACjC,MAAIV,MAAO,GAAWU,MAExB,OACE,WADK,CACL,MAAIV,MAAO,GACRjG,GCzBP,IAAM,EACK,yBADL,EAEK,oBAcJ,SAAS4G,EACdC,GAEA,OACE,WADK,CACL,OAAKZ,MAAO,GACV,mBAAKA,MAAO,GACTY,M,6BCrBF,SAASC,EACd/P,EAAiB9B,GAEjB8B,EAAGgQ,MAAMC,IAAS/R,EAAK,KAQlB,SAASgS,EACdlQ,GAEAA,EAAGgQ,MAAMC,IAAM,GAWV,SAASE,EACdnQ,EAAiB9B,GAEjB8B,EAAGgQ,MAAMvN,OAAYvE,EAAK,KAQrB,SAASkS,EACdpQ,GAEAA,EAAGgQ,MAAMvN,OAAS,GAvEpB,yI,wCCAA,uT,6PCwGA,WA2BE,WAAmB,G,IAAE0F,EAAA,EAAAA,OAAQkI,EAAA,EAAAA,KAAMC,EAAA,EAAAA,SAAU7F,EAAA,EAAAA,MAC3C8F,KAAKC,UC/DF,SACLH,G,QAEMG,EAAY,IAAIC,I,IACtB,IAAkB,kBAAAJ,GAAI,8BAAE,CAAnB,IAAMK,EAAG,QACN,uCAACC,EAAA,KAAMxM,EAAA,KAGPR,EAAW+M,EAAI/M,SACf8J,EAAWiD,EAAIjD,MAGfjC,EAAO,EAAWkF,EAAIlF,MACzB5G,QAAQ,mBAAoB,IAC5BA,QAAQ,OAAQ,KAGnB,GAAIT,EAAM,CACR,IAAMuG,EAAS8F,EAAU1S,IAAI6S,GAGxBjG,EAAOkG,OAOVJ,EAAUK,IAAIlN,EAAU,CACtBA,SAAQ,EACR8J,MAAK,EACLjC,KAAI,EACJd,OAAM,KAVRA,EAAO+C,MAASiD,EAAIjD,MACpB/C,EAAOc,KAASA,EAChBd,EAAOkG,QAAS,QAclBJ,EAAUK,IAAIlN,EAAU,CACtBA,SAAQ,EACR8J,MAAK,EACLjC,KAAI,EACJoF,QAAQ,K,iGAId,OAAOJ,EDiBYM,CAAuBT,GACxCE,KAAKQ,UEvEF,SACL5I,GAEA,IAAM6I,EAAY,IAAIC,OAAO9I,EAAO6I,UAAW,OACzCD,EAAY,SAACG,EAAY9V,EAAc+V,GAC3C,OAAU/V,EAAI,OAAO+V,EAAI,SAI3B,OAAO,SAACjT,GACNA,EAAQA,EACL0G,QAAQ,eAAgB,KACxBwM,OAGH,IAAMC,EAAQ,IAAIJ,OAAO,MAAM9I,EAAO6I,UAAS,KAC7C9S,EACG0G,QAAQ,uBAAwB,QAChCA,QAAQoM,EAAW,KAAI,IACvB,OAGL,OAAO,SAAA1R,GAAY,OAAC,WAAD,CAAC,eACfA,GAAQ,CACXmO,MAAOnO,EAASmO,MAAM7I,QAAQyM,EAAON,GACrCvF,KAAOlM,EAASkM,KAAK5G,QAAQyM,EAAON,OF8CrBO,CAAuBnJ,GAItCoI,KAAK9F,WADc,IAAVA,EACI8G,MAAK,W,cAChBjB,EAAWA,GAAY,CAAC,UAAW,kBAGnCC,KAAKD,SAASkB,Q,IACd,IAAiB,kBAAAlB,GAAQ,+BAApB,IAAMmB,EAAE,QACXlB,KAAKD,SAAShF,IAAIiG,KAAKE,K,iGAGE,IAAvBtJ,EAAO2B,KAAKlO,QAAmC,OAAnBuM,EAAO2B,KAAK,GAC1CyG,KAAKmB,IAAKH,KAAapJ,EAAO2B,KAAK,KAC1B3B,EAAO2B,KAAKlO,OAAS,GAC9B2U,KAAKmB,KAAK,EAAAH,MAAaI,cAAa,oBAAIxJ,EAAO2B,QAIjDyG,KAAKqB,MAAM,Q
AAS,CAAEC,MAAO,MAC7BtB,KAAKqB,MAAM,QACXrB,KAAKzM,IAAI,Y,IAGT,IAAkB,kBAAAuM,GAAI,+BAAjB,IAAMK,EAAG,QACZH,KAAKjF,IAAIoF,I,qGAKAa,KAAKO,MAAMC,KACL,iBAAVtH,EACHjB,KAAKC,MAAMgB,GACXA,GAqBH,YAAArF,MAAP,SAAalH,GAAb,WACE,GAAIA,EACF,IAGE,IAAM8T,EAASzB,KAAK9F,MAAM9E,OAAOzH,GAC9BsQ,QAAO,SAACyD,EAASxV,GAChB,IAAM6C,EAAW,EAAKkR,UAAU1S,IAAIrB,EAAOqH,KAC3C,QAAwB,IAAbxE,EACT,GAAI,WAAYA,EAAU,CACxB,IAAMwE,EAAMxE,EAASoL,OAAO/G,SAC5BsO,EAAQpB,IAAI/M,EAAK,YAAImO,EAAQnU,IAAIgG,IAAQ,GAAI,CAAArH,SACxC,CACCqH,EAAMxE,EAASqE,SACrBsO,EAAQpB,IAAI/M,EAAKmO,EAAQnU,IAAIgG,IAAQ,IAGzC,OAAOmO,IACN,IAAIxB,KAGH,EAAKF,KAAKQ,UAAU7S,GAG1B,OAAO,YAAI8T,GAAQ5Q,KAAI,SAAC,G,IAAA,mBAAC0C,EAAA,KAAKyL,EAAA,KAAc,OAC1CD,QAAS,EAAG,EAAKkB,UAAU1S,IAAIgG,IAC/ByL,SAAUA,EAASnO,KAAI,SAAA8Q,GACrB,OAAO,EAAG,EAAK1B,UAAU1S,IAAIoU,EAAQpO,aAKzC,MAAO+F,GAEPsI,QAAQC,KAAK,kBAAkBlU,EAAK,iCAKxC,MAAO,IA3HX,GGvDO,SAASmU,EAAiBnU,GAC/B,OAAOA,EACJ0G,QAAQ,+BAAgC,IACxCwM,OACAxM,QAAQ,WAAY,M,ICtBP0N,E,sEA2EX,SAASC,EACdxK,GAEA,OAAOA,EAAQ1G,OAASiR,EAAkBE,MAUrC,SAASC,EACd1K,GAEA,OAAOA,EAAQ1G,OAASiR,EAAkBI,MAUrC,SAASC,EACd5K,GAEA,OAAOA,EAAQ1G,OAASiR,EAAkBM,OCtE5C,SAASC,EACP,G,IAAE1K,EAAA,EAAAA,OAAQkI,EAAA,EAAAA,KAAM5F,EAAA,EAAAA,MAiBhB,OAb2B,IAAvBtC,EAAO2B,KAAKlO,QAAmC,OAAnBuM,EAAO2B,KAAK,KAC1C3B,EAAO2B,KAAO,CAAC,YAAU,wBAGF,UAArB3B,EAAO6I,YACT7I,EAAO6I,UAAY,YAAU,4BAQxB,CAAE7I,OAAM,EAAEkI,KAAI,EAAE5F,MAAK,EAAE6F,SALb,YAAU,0BACxBwC,MAAM,WACN7P,OAAO8P,EAAA,IAsBL,SAASC,EACdtP,EAAa,G,IAAEuP,EAAA,EAAAA,OAAQC,EAAA,EAAAA,MAEjB3L,EAAS,IAAI4L,OAAOzP,GAGpB8D,EAAM,IAAI2D,EAAA,EACV1D,EAAM,YAAYF,EAAQ,CAAEC,IAAG,IAClCjI,KACC,OAAAiO,EAAA,GAAe0F,GACf,OAAA9R,EAAA,IAAI,SAAC,G,YAAA,mBAAC2G,EAAA,KAASvD,EAAA,KACb,GAAImO,EAAsB5K,G,IACxB,IAAoC,kBAAAA,EAAQ3M,MAAI,8BAAE,CAAvC,cAAEkU,EAAA,EAAAA,QAASC,EAAA,EAAAA,SACpBD,EAAQ3L,SAAca,EAAI,IAAI8K,EAAQ3L,S,IACtC,IAAsB,4BAAA4L,IAAQ,+BAAzB,IAAM2C,EAAO,QAChBA,EAAQvO,SAAca,EAAI,IAAI0N,EAAQvO,U,oMAG5C,OAAOoE,KAET,OAAAxG,EAAA,GAAY,IAehB,OAXA0R,EACG1T,KACC,OAAA6B,EAAA,IAAqC,SAAAqJ,GAAS,OAC5CpJ,KAAMiR,EAAkBc,MACxBhY,KAAMyX,EAAiBpI,OAEzB,OAAAkB,EAAA,GAAU,MAETlM,UAAU+H,EAAIpF,KAAK3D,KAAK+I,IAGtB,CAAEA,IAAG,EAAEC,IAAG,ID1GnB,SAAkB6K,GAChB,qBACA,qBACA,qBACA,uBAJF,CAAkBA,MAAiB,M,6CE/BnC,gd,6CCAA,8JAsFO,SAASe,EACdrT,EAAiB,G,IAAEsT,EAAA,EAAAA,MAAOtM,EAAA,EAAAA,UAEpBuM,EAASvT,EAAG2K,cAAetD,UAClBrH,EAAG2K,cAAeA,cAAetD,UAGhD,OAAO,YAAc,CAACiM,EAAOtM,IAC1BzH,KACC,aAAI,SAAC,G,IAAA,mBAAC,OAAEqH,EAAA,EAAAA,OAAQnE,EAAA,EAAAA,OAAsBd,EAAA,YAAAA,EAIpC,MAAO,CACLc,OAJFA,EAASA,EACLF,KAAKiR,IAAID,EAAQhR,KAAK0D,IAAI,EAAGtE,EAAIiF,IACjC2M,EAGFE,KAAM9R,GAAKiF,EAAS2M,MAGxB,aAA8B,SAACG,EAAGC,GAChC,OAAOD,EAAEjR,SAAWkR,EAAElR,QACfiR,EAAED,OAAWE,EAAEF,SAevB,SAASG,EACd5T,EAAiB,G,IAAE+G,EAAA,EAAAA,QAEnB,OAAO,YAGL,YAAU,KACV,YAAeA,GACf,aAAI,SAAC,G,IAAA,mBAAC,OAAEtE,EAAA,EAAAA,OAAQgR,EAAA,EAAAA,KAAU,OAAAhR,OACxB,YAAiBzC,EAAIyC,GAGjBgR,EACF,YAAiBzT,EAAI4G,GAErB,YAAmB5G,MAIvB,aAAI,SAAC,GAAc,OAAd,iBAAC,MAGN,aAAS,WACP,YAAmBA,GACnB,YAAmBA,S,6BCjJzB,0E,6BCAA,2GAiGO,SAAS6T,EACd,G,IAAE9M,EAAA,EAAAA,QAASuM,EAAA,EAAAA,MAAOtM,EAAA,EAAAA,UAAW8M,EAAA,EAAAA,QAE7B,OAAO,YACL,aAAU,SAAA9T,GAAM,OAAA8T,EACbvU,KACC,aAAU,SAAAwU,GAGR,GAAIA,EAAQ,CACV,IAAM5F,EAAM,YAA+B,gBAAiBnO,GAGtDgU,EAAW,uBAAahU,EAAI,CAAEsT,MAAK,EAAEtM,UAAS,IACjDzH,KACC,uBAAaS,EAAI,CAAE+G,QAAO,KAIxBkN,EAAW,0BAAgB9F,EAAK,CAAEpH,QAAO,EAAEC,UAAS,IACvDzH,KACC,0BAAgB4O,IAIpB,OAAO,YAAc,CAAC6F,EAAUC,IAC7B1U,KACC,aAAI,SAAC,G,IAAA,mBAAuB,OAAG2U,QAAzB,KAAkCC,QAAzB,UAKnB,OAAO,YAAG,c,6CCjItB,6MA0FO,SAASC,EACdjG,EAA0B,G,QAAEpH,EAAA,EAAAA,QAASC,EAAA,EAAAA,UAE/B8I,EAAQ,IAAIW,I,IAClB,IAAiB,kBAAAtC,GAAG,8BAAE,CAAjB,IAAMnO,EAAE,QACL4K,EAAKyJ,mBAAmBrU,EAAGmE,KAAKW,UAAU
,IAC1CjE,EAAS,YAAW,QAAQ+J,EAAE,WACd,IAAX/J,GACTiP,EAAMe,IAAI7Q,EAAIa,I,iGAIlB,IAAMyT,EAAUvN,EACbxH,KACC,aAAI,SAAAgV,GAAU,UAAKA,EAAO9R,WAyE9B,OArEmB,YAAiBnD,SAASmM,MAC1ClM,KACC,YAAwB,UAGxB,aAAI,WACF,IAAIoR,EAA4B,GAChC,OAAO,YAAIb,GAAOtB,QAAO,SAAC/D,EAAO,GAC/B,I,IAD+B,mBAAC+J,EAAA,KAAQ3T,EAAA,KACjC8P,EAAK/U,QAAQ,CAElB,KADakU,EAAMhS,IAAI6S,EAAKA,EAAK/U,OAAS,IACjC8E,SAAWG,EAAOH,SAGzB,MAFAiQ,EAAK8D,MAQT,IADA,IAAI7N,EAAS/F,EAAOwG,WACZT,GAAU/F,EAAO8J,eAEvB/D,GADA/F,EAASA,EAAO8J,eACAtD,UAIlB,OAAOoD,EAAMoG,IACX,YAAQF,EAAO,YAAIA,EAAM,CAAA6D,KACzB5N,KAED,IAAI6J,QAIT,aAAU,SAAAhG,GAAS,mBAAc,CAAC6J,EAAStN,IACxCzH,KACC,aAAK,SAAC,EAAc,GAGlB,I,IAHI,mBAACmN,EAAA,KAAMtK,EAAA,KAAO,mBAACmR,EAAA,KAAoB5R,EAAA,YAAAA,EAGhCS,EAAKxG,QAAQ,CAElB,KADM,oBAAG,GACI2X,EAAS5R,GAGpB,MAFA+K,EAAO,YAAIA,EAAM,CAAAtK,EAAK/F,UAO1B,KAAOqQ,EAAK9Q,QAAQ,CAElB,KADM,6BAAG,GACI2X,GAAU5R,GAGrB,MAFAS,EAAO,aAACsK,EAAK+H,OAAWrS,GAO5B,MAAO,CAACsK,EAAMtK,KACb,CAAC,GAAI,YAAIqI,KACZ,aAAqB,SAACiJ,EAAGC,GACvB,OAAOD,EAAE,KAAOC,EAAE,IACXD,EAAE,KAAOC,EAAE,WAQzBpU,KACC,aAAI,SAAC,G,IAAA,mBAACmN,EAAA,KAAMtK,EAAA,KAAU,OACpBsK,KAAMA,EAAKtL,KAAI,SAAC,GAAW,OAAX,iBAAC,MACjBgB,KAAMA,EAAKhB,KAAI,SAAC,GAAW,OAAX,iBAAC,UAInB,YAAU,CAAEsL,KAAM,GAAItK,KAAM,KAC5B,YAAY,EAAG,GACf,aAAI,SAAC,G,IAAA,mBAACsR,EAAA,KAAGC,EAAA,KAGP,OAAID,EAAEhH,KAAK9Q,OAAS+X,EAAEjH,KAAK9Q,OAClB,CACL8Q,KAAMiH,EAAEjH,KAAKzN,MAAMsD,KAAK0D,IAAI,EAAGyN,EAAEhH,KAAK9Q,OAAS,GAAI+X,EAAEjH,KAAK9Q,QAC1DwG,KAAM,IAKD,CACLsK,KAAMiH,EAAEjH,KAAKzN,OAAO,GACpBmD,KAAMuR,EAAEvR,KAAKnD,MAAM,EAAG0U,EAAEvR,KAAKxG,OAAS8X,EAAEtR,KAAKxG,aAgBlD,SAAS8Y,EACdvG,GAEA,OAAO,YAGL,YAAU,KACV,aAAI,SAAC,G,QAAEzB,EAAA,EAAAA,KAAMtK,EAAA,EAAAA,K,IAGX,IAAmB,kBAAAA,GAAI,8BAAE,CAAd,IAACpC,EAAD,uBAAC,GACV,YAAkBA,GAClB,YAAgBA,I,iGAIlB0M,EAAKnC,SAAQ,SAAC,EAAME,G,IAALzK,EAAD,iBAAC,GACb,YAAgBA,EAAIyK,IAAUiC,EAAK9Q,OAAS,GAC5C,YAAcoE,GAAI,SAKtB,aAAS,W,YACP,IAAiB,kBAAAmO,GAAG,8BAAE,CAAjB,IAAMnO,EAAE,QACX,YAAkBA,GAClB,YAAgBA,I,yWCxJjB,SAAS2U,EACd,EACA,G,IADElN,EAAA,EAAAA,IAAKD,EAAA,EAAAA,IACLoN,EAAA,EAAAA,OAAQC,EAAA,EAAAA,OAAQC,EAAA,EAAAA,QAElB,OAAO,OAAAvV,EAAA,GACL,OAAAgM,EAAA,IAAU,WAGR,IAAMwJ,EAAUtN,EACblI,KACC,OAAA0D,EAAA,GAAO,KACP,OAAAzD,EAAA,GAAoB,SACpB,OAAA8B,EAAA,GAAU,YAad,OATAkG,EACGjI,KACC,OAAA0D,EAAA,GAAO,KACP,OAAAgK,EAAA,GAAO8H,GACP,OAAArQ,EAAA,GAAK,IAEJjF,UAAU+H,EAAIpF,KAAK3D,KAAK+I,IAGtB,OAAAd,EAAA,GAAc,CAACqO,EAASH,EAAQE,EAASD,IAC7CtV,KACC,OAAA6B,EAAA,IAAI,SAAC,G,IAAA,mBAA4B,OAC/B4T,OADI,KAEJ5P,MAFY,KAGZ3I,OAHmB,c,4DC3CxB,SAASwY,EACd,EAAuCC,G,IAArC1N,EAAA,EAAAA,IAEF,YAFuC,IAAA0N,MAAA,IAEhC,OAAA3V,EAAA,GACL,OAAAgM,EAAA,IAAU,SAAAvL,GACR,IAAM4U,EClBL,SACL5U,EAAsB,G,IAEhByR,QAFkB,YAAA0D,WAEA,IAGlBzL,EAAS,OAAAvI,EAAA,GACb,OAAA9B,EAAA,GAAUW,EAAI,SACd,OAAAX,EAAA,GAAUW,EAAI,SAAST,KAAK,OAAAsM,EAAA,GAAM,KAEjCtM,KACC,OAAA6B,EAAA,IAAI,WAAM,OAAAqQ,EAAGzR,EAAG9B,UAChB,OAAAoD,EAAA,GAAUmQ,EAAGzR,EAAG9B,QAChB,OAAAuO,EAAA,MAIE2I,EAAS,YAAkBpV,GAGjC,OAAO,OAAA0G,EAAA,GAAc,CAACgD,EAAQ0L,IAC3B7V,KACC,OAAA6B,EAAA,IAAI,SAAC,G,IAAA,mBAAmB,OAAGlD,MAArB,KAA4B8C,MAArB,UDJEqU,CAAiBrV,EAAIkV,GAwBpC,OArBAN,EACGrV,KACC,OAAA2H,EAAA,GAAwB,SACxB,OAAA9F,EAAA,IAAI,SAAC,G,IAAElD,EAAA,EAAAA,MAAgC,OACrCmD,KAAM,IAAkBqR,MACxBtX,KAAM8C,OAGPuB,UAAU+H,EAAIpF,KAAK3D,KAAK+I,IAG7BoN,EACGrV,KACC,OAAA2H,EAAA,GAAwB,UAEvBzH,WAAU,SAAC,G,IAAEuB,EAAA,EAAAA,MACRA,GACF,YAAU,SAAUA,MAIrB4T,M,6BE1DN,SAASU,IACd,OAAO,OAAA/V,EAAA,GACL,OAAAgM,EAAA,IAAU,SAAAvL,GAAM,OCXb,SACLA,GAEA,OAAO,OAAAX,EAAA,GAAUW,EAAI,SAClBT,KACC,OAAAC,EAAA,QAAMM,IDMQyV,CAAiBvV,GAC9BT,KACC,OAAA0I,EAAA,GAAY,YAAa,iBACzB,OAAAH,EAAA,GAAI,KACJ,OAAAtI,EAAA,QAAMM,OAGV,OAAAwB,EAAA,QAAUxB,I,4EEoBP,SAAS0V,EACdxV,EAAiByI,GAEjBzI,EAAGwI
,YAAYC,GCEV,SAASgN,EACdzV,EAAiB,G,IAAE4U,EAAA,EAAAA,OAAQc,EAAA,EAAAA,OAAQC,EAAA,EAAAA,OAE7BC,EAAO,YAAkB,0BAA2B5V,GACpD6V,EAAO,YAAkB,0BAA2B7V,GAC1D,OAAO,OAAAT,EAAA,GAGL,OAAAiO,EAAA,GAAeoH,EAAQc,GACvB,OAAAtU,EAAA,IAAI,SAAC,G,IAAA,mBAAC3E,EAAA,KAMJ,OANY,KACFyB,MDvDT,SACL8B,EAAiB9B,GAEjB,OAAQA,GAGN,KAAK,EACH8B,EAAGgK,YAAc,YAAU,sBAC3B,MAGF,KAAK,EACHhK,EAAGgK,YAAc,YAAU,qBAC3B,MAGF,QACEhK,EAAGgK,YAAc,YAAU,sBAAuB9L,EAAMyG,aCuCtDmR,CAAoBD,EAAMpZ,EAAOb,QD9BlC,SACLoE,GAEAA,EAAGgK,YAAc,YAAU,6BC6BrB+L,CAAsBF,GAEjBpZ,KAIT,OAAA8O,EAAA,IAAU,SAAA9O,GAAU,OAAAkZ,EACjBpW,KAGC,OAAAoM,EAAA,GAAUC,EAAA,GACV,OAAAoK,EAAA,IAAK,SAAAvL,GAEH,IADA,IAAMiB,EAAY1L,EAAG2K,cACdF,EAAQhO,EAAOb,SACpB4Z,EAAsBI,EAAM,YAAmBnZ,EAAOgO,SAClDiB,EAAUuK,aAAevK,EAAU9I,aAAe,OAGxD,OAAO6H,IACN,GAGH,OAAAjL,EAAA,GAAM/C,GAGN,OAAAyZ,EAAA,IAAS,YDhCV,SACLlW,GAEAA,EAAG0I,UAAY,GC8BPyN,CAAsBP,WClDzB,SAASQ,EACd,EAAuC,G,IAArC3O,EAAA,EAAAA,IAAuCmN,EAAA,EAAAA,OAEzC,OAAO,OAAArV,EAAA,GACL,OAAAgM,EAAA,IAAU,SAAAvL,GACR,IAAM0L,EAAY1L,EAAG2K,cAGf+K,EAASjO,EACZlI,KACC,OAAA0D,EAAA,GAAO,KACP,OAAAzD,EAAA,IAAM,IAIJmW,EAAS,YAAmBjK,GAC/BnM,KACC,OAAA6B,EAAA,IAAI,SAAC,GACH,OADK,EAAAO,GACO+J,EAAUuK,aAAevK,EAAU9I,aAAe,MAEhE,OAAA6J,EAAA,KACA,OAAAxJ,EAAA,GAAO8P,EAAA,IAIX,OAAOtL,EACJlI,KACC,OAAA0D,EAAA,GAAO,KACP,OAAAyE,EAAA,GAAM,QACN+N,EAAkBzV,EAAI,CAAE4U,OAAM,EAAEc,OAAM,EAAEC,OAAM,IAC9C,OAAArU,EAAA,GAAU,W,gMCvBb,SAAS+U,EACd,G,IAAEtP,EAAA,EAAAA,QAASC,EAAA,EAAAA,UAELsM,EAAQ,IAAInI,EAAA,EAelB,OAZA,YAAa,UACV5L,KACC,OAAAgM,EAAA,IAAU,SAAAgJ,GAAU,OAAAjB,EACjB/T,KACC,OAAA2H,EAAA,GAAwB,WCoDhClH,EDnD0BuU,ECqDnB,OAAAhV,EAAA,GAGL,OAAAoM,EAAA,GAAUC,EAAA,GACV,OAAA9D,EAAA,IAAI,SAAC,G,IAAEoG,EAAA,EAAAA,QC/GJ,SACLlO,EAAiB9B,GAEjB8B,EAAGsI,aAAa,gBAAiBpK,EAAQ,SAAW,ID6GhDoY,CAAgBtW,EAAIkO,MAItB,OAAAgI,EAAA,IAAS,YCzGN,SACLlW,GAEAA,EAAG8L,gBAAgB,iBDuGfyK,CAAkBvW,SAbjB,IACLA,MD/CKP,YAGE,OAAAF,EAAA,GACL,OAAAgM,EAAA,IAAU,SAAAvL,GAAM,OChBb,SACLA,EAAiB,G,IAAE+G,EAAA,EAAAA,QAASC,EAAA,EAAAA,UAItBsN,EAAUvN,EACbxH,KACC,OAAAmI,EAAA,GAAM,UACN,OAAA+E,EAAA,KACA,OAAAlL,EAAA,GAAY,IAIViV,EAAUlC,EACb/U,KACC,OAAAgM,EAAA,IAAU,WAAM,mBAAiBvL,GAC9BT,KACC,OAAA6B,EAAA,IAAI,SAAC,G,IAAEqB,EAAA,EAAAA,OAAa,OAClBwN,IAAQjQ,EAAGqH,UACXoP,OAAQzW,EAAGqH,UAAY5E,UAI7B,OAAAyE,EAAA,GAAwB,UACxB,OAAA3F,EAAA,GAAY,IAIhB,OAAO,OAAAmF,EAAA,GAAc,CAAC4N,EAASkC,EAASxP,IACrCzH,KACC,OAAA6B,EAAA,IAAI,SAAC,G,IAAA,mBAACmT,EAAA,KAAQ,OAAEtE,EAAA,EAAAA,IAAKwG,EAAA,EAAAA,OAAU,OAAY9U,EAAA,SAAAA,EAAac,EAAA,OAAAA,OAKtD,MAAO,CACLmE,OAAQqJ,EAAMsE,EACd9R,OANFA,EAASF,KAAK0D,IAAI,EAAGxD,EACjBF,KAAK0D,IAAI,EAAGgK,EAAStO,EAAI4S,GACzBhS,KAAK0D,IAAI,EAAGxD,EAASd,EAAI8U,IAK3BvI,OAAQ+B,EAAMsE,GAAU5S,MAG5B,OAAA8K,EAAA,IAA2B,SAACiH,EAAGC,GAC7B,OAAOD,EAAE9M,SAAW+M,EAAE/M,QACf8M,EAAEjR,SAAWkR,EAAElR,QACfiR,EAAExF,SAAWyF,EAAEzF,WD5BVwI,CAAU1W,EAAI,CAAE+G,QAAO,EAAEC,UAAS,OAClD,OAAAc,EAAA,IAAI,SAAA6O,GAAQ,OAAArD,EAAMlR,KAAKuU,S,yIG3BpB,SAASC,EACd,G,IAAE7P,EAAA,EAAAA,QAASC,EAAA,EAAAA,UAEX,OAAO,OAAAzH,EAAA,GACL,OAAAgM,EAAA,IAAU,SAAAvL,GAAM,mBAAgBA,EAAI,CAAE+G,QAAO,EAAEC,UAAS,IACrDzH,KACC,OAAA6B,EAAA,IAAI,SAAC,GAAsB,OAAGyV,OAAb,SAAAlV,GAA0B,OAC3C,OAAAuF,EAAA,GAAwB,UC7BzB,SACLlH,GAEA,OAAO,OAAAT,EAAA,GAGL,OAAAoM,EAAA,GAAUC,EAAA,GACV,OAAA9D,EAAA,IAAI,SAAC,G,IAAE+O,EAAA,EAAAA,QCrBJ,SACL7W,EAAiB9B,GAEjB8B,EAAGsI,aAAa,gBAAiBpK,EAAQ,SAAW,IDmBhD4Y,CAAc9W,EAAI6W,MAIpB,OAAAX,EAAA,IAAS,YCfN,SACLlW,GAEAA,EAAG8L,gBAAgB,iBDafiL,CAAgB/W,ODiBdgX,CAAUhX,U,wMGcX,SAASiX,EACd,G,IAAE9X,EAAA,EAAAA,UAAW6H,EAAA,EAAAA,UAEb,OAAO,OAAAzH,EAAA,GACL,OAAAgM,EAAA,IAAU,SAAAvL,GACR,IAAM+G,EC1BL,SACL/G,EAAiB,GAEjB,OAFmB,EAAAb,UAGhBI,KACC,OAAA6B,EAAA,IAAI,WACF,IAAM8V,EAASC,iBAAiBnX,GAChC,MAAO,CACL,SACA,kBACAo
X,SAASF,EAAOG,aAEpB,OAAA5K,EAAA,KACA,OAAAlB,EAAA,IAAU,SAAA+L,GACR,OAAIA,EACK,YAAiBtX,GACrBT,KACC,OAAA6B,EAAA,IAAI,SAAC,GAAe,OAClBkW,QAAQ,EACR7U,OAFK,EAAAA,YAMJ,OAAA8G,EAAA,GAAG,CACR+N,QAAQ,EACR7U,OAAQ,OAId,OAAAlB,EAAA,GAAY,IDHIgW,CAAYvX,EAAI,CAAEb,UAAS,IAGrCqY,EAAQ,YAAa,QACxBjY,KACC,OAAA6B,EAAA,IAAI,SAAAuV,GAAQ,mBAAW,yBAA0BA,MACjD,OAAA1T,EAAA,IAAO,SAAAwU,GAAM,YAAc,IAAPA,KACpB,OAAAjK,EAAA,GAAe,YAAa,iBAC5B,OAAAjC,EAAA,IAAU,SAAC,G,IAAA,mBAACkM,EAAA,KAAIhK,EAAA,KAAW,mBAAgBgK,EAAI,CAAE1Q,QAAO,EAAEC,UAAS,IAChEzH,KACC,OAAA6B,EAAA,IAAI,SAAC,GACH,OADe,SAAAO,GACH8V,EAAG7U,aAAe,OAAS,UAEzC,OAAA6J,EAAA,KCGP,SACLzM,GAEA,OAAO,OAAAT,EAAA,GAGL,OAAAoM,EAAA,GAAUC,EAAA,GACV,OAAA9D,EAAA,IAAI,SAAAzG,ICtFD,SACLrB,EAAiB9B,GAEjB8B,EAAGsI,aAAa,gBAAiBpK,EAAQ,SAAW,IDoFhDwZ,CAAqB1X,EAAa,SAATqB,MAI3B,OAAA6U,EAAA,IAAS,YChFN,SACLlW,GAEAA,EAAG8L,gBAAgB,iBD8Ef6L,CAAuB3X,ODff4X,CAAgBnK,OAGpB,OAAAnM,EAAA,GAAsB,SAI1B,OAAO,OAAAoF,EAAA,GAAc,CAACK,EAASyQ,IAC5BjY,KACC,OAAA6B,EAAA,IAAI,SAAC,G,IAAA,mBAACmT,EAAA,KAAQlT,EAAA,KAAkB,OAAC,WAAD,CAAC,CAAEA,KAAI,GAAKkT,MAC5C,OAAAhT,EAAA,GAAY,U,iJGlDf,SAASsW,EACd,G,IAAE9Q,EAAA,EAAAA,QAASC,EAAA,EAAAA,UAAW8Q,EAAA,EAAAA,QAEtB,OAAO,OAAAvY,EAAA,GACL,OAAAgM,EAAA,IAAU,SAAAvL,GAAM,OAAA8X,EACbvY,KACC,OAAAgM,EAAA,IAAU,SAAAwM,GAGR,OAAIA,EACK,YAAgB/X,EAAI,CAAE+G,QAAO,EAAEC,UAAS,IAC5CzH,KACC,OAAA6B,EAAA,IAAI,SAAC,GAAsB,OAAGyV,OAAb,SAAAlV,GAA0B,OAC3C,OAAAuF,EAAA,GAAwB,UCpCjC,SACLlH,GAEA,OAAO,OAAAT,EAAA,GAGL,OAAAoM,EAAA,GAAUC,EAAA,GACV,OAAA9D,EAAA,IAAI,SAAC,G,IAAE+O,EAAA,EAAAA,QCrBJ,SACL7W,EAAiB9B,GAEjB8B,EAAGsI,aAAa,gBAAiBpK,EAAQ,SAAW,IDmBhD8Z,CAAchY,EAAI6W,MAIpB,OAAAX,EAAA,IAAS,YCfN,SACLlW,GAEAA,EAAG8L,gBAAgB,iBDafmM,CAAgBjY,ODwBNkY,CAAUlY,IAKP,OAAAuJ,EAAA,GAAG,CAAEsN,QAAQ,c,0GGHzB,SAASsB,EACd,G,IAAEpR,EAAA,EAAAA,QAASuM,EAAA,EAAAA,MAAOtM,EAAA,EAAAA,UAAW8Q,EAAA,EAAAA,QAE7B,OAAO,OAAAvY,EAAA,GACL,OAAAgM,EAAA,IAAU,SAAAvL,GAAM,OAAA8X,EACbvY,KACC,OAAAgM,EAAA,IAAU,SAAAwM,GAGR,OAAIA,EACK,uBAAa/X,EAAI,CAAEsT,MAAK,EAAEtM,UAAS,IACvCzH,KACC,uBAAaS,EAAI,CAAE+G,QAAO,IAC1B,OAAA3F,EAAA,IAAI,SAAA8S,GAAW,OAAGA,QAAO,OAKtB,OAAA3K,EAAA,GAAG,c,0bCxDtB,SAAS6O,IACP,MAAO,qBAAqBpU,KAAKqU,UAAUC,W,mBCe7C,SAASC,EACP7U,GAGA,OADM,gDAAC,GACM8U,eAGX,IAAK,SACG,qEACN,OC7BC,SACLC,EAAcC,GAEd,OAAO,OAAA7L,EAAA,GAAK,CACVnJ,SAAqB,IAATgV,EACR,gCAAgCD,EAAI,IAAIC,EACxC,gCAAgCD,EACpC3L,aAAc,SAEbvN,KACC,OAAA0D,EAAA,IAAO,SAAC,GAAe,OAAW,MAAxB,EAAA+R,UACV,OAAAtN,EAAA,GAAM,YACN,OAAA6D,EAAA,IAAU,SAAAnQ,GAGR,QAAoB,IAATsd,EAAsB,CACvB,IAAAC,EAAA,EAAAA,iBAAkBC,EAAA,EAAAA,YAC1B,OAAO,OAAArP,EAAA,GAAG,CACL,YAAMoP,GAAoB,GAAE,SAC5B,YAAMC,GAAe,GAAE,WAKpB,IAAAC,EAAA,EAAAA,aACR,OAAO,OAAAtP,EAAA,GAAG,CACL,YAAMsP,GAAgB,GAAE,sBDG1BC,CADE,KAAM,MAIjB,IAAK,SACG,qEACN,OElCC,SACLtU,EAAcuU,GAEd,OAAO,OAAAlM,EAAA,GAAK,CACVnJ,IAAK,WAAWc,EAAI,oBAAoBwU,mBAAmBD,GAC3DjM,aAAc,SAEbvN,KACC,OAAA0D,EAAA,IAAO,SAAC,GAAe,OAAW,MAAxB,EAAA+R,UACV,OAAAtN,EAAA,GAAM,YACN,OAAAtG,EAAA,IAAI,SAAC,G,IAAE6X,EAAA,EAAAA,WAAYL,EAAA,EAAAA,YAAiC,OAC/C,YAAMK,GAAW,SACjB,YAAML,GAAY,cFsBhBM,CADE,KAAM,MAIjB,QACE,OAAO,KG8BN,SAASC,EACdnZ,EAAiB9B,GAEjB8B,EAAGsI,aAAa,gBAAiB,QACjCtI,EAAGgQ,MAAMC,IAAM,IAAI/R,EAAK,KAQnB,SAASkb,EACdpZ,GAEA,IAAM9B,GAAS,EAAImb,SAASrZ,EAAGgQ,MAAMC,IAAK,IAC1CjQ,EAAG8L,gBAAgB,iBACnB9L,EAAGgQ,MAAMC,IAAM,GACX/R,GACFa,OAAOsH,SAAS,EAAGnI,GAYhB,SAASob,EAAWnR,GACzB,IAAK,YAASA,GACZ,MAAM,IAAIoR,YAAY,0BAA0B/P,KAAKI,UAAUzB,IAGjE,IAAMhJ,EAAY,cACZsF,EAAY,cAGZyO,EAAY,YAAkB/K,EAAO3D,KAAM,CAAEC,UAAS,IACtD+U,EAAY,cACZxS,EAAY,cACZ8M,EAAY,YAAW,sBACvBgE,EAAY,YAAW,uBAK7B,0BAAgB,CACd,WACA,YACA,SACA,eACA,OACA,OACA,aACA,SACA,eACA,eACA,gBACA,OACA,OACA,OACC,CAAE3Y,UAAS,IAEd,IAAM8O,EAAY,eCpHb,SACL,G,IAAE9O,EAAA,EAAA
A,UAAWqa,EAAA,EAAAA,MAEPC,EAAOta,EACVI,KACC,OAAA6B,EAAA,IAAI,WAAM,mBAAgC,eAI9C,OAAAD,EAAA,GACE,YAAW,SAAS5B,KAAK,OAAA0D,EAAA,GAAO8P,EAAA,IAChC,OAAA1T,EAAA,GAAUN,OAAQ,gBAEjBQ,KACC,OAAA0I,EAAA,GAAYwR,IAEXha,WAAU,SAAA0O,G,YACT,IAAiB,kBAAAA,GAAG,+BAAP,QACR7F,aAAa,OAAQ,K,qGAIhCkR,EACGja,KACC,OAAA6B,EAAA,IAAI,SAAAwJ,GAAM,mBAAW,QAAQA,EAAE,SAC/B,OAAA3H,EAAA,IAAO,SAAAjD,GAAM,YAAc,IAAPA,KACpB,OAAA8H,EAAA,IAAI,SAAA9H,GACF,IAAM0Z,EAAU1Z,EAAGqM,QAAQ,WACvBqN,IAAYA,EAAQC,MACtBD,EAAQpR,aAAa,OAAQ,QAGhC7I,WAAU,SAAAO,GAAM,OAAAA,EAAG4Z,oBDsFxBC,CAAa,CAAE1a,UAAS,EAAEqa,MAAK,IAClB,CAAEra,UAAS,GE5HtBA,UAGCI,KACC,OAAAqN,EAAA,GAAK,GACL,OAAAY,EAAA,GAAe,uBAAa,cAC5B,OAAApM,EAAA,IAAI,SAAC,G,IAAGpB,EAAH,iBAAG,GAAQ,mBAA+B,SAAUA,OAIxDP,WAAU,SAAA0O,G,YACb,IAAiB,kBAAAA,GAAG,8BAAE,CAAjB,IAAMnO,EAAE,QACX,GAAIA,EAAG8Z,KAAO,qBAAqB9V,KAAKhE,EAAGqB,MAAO,CAChD,IAAM0Y,EAAS,YAAc,UACvBvb,EAAMwB,EAAG8Z,IAAM,MAAQ,cAC7BC,EAAOvb,GAAOwB,EAAGxB,GACjB,YAAewB,EAAI+Z,K,qGLyBpB,SACL,GAAE,EAAA5a,UAGCI,KACC,OAAA6B,EAAA,IAAI,WAAM,mBAAqC,uBAC/C,OAAAmK,EAAA,IAAU,SAAC,G,IAAE3H,EAAA,EAAAA,KAAW,OACtB,WADsB,CAChB,GAAG,YAAKA,IAAS,WAAM,OAAA2U,EAAiB3U,SAEhD,OAAAoJ,EAAA,IAAW,WAAM,eAEhBvN,WAAU,SAAAkQ,G,YACT,IAAiB,8BAAY,2BAAyB,8BAAE,CAAnD,IAAM3P,EAAE,QACNA,EAAGga,aAAa,mBACnBha,EAAGsI,aAAa,gBAAiB,QACjCtI,EAAGwI,YAAY,YAAamH,M,qGGqEtCsK,CAAY,CAAE9a,UAAS,IG9HlB,SACL,G,IAAEA,EAAA,EAAAA,UAEI+a,EAAW,YAAc,SAC/B/a,EACGI,KACC,OAAA6B,EAAA,IAAI,WAAM,mBAA8B,0BAEvC3B,WAAU,SAAA0O,G,YACT,IAAiB,kBAAAA,GAAG,8BAAE,CAAjB,IAAMnO,EAAE,QACX,YAAeA,EAAIka,GACnB,YAAeA,EAAU,YAAYla,K,qGHoH7Cma,CAAY,CAAEhb,UAAS,IJpHlB,SACL,G,IAEMsa,EAFJ,EAAAta,UAGCI,KACC,OAAA6B,EAAA,IAAI,WAAM,mBAAY,0BACtB,OAAAG,EAAA,GAAY,IAIhBkY,EAAKha,WAAU,SAAA0O,G,YACb,IAAiB,kBAAAA,GAAG,+BAAP,QACRrC,gBAAgB,sB,qGAIvB,OAAAsO,EAAA,GAAIhC,EAAeqB,EAAM,KACtBla,KACC,OAAAgM,EAAA,IAAU,SAAA4C,GAAO,OAAAhN,EAAA,EAAK,yBAAIgN,EAAI/M,KAAI,SAAApB,GAAM,OACtC,OAAAX,EAAA,GAAUW,EAAI,aAAc,CAAE2G,SAAS,IACpCpH,KACC,OAAAC,EAAA,GAAMQ,aAIXP,WAAU,SAAAO,GACT,IAAMiQ,EAAMjQ,EAAG4B,UAGH,IAARqO,EACFjQ,EAAG4B,UAAY,EAGNqO,EAAMjQ,EAAG4C,eAAiB5C,EAAGiW,eACtCjW,EAAG4B,UAAYqO,EAAM,MIqF7BoK,CAAe,CAAElb,UAAS,IAG1B,IAAMmL,EAAU,cACVQ,EAAa,YAAe,CAAE3L,UAAS,EAAEmL,QAAO,IAKhDvD,EAAU,uBAAa,UAC1BxH,KACC,sBAAY,CAAEJ,UAAS,EAAE6H,UAAS,IAClC,OAAAzF,EAAA,GAAY,IAGV+R,EAAQ,uBAAa,QACxB/T,KACC,oBAAU,CAAEwH,QAAO,EAAEC,UAAS,IAC9B,OAAAzF,EAAA,GAAY,IAKV+Y,EAAc,uBAAa,cAC9B/a,KACC,0BAAgB,CAAEwH,QAAO,EAAEuM,MAAK,EAAEtM,UAAS,EAAE8Q,QAAO,IACpD,OAAAvW,EAAA,GAAY,IAGVgZ,EAAO,uBAAa,OACvBhb,KACC,+BAAqB,CAAEwH,QAAO,EAAEuM,MAAK,EAAEtM,UAAS,EAAE8M,QAAO,IACzD,OAAAvS,EAAA,GAAY,IAGViZ,EAAQ,uBAAa,QACxBjb,KACC,oBAAU,CAAEwH,QAAO,EAAEC,UAAS,EAAE8Q,QAAO,IACvC,OAAAvW,EAAA,GAAY,IAGVkZ,EAAQ,uBAAa,QACxBlb,KACC,oBAAU,CAAEwH,QAAO,EAAEC,UAAS,IAC9B,OAAAzF,EAAA,GAAY,IAmCVmZ,EA7BU,OAAAtR,EAAA,IAAM,WACpB,IAAMqB,EAAQtC,EAAOxC,QAAUwC,EAAOxC,OAAO8E,MACzCtC,EAAOxC,OAAO8E,WACd3K,EAGEmT,OAA0B,IAAVxI,EAClB,OAAAlK,EAAA,GAAKkK,GACLyI,EACG3T,KACC,OAAAgM,EAAA,IAAU,SAAA/G,GAAQ,cAAAqI,EAAA,GAAK,CACrBnJ,IAAQc,EAAI,4BACZsI,aAAc,OACdC,iBAAiB,IAEhBxN,KACC,OAAAmI,EAAA,GAAM,iBAKlB,OAAO,OAAA6B,EAAA,GAAG,YAAkBpB,EAAOxC,OAAO4B,OAAQ,CAChD2L,MAAK,EAAED,OAAM,QAQd1T,KACC,OAAAgM,EAAA,IAAU,SAAAhE,GAER,IAAMqN,EAAS,uBAAa,gBACzBrV,KACC,2BAAiBgI,EAAQ,CAAE4N,UAAWhN,EAAOxC,OAAOwP,YACpD,OAAA5T,EAAA,GAAY,IAIVsT,EAAS,uBAAa,gBACzBtV,KACC,6BACA,OAAAgC,EAAA,GAAY,IAIVuT,EAAU,uBAAa,iBAC1BvV,KACC,4BAAkBgI,EAAQ,CAAEqN,OAAM,IAClC,OAAArT,EAAA,GAAY,IAGhB,OAAO,uBAAa,UACjBhC,KACC,sBAAYgI,EAAQ,CAAEqN,OAAM,EAAEC,OAAM,EAAEC,QAAO,IAC7C,OAAAvT,EAAA,GAAY,OAGlB,OAAAyL,EAAA,IAAW,WAGT,OAFA,uBAAa,UACVvN,WAAU,SAAAO,GAAM,OAAAA,EAAG6W,QAAS,KACxB,QAOb2C,EACGja,KACC,OAAAu
I,EAAA,IAAI,WAAM,mBAAU,UAAU,MAC9B,OAAA+D,EAAA,GAAM,MAELpM,WAAU,SAAA0E,GAAQ,mBAAgB,IAAIA,MAG3C,OAAAuC,EAAA,GAAc,CACZ,YAAY,UACZoN,IAECvU,KACC,OAAAiO,EAAA,GAAexG,GACf,OAAAuE,EAAA,IAAU,SAAC,G,IAAA,mBAAC,sBAACuD,EAAA,KAAQiF,EAAA,KAAqBpS,EAAA,YAAAA,EAClCuM,EAASY,IAAWiF,EAC1B,OAAO5U,EACJI,KACC,OAAAsM,EAAA,GAAMqC,EAAS,IAAM,KACrB,OAAAvC,EAAA,GAAUC,EAAA,GACV,OAAA9D,EAAA,IAAI,SAAC,G,IAAE2D,EAAA,EAAAA,KAAW,OAAAyC,EACdiL,EAAc1N,EAAM9J,GACpByX,EAAgB3N,WAKzBhM,YAKL,OAAAJ,EAAA,GAAsBC,SAASmM,KAAM,SAClClM,KACC,OAAA0D,EAAA,IAAO,SAAAC,GAAM,QAAEA,EAAGC,SAAWD,EAAGE,YAChC,OAAAH,EAAA,IAAO,SAAAC,GACL,GAAIA,EAAGrC,kBAAkBT,YAAa,CACpC,IAAMJ,EAAKkD,EAAGrC,OAAOwL,QAAQ,KAC7B,GAAIrM,GAAM,YAAgBA,GACxB,OAAO,EAGX,OAAO,MAGRP,WAAU,WACT,YAAU,UAAU,MAItB0I,EAAOC,SAASgP,SAAS,YAAoC,UAAtBzT,SAASgX,UAClD,YAAoB,CAAExb,UAAS,EAAEsF,UAAS,EAAEuC,UAAS,IAKvDiH,EACG1O,KACC,OAAA0D,EAAA,IAAO,SAAAzE,GAAO,MAAa,WAAbA,EAAIJ,MAAkC,QAAbI,EAAI6C,QAC3C,OAAAqD,EAAA,GAAK,IAEJjF,WAAU,W,YACT,IAAmB,8BAAY,gBAAc,+BAA9B,QACRuQ,MAAM4K,WAAa,W,qGAKhC,IAAMpO,GAAQ,CAGZrN,UAAS,EACTsF,UAAS,EACTuC,UAAS,EAGTD,QAAO,EACP0T,MAAK,EACLnH,MAAK,EACLgH,YAAW,EACXI,QAAO,EACPF,MAAK,EACLD,KAAI,EAGJzP,WAAU,EACVmD,UAAS,EACT3D,QAAO,GAMT,OAFAnJ,EAAA,EAAK,yBAAI,OAAA0Z,EAAA,GAAOrO,MACb/M,YACI+M,GA3STlN,SAASwb,gBAAgBzP,UAAUU,OAAO,SAC1CzM,SAASwb,gBAAgBzP,UAAUC,IAAI,MAGnC+M,UAAUC,UAAUjH,MAAM,wBAC5B/R,SAASwb,gBAAgBzP,UAAUC,IAAI","file":"assets/javascripts/bundle.23546af0.min.js","sourcesContent":[" \t// install a JSONP callback for chunk loading\n \tfunction webpackJsonpCallback(data) {\n \t\tvar chunkIds = data[0];\n \t\tvar moreModules = data[1];\n \t\tvar executeModules = data[2];\n\n \t\t// add \"moreModules\" to the modules object,\n \t\t// then flag all \"chunkIds\" as loaded and fire callback\n \t\tvar moduleId, chunkId, i = 0, resolves = [];\n \t\tfor(;i < chunkIds.length; i++) {\n \t\t\tchunkId = chunkIds[i];\n \t\t\tif(Object.prototype.hasOwnProperty.call(installedChunks, chunkId) && installedChunks[chunkId]) {\n \t\t\t\tresolves.push(installedChunks[chunkId][0]);\n \t\t\t}\n \t\t\tinstalledChunks[chunkId] = 0;\n \t\t}\n \t\tfor(moduleId in moreModules) {\n \t\t\tif(Object.prototype.hasOwnProperty.call(moreModules, moduleId)) {\n \t\t\t\tmodules[moduleId] = moreModules[moduleId];\n \t\t\t}\n \t\t}\n \t\tif(parentJsonpFunction) parentJsonpFunction(data);\n\n \t\twhile(resolves.length) {\n \t\t\tresolves.shift()();\n \t\t}\n\n \t\t// add entry modules from loaded chunk to deferred list\n \t\tdeferredModules.push.apply(deferredModules, executeModules || []);\n\n \t\t// run deferred modules when all chunks ready\n \t\treturn checkDeferredModules();\n \t};\n \tfunction checkDeferredModules() {\n \t\tvar result;\n \t\tfor(var i = 0; i < deferredModules.length; i++) {\n \t\t\tvar deferredModule = deferredModules[i];\n \t\t\tvar fulfilled = true;\n \t\t\tfor(var j = 1; j < deferredModule.length; j++) {\n \t\t\t\tvar depId = deferredModule[j];\n \t\t\t\tif(installedChunks[depId] !== 0) fulfilled = false;\n \t\t\t}\n \t\t\tif(fulfilled) {\n \t\t\t\tdeferredModules.splice(i--, 1);\n \t\t\t\tresult = __webpack_require__(__webpack_require__.s = deferredModule[0]);\n \t\t\t}\n \t\t}\n\n \t\treturn result;\n \t}\n\n \t// The module cache\n \tvar installedModules = {};\n\n \t// object to store loaded and loading chunks\n \t// undefined = chunk not loaded, null = chunk preloaded/prefetched\n \t// Promise = chunk loading, 0 = chunk loaded\n \tvar installedChunks = {\n \t\t0: 0\n \t};\n\n \tvar deferredModules = [];\n\n \t// The require function\n \tfunction __webpack_require__(moduleId) {\n\n 
\t\t// Check if module is in cache\n \t\tif(installedModules[moduleId]) {\n \t\t\treturn installedModules[moduleId].exports;\n \t\t}\n \t\t// Create a new module (and put it into the cache)\n \t\tvar module = installedModules[moduleId] = {\n \t\t\ti: moduleId,\n \t\t\tl: false,\n \t\t\texports: {}\n \t\t};\n\n \t\t// Execute the module function\n \t\tmodules[moduleId].call(module.exports, module, module.exports, __webpack_require__);\n\n \t\t// Flag the module as loaded\n \t\tmodule.l = true;\n\n \t\t// Return the exports of the module\n \t\treturn module.exports;\n \t}\n\n\n \t// expose the modules object (__webpack_modules__)\n \t__webpack_require__.m = modules;\n\n \t// expose the module cache\n \t__webpack_require__.c = installedModules;\n\n \t// define getter function for harmony exports\n \t__webpack_require__.d = function(exports, name, getter) {\n \t\tif(!__webpack_require__.o(exports, name)) {\n \t\t\tObject.defineProperty(exports, name, { enumerable: true, get: getter });\n \t\t}\n \t};\n\n \t// define __esModule on exports\n \t__webpack_require__.r = function(exports) {\n \t\tif(typeof Symbol !== 'undefined' && Symbol.toStringTag) {\n \t\t\tObject.defineProperty(exports, Symbol.toStringTag, { value: 'Module' });\n \t\t}\n \t\tObject.defineProperty(exports, '__esModule', { value: true });\n \t};\n\n \t// create a fake namespace object\n \t// mode & 1: value is a module id, require it\n \t// mode & 2: merge all properties of value into the ns\n \t// mode & 4: return value when already ns object\n \t// mode & 8|1: behave like require\n \t__webpack_require__.t = function(value, mode) {\n \t\tif(mode & 1) value = __webpack_require__(value);\n \t\tif(mode & 8) return value;\n \t\tif((mode & 4) && typeof value === 'object' && value && value.__esModule) return value;\n \t\tvar ns = Object.create(null);\n \t\t__webpack_require__.r(ns);\n \t\tObject.defineProperty(ns, 'default', { enumerable: true, value: value });\n \t\tif(mode & 2 && typeof value != 'string') for(var key in value) __webpack_require__.d(ns, key, function(key) { return value[key]; }.bind(null, key));\n \t\treturn ns;\n \t};\n\n \t// getDefaultExport function for compatibility with non-harmony modules\n \t__webpack_require__.n = function(module) {\n \t\tvar getter = module && module.__esModule ?\n \t\t\tfunction getDefault() { return module['default']; } :\n \t\t\tfunction getModuleExports() { return module; };\n \t\t__webpack_require__.d(getter, 'a', getter);\n \t\treturn getter;\n \t};\n\n \t// Object.prototype.hasOwnProperty.call\n \t__webpack_require__.o = function(object, property) { return Object.prototype.hasOwnProperty.call(object, property); };\n\n \t// __webpack_public_path__\n \t__webpack_require__.p = \"\";\n\n \tvar jsonpArray = window[\"webpackJsonp\"] = window[\"webpackJsonp\"] || [];\n \tvar oldJsonpFunction = jsonpArray.push.bind(jsonpArray);\n \tjsonpArray.push = webpackJsonpCallback;\n \tjsonpArray = jsonpArray.slice();\n \tfor(var i = 0; i < jsonpArray.length; i++) webpackJsonpCallback(jsonpArray[i]);\n \tvar parentJsonpFunction = oldJsonpFunction;\n\n\n \t// add entry module to deferred list\n \tdeferredModules.push([85,1]);\n \t// run deferred modules when ready\n \treturn checkDeferredModules();\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to 
use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { ReplaySubject, Subject, fromEvent } from \"rxjs\"\nimport { mapTo } from \"rxjs/operators\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch document\n *\n * Documents must be implemented as subjects, so all downstream observables are\n * automatically updated when a new document is emitted. This enabled features\n * like instant loading.\n *\n * @return Document subject\n */\nexport function watchDocument(): Subject {\n const document$ = new ReplaySubject()\n fromEvent(document, \"DOMContentLoaded\")\n .pipe(\n mapTo(document)\n )\n .subscribe(document$)\n\n /* Return document */\n return document$\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve an element matching the query selector\n *\n * @template T - Element type\n *\n * @param selector - Query selector\n * @param node - Node of reference\n *\n * @return Element or nothing\n */\nexport function getElement(\n selector: string, node: ParentNode = document\n): T | undefined {\n return node.querySelector(selector) || undefined\n}\n\n/**\n * Retrieve an element matching a query selector or throw a reference error\n *\n * @template T - Element type\n *\n * @param selector - Query selector\n * @param node - Node of reference\n *\n * @return Element\n */\nexport function getElementOrThrow(\n selector: string, node: ParentNode = document\n): T {\n const el = getElement(selector, node)\n if (typeof el === \"undefined\")\n throw new ReferenceError(\n `Missing element: expected \"${selector}\" to be present`\n )\n return el\n}\n\n/**\n * Retrieve the currently active element\n *\n * @return Element or nothing\n */\nexport function getActiveElement(): HTMLElement | undefined {\n return document.activeElement instanceof HTMLElement\n ? document.activeElement\n : undefined\n}\n\n/**\n * Retrieve all elements matching the query selector\n *\n * @template T - Element type\n *\n * @param selector - Query selector\n * @param node - Node of reference\n *\n * @return Elements\n */\nexport function getElements(\n selector: string, node: ParentNode = document\n): T[] {\n return Array.from(node.querySelectorAll(selector))\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Create an element\n *\n * @template T - Tag name type\n *\n * @param tagName - Tag name\n *\n * @return Element\n */\nexport function createElement<\n T extends keyof HTMLElementTagNameMap\n>(tagName: T): HTMLElementTagNameMap[T] {\n return document.createElement(tagName)\n}\n\n/**\n * Replace an element with another element\n *\n * @param source - Source element\n * @param target - Target element\n */\nexport function replaceElement(\n source: HTMLElement, target: Node\n): void {\n source.replaceWith(target)\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, fromEvent, merge } from \"rxjs\"\nimport { map, shareReplay, startWith } from \"rxjs/operators\"\n\nimport { getActiveElement } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set element focus\n *\n * @param el - Element\n * @param value - Whether the element should be focused\n */\nexport function setElementFocus(\nel: HTMLElement, value: boolean = true\n): void {\n if (value)\n el.focus()\n else\n el.blur()\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch element focus\n *\n * @param el - Element\n *\n * @return Element focus observable\n */\nexport function watchElementFocus(\n el: HTMLElement\n): Observable {\n return merge(\n fromEvent(el, \"focus\"),\n fromEvent(el, \"blur\")\n )\n .pipe(\n map(({ type }) => type === \"focus\"),\n startWith(el === getActiveElement()),\n shareReplay(1)\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, fromEvent, merge } from \"rxjs\"\nimport { map, shareReplay, startWith } from \"rxjs/operators\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Element offset\n */\nexport interface ElementOffset {\n x: number /* Horizontal offset */\n y: number /* Vertical offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve element offset\n *\n * @param el - Element\n *\n * @return Element offset\n */\nexport function getElementOffset(el: HTMLElement): ElementOffset {\n return {\n x: el.scrollLeft,\n y: el.scrollTop\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch element offset\n *\n * @param el - Element\n *\n * @return Element offset observable\n */\nexport function watchElementOffset(\n el: HTMLElement\n): Observable {\n return merge(\n fromEvent(el, \"scroll\"),\n fromEvent(window, \"resize\")\n )\n .pipe(\n map(() => getElementOffset(el)),\n startWith(getElementOffset(el)),\n shareReplay(1)\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set element text selection\n *\n * @param el - Element\n */\nexport function setElementSelection(\n el: HTMLElement\n): void {\n if (el instanceof HTMLInputElement)\n el.select()\n else\n throw new Error(\"Not implemented\")\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport ResizeObserver from \"resize-observer-polyfill\"\nimport { Observable, fromEventPattern } from \"rxjs\"\nimport { shareReplay, startWith } from \"rxjs/operators\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Element offset\n */\nexport interface ElementSize {\n width: number /* Element width */\n height: number /* Element height */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve element size\n *\n * @param el - Element\n *\n * @return Element size\n */\nexport function getElementSize(el: HTMLElement): ElementSize {\n return {\n width: el.offsetWidth,\n height: el.offsetHeight\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch element size\n *\n * @param el - Element\n *\n * @return Element size observable\n */\nexport function watchElementSize(\n el: HTMLElement\n): Observable {\n return fromEventPattern(next => {\n new ResizeObserver(([{ contentRect }]) => next({\n width: Math.round(contentRect.width),\n height: Math.round(contentRect.height)\n }))\n .observe(el)\n })\n .pipe(\n startWith(getElementSize(el)),\n shareReplay(1)\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n 
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, fromEvent } from \"rxjs\"\nimport { filter, map, share } from \"rxjs/operators\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Key\n */\nexport interface Key {\n type: string /* Key type */\n claim(): void /* Key claim */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Check whether an element may receive keyboard input\n *\n * @param el - Element\n *\n * @return Test result\n */\nexport function isSusceptibleToKeyboard(el: HTMLElement): boolean {\n switch (el.tagName) {\n\n /* Form elements */\n case \"INPUT\":\n case \"SELECT\":\n case \"TEXTAREA\":\n return true\n\n /* Everything else */\n default:\n return el.isContentEditable\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch keyboard\n *\n * @return Keyboard observable\n */\nexport function watchKeyboard(): Observable {\n return fromEvent(window, \"keydown\")\n .pipe(\n filter(ev => !(ev.metaKey || ev.ctrlKey)),\n map(ev => ({\n type: ev.key,\n claim() {\n ev.preventDefault()\n ev.stopPropagation()\n }\n })),\n share()\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { BehaviorSubject, Subject } from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve location\n *\n * This function will return a `URL` object (and not `Location`) in order to\n * normalize typings across the application. Furthermore, locations need to be\n * tracked without setting them and `Location` is a singleton which represents\n * the current location.\n *\n * @return URL\n */\nexport function getLocation(): URL {\n return new URL(location.href)\n}\n\n/**\n * Set location\n *\n * @param url - URL to change to\n */\nexport function setLocation(url: URL): void {\n location.href = url.href\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Check whether a URL is a local link or a file (except `.html`)\n *\n * @param url - URL or HTML anchor element\n * @param ref - Reference URL\n *\n * @return Test result\n */\nexport function isLocalLocation(\n url: URL | HTMLAnchorElement,\n ref: URL | Location = location\n): boolean {\n return url.host === ref.host\n && /^(?:\\/[\\w-]+)*(?:\\/?|\\.html)$/i.test(url.pathname)\n}\n\n/**\n * Check whether a URL is an anchor link on the current page\n *\n * @param url - URL or HTML anchor element\n * @param ref - Reference URL\n *\n * @return Test result\n */\nexport function isAnchorLocation(\n url: URL | HTMLAnchorElement,\n ref: URL | Location = location\n): boolean {\n return url.pathname === ref.pathname\n && url.hash.length > 0\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch location\n *\n * @return Location subject\n */\nexport function watchLocation(): Subject {\n return new BehaviorSubject(getLocation())\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable } from \"rxjs\"\nimport { map, shareReplay, take } from \"rxjs/operators\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n location$: Observable /* Location observable */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch location base\n *\n * @return Location base observable\n */\nexport function watchLocationBase(\n base: string, { location$ }: WatchOptions\n): Observable {\n return location$\n .pipe(\n take(1),\n map(({ href }) => new URL(base, href)\n .toString()\n .replace(/\\/$/, \"\")\n ),\n shareReplay(1)\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, fromEvent } from \"rxjs\"\nimport { filter, map, share, startWith } from \"rxjs/operators\"\n\nimport { createElement } from \"browser\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve location hash\n *\n * @return Location hash\n */\nexport function getLocationHash(): string {\n return location.hash.substring(1)\n}\n\n/**\n * Set location hash\n *\n * Setting a new fragment identifier via `location.hash` will have no effect\n * if the value doesn't change. 
When a new fragment identifier is set, we want\n * the browser to target the respective element at all times, which is why we\n * use this dirty little trick.\n *\n * @param hash - Location hash\n */\nexport function setLocationHash(hash: string): void {\n const el = createElement(\"a\")\n el.href = hash\n el.addEventListener(\"click\", ev => ev.stopPropagation())\n el.click()\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch location hash\n *\n * @return Location hash observable\n */\nexport function watchLocationHash(): Observable {\n return fromEvent(window, \"hashchange\")\n .pipe(\n map(getLocationHash),\n startWith(getLocationHash()),\n filter(hash => hash.length > 0),\n share()\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, fromEventPattern } from \"rxjs\"\nimport { shareReplay, startWith } from \"rxjs/operators\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch media query\n *\n * @param query - Media query\n *\n * @return Media observable\n */\nexport function watchMedia(query: string): Observable {\n const media = matchMedia(query)\n return fromEventPattern(next =>\n media.addListener(() => next(media.matches))\n )\n .pipe(\n startWith(media.matches),\n shareReplay(1)\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, fromEvent } from \"rxjs\"\nimport { map, startWith } from \"rxjs/operators\"\n\nimport { getElementOrThrow } from \"../element\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Toggle\n */\nexport type Toggle =\n | \"drawer\" /* Toggle for drawer */\n | \"search\" /* Toggle for search */\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Toggle map\n */\nconst toggles: Record = {\n drawer: getElementOrThrow(`[data-md-toggle=drawer]`),\n search: getElementOrThrow(`[data-md-toggle=search]`)\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve the value of a toggle\n *\n * @param name - Toggle\n *\n * @return Toggle value\n */\nexport function getToggle(name: Toggle): boolean {\n return toggles[name].checked\n}\n\n/**\n * Set toggle\n *\n * Simulating a click event seems to be the most cross-browser compatible way\n * of changing the value while also emitting a `change` event. Before, Material\n * used `CustomEvent` to programmatically change the value of a toggle, but this\n * is a much simpler and cleaner solution which doesn't require a polyfill.\n *\n * @param name - Toggle\n * @param value - Toggle value\n */\nexport function setToggle(name: Toggle, value: boolean): void {\n if (toggles[name].checked !== value)\n toggles[name].click()\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch toggle\n *\n * @param name - Toggle\n *\n * @return Toggle value observable\n */\nexport function watchToggle(name: Toggle): Observable {\n const el = toggles[name]\n return fromEvent(el, \"change\")\n .pipe(\n map(() => el.checked),\n startWith(el.checked)\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, fromEvent, merge } from \"rxjs\"\nimport { map, startWith } from \"rxjs/operators\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport offset\n */\nexport interface ViewportOffset {\n x: number /* Horizontal offset */\n y: number /* Vertical offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve viewport offset\n *\n * On iOS Safari, viewport offset can be negative due to overflow scrolling.\n * As this may induce strange behaviors downstream, we'll just limit it to 0.\n *\n * @return Viewport offset\n */\nexport function getViewportOffset(): ViewportOffset {\n return {\n x: Math.max(0, pageXOffset),\n y: Math.max(0, pageYOffset)\n }\n}\n\n/**\n * Set viewport offset\n *\n * @param offset - Viewport offset\n */\nexport function setViewportOffset(\n { x, y }: Partial\n): void {\n window.scrollTo(x || 0, y || 0)\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport offset\n *\n * @return Viewport offset observable\n */\nexport function watchViewportOffset(): Observable {\n return merge(\n fromEvent(window, \"scroll\", { passive: true }),\n fromEvent(window, \"resize\", { passive: true })\n )\n .pipe(\n map(getViewportOffset),\n startWith(getViewportOffset())\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, fromEvent } from \"rxjs\"\nimport { map, startWith } from \"rxjs/operators\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport size\n */\nexport interface ViewportSize {\n width: number /* Viewport width */\n height: number /* Viewport height */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve viewport size\n *\n * @return Viewport size\n */\nexport function getViewportSize(): ViewportSize {\n return {\n width: innerWidth,\n height: innerHeight\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport size\n *\n * @return Viewport size observable\n */\nexport function watchViewportSize(): Observable {\n return fromEvent(window, \"resize\", { passive: true })\n .pipe(\n map(getViewportSize),\n startWith(getViewportSize())\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, combineLatest } from \"rxjs\"\nimport {\n distinctUntilKeyChanged,\n map,\n shareReplay\n} from \"rxjs/operators\"\n\nimport { Header } from \"components\"\n\nimport {\n ViewportOffset,\n watchViewportOffset\n} from \"../offset\"\nimport {\n ViewportSize,\n watchViewportSize\n} from \"../size\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport\n */\nexport interface Viewport {\n offset: ViewportOffset /* Viewport offset */\n size: ViewportSize /* Viewport size */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch at options\n */\ninterface WatchAtOptions {\n header$: Observable
/* Header observable */\n viewport$: Observable /* Viewport observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport\n *\n * @return Viewport observable\n */\nexport function watchViewport(): Observable {\n return combineLatest([\n watchViewportOffset(),\n watchViewportSize()\n ])\n .pipe(\n map(([offset, size]) => ({ offset, size })),\n shareReplay(1)\n )\n}\n\n/**\n * Watch viewport relative to element\n *\n * @param el - Element\n * @param options - Options\n *\n * @return Viewport observable\n */\nexport function watchViewportAt(\n el: HTMLElement, { header$, viewport$ }: WatchAtOptions\n): Observable {\n const size$ = viewport$\n .pipe(\n distinctUntilKeyChanged(\"size\")\n )\n\n /* Compute element offset */\n const offset$ = combineLatest([size$, header$])\n .pipe(\n map((): ViewportOffset => ({\n x: el.offsetLeft,\n y: el.offsetTop\n }))\n )\n\n /* Compute relative viewport, return hot observable */\n return combineLatest([header$, viewport$, offset$])\n .pipe(\n map(([{ height }, { offset, size }, { x, y }]) => ({\n offset: {\n x: offset.x - x,\n y: offset.y - y + height\n },\n size\n })),\n shareReplay(1)\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, Subject, fromEventPattern } from \"rxjs\"\nimport {\n pluck,\n share,\n switchMapTo,\n tap,\n throttle\n} from \"rxjs/operators\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Worker message\n */\nexport interface WorkerMessage {\n type: unknown /* Message type */\n data?: unknown /* Message data */\n}\n\n/**\n * Worker handler\n *\n * @template T - Message type\n */\nexport interface WorkerHandler<\n T extends WorkerMessage\n> {\n tx$: Subject /* Message transmission subject */\n rx$: Observable /* Message receive observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n *\n * @template T - Worker message type\n */\ninterface WatchOptions {\n tx$: Observable /* Message transmission observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch a web worker\n *\n * This function returns an observable that will send all values emitted by the\n * message observable to the web worker. Web worker communication is expected\n * to be bidirectional (request-response) and synchronous. Messages that are\n * emitted during a pending request are throttled, the last one is emitted.\n *\n * @param worker - Web worker\n * @param options - Options\n *\n * @return Worker message observable\n */\nexport function watchWorker(\n worker: Worker, { tx$ }: WatchOptions\n): Observable {\n\n /* Intercept messages from worker-like objects */\n const rx$ = fromEventPattern(next =>\n worker.addEventListener(\"message\", next)\n )\n .pipe(\n pluck(\"data\")\n )\n\n /* Send and receive messages, return hot observable */\n return tx$\n .pipe(\n throttle(() => rx$, { leading: true, trailing: true }),\n tap(message => worker.postMessage(message)),\n switchMapTo(rx$),\n share()\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { SearchIndex, SearchTransformFn } from \"integrations\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Feature flags\n */\nexport type Feature =\n | \"tabs\" /* Tabs navigation */\n | \"instant\" /* Instant loading\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Configuration\n */\nexport interface Config {\n base: string /* Base URL */\n features: Feature[] /* Feature flags */\n search: {\n worker: string /* Worker URL */\n index?: Promise /* Promise resolving with index */\n transform?: SearchTransformFn /* Transformation function */\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Ensure that the given value is a valid configuration\n *\n * We could use `jsonschema` or any other schema validation framework, but that\n * would just add more bloat to the bundle, so we'll keep it plain and simple.\n *\n * @param config - Configuration\n *\n * @return Test result\n */\nexport function isConfig(config: any): config is Config {\n return typeof config === \"object\"\n && typeof config.base === \"string\"\n && typeof config.features === \"object\"\n && typeof config.search === \"object\"\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n// tslint:disable no-null-keyword\n\nimport { JSX as JSXInternal } from \"preact\"\nimport { keys } from \"ramda\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * HTML and SVG attributes\n */\ntype Attributes =\n & JSXInternal.HTMLAttributes\n & JSXInternal.SVGAttributes\n & Record\n\n/**\n * Child element\n */\ntype Child =\n | HTMLElement\n | SVGElement\n | Text\n | string\n | number\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create an element\n *\n * @param tagName - HTML or SVG tag\n *\n * @return Element\n */\nfunction createElement(tagName: string): HTMLElement | SVGElement {\n switch (tagName) {\n\n /* SVG elements */\n case \"svg\":\n case \"path\":\n return document.createElementNS(\"http://www.w3.org/2000/svg\", tagName)\n\n /* HTML elements */\n default:\n return document.createElement(tagName)\n }\n}\n\n/**\n * Set an attribute\n *\n * @param el - Element\n * @param name - Attribute name\n * @param value - Attribute value\n */\nfunction setAttribute(\n el: HTMLElement | SVGElement, name: string, value: string) {\n switch (name) {\n\n /* Attributes to be ignored */\n case \"xmlns\":\n break\n\n /* Attributes of SVG elements */\n case \"viewBox\":\n case \"d\":\n if (typeof value !== \"boolean\")\n el.setAttributeNS(null, name, value)\n else if (value)\n el.setAttributeNS(null, name, \"\")\n break\n\n /* Attributes of HTML elements */\n default:\n if (typeof value !== \"boolean\")\n el.setAttribute(name, value)\n else if (value)\n el.setAttribute(name, \"\")\n }\n}\n\n/**\n * Append a child node to an element\n *\n * @param el - Element\n * @param child - Child node(s)\n */\nfunction appendChild(\n el: HTMLElement | SVGElement, child: Child | Child[]\n): void {\n\n /* Handle primitive types (including raw HTML) */\n if (typeof child === \"string\" || typeof child === \"number\") {\n el.innerHTML += child.toString()\n\n /* Handle nodes */\n } else if (child instanceof Node) {\n el.appendChild(child)\n\n /* Handle nested children */\n } else if (Array.isArray(child)) {\n for (const node of child)\n appendChild(el, node)\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * JSX factory\n *\n * @param tagName - HTML or SVG tag\n * @param attributes - HTML attributes\n * @param children - Child elements\n *\n * @return Element\n */\nexport function h(\n tagName: string, attributes: Attributes | null, ...children: Child[]\n): HTMLElement | SVGElement {\n const el = createElement(tagName)\n\n /* Set attributes, if any */\n if (attributes)\n for (const attr of keys(attributes))\n setAttribute(el, attr, attributes[attr])\n\n /* Append child nodes */\n for (const child of children)\n appendChild(el, child)\n\n /* Return element */\n return el\n}\n\n/* ----------------------------------------------------------------------------\n * Namespace\n * 
------------------------------------------------------------------------- */\n\nexport declare namespace h {\n namespace JSX {\n type Element = HTMLElement | SVGElement\n type IntrinsicElements = JSXInternal.IntrinsicElements\n }\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, defer, of } from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Cache the last value emitted by an observable in session storage\n *\n * If the key is not found in session storage, the factory is executed and the\n * latest value emitted will automatically be persisted to sessions storage.\n * Note that the values emitted by the returned observable must be serializable\n * as `JSON`, or data will be lost.\n *\n * @template T - Value type\n *\n * @param key - Cache key\n * @param factory - Observable factory\n *\n * @return Value observable\n */\nexport function cache(\n key: string, factory: () => Observable\n): Observable {\n return defer(() => {\n const data = sessionStorage.getItem(key)\n if (data) {\n return of(JSON.parse(data) as T)\n\n /* Retrieve value from observable factory and write to storage */\n } else {\n const value$ = factory()\n value$.subscribe(value => {\n try {\n sessionStorage.setItem(key, JSON.stringify(value))\n } catch (err) {\n /* Uncritical, just swallow */\n }\n })\n\n /* Return value */\n return value$\n }\n })\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { getElementOrThrow } from \"browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Translation keys\n */\ntype TranslateKey =\n | \"clipboard.copy\" /* Copy to clipboard */\n | \"clipboard.copied\" /* Copied to clipboard */\n | \"search.config.lang\" /* Search language */\n | \"search.config.pipeline\" /* Search pipeline */\n | \"search.config.separator\" /* Search separator */\n | \"search.result.placeholder\" /* Type to start searching */\n | \"search.result.none\" /* No matching documents */\n | \"search.result.one\" /* 1 matching document */\n | \"search.result.other\" /* # matching documents */\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Translations\n */\nlet lang: Record\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Translate the given key\n *\n * @param key - Key to be translated\n * @param value - Value to be replaced\n *\n * @return Translation\n */\nexport function translate(key: TranslateKey, value?: string): string {\n if (typeof lang === \"undefined\") {\n const el = getElementOrThrow(\"#__lang\")\n lang = JSON.parse(el.textContent!)\n }\n if (typeof lang[key] === \"undefined\") {\n throw new ReferenceError(`Invalid translation: ${key}`)\n }\n return typeof value !== \"undefined\"\n ? lang[key].replace(\"#\", value)\n : lang[key]\n}\n\n/**\n * Truncate a string after the given number of characters\n *\n * This is not a very reasonable approach, since the summaries kind of suck.\n * It would be better to create something more intelligent, highlighting the\n * search occurrences and making a better summary out of it, but this note was\n * written three years ago, so who knows if we'll ever fix it.\n *\n * @param value - Value to be truncated\n * @param n - Number of characters\n *\n * @return Truncated value\n */\nexport function truncate(value: string, n: number): string {\n let i = n\n if (value.length > i) {\n while (value[i] !== \" \" && --i > 0); // tslint:disable-line\n return `${value.substring(0, i)}...`\n }\n return value\n}\n\n/**\n * Round a number for display with source facts\n *\n * This is a reverse engineered version of GitHub's weird rounding algorithm\n * for stars, forks and all other numbers. 
While all numbers below `1,000` are\n * returned as-is, bigger numbers are converted to fixed numbers:\n *\n * - `1,049` => `1k`\n * - `1,050` => `1.1k`\n * - `1,949` => `1.9k`\n * - `1,950` => `2k`\n *\n * @param value - Original value\n *\n * @return Rounded value\n */\nexport function round(value: number): string {\n if (value > 999) {\n const digits = +((value - 950) % 1000 > 99)\n return `${((value + 0.000001) / 1000).toFixed(digits)}k`\n } else {\n return value.toString()\n }\n}\n\n/**\n * Simple hash function\n *\n * @see https://bit.ly/2wsVjJ4 - Original source\n *\n * @param value - Value to be hashed\n *\n * @return Hash as 32bit integer\n */\nexport function hash(value: string): number {\n let h = 0\n for (let i = 0, len = value.length; i < len; i++) {\n h = ((h << 5) - h) + value.charCodeAt(i)\n h |= 0 // Convert to 32bit integer\n }\n return h\n }\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nexport * from \"./_\"\nexport * from \"./header\"\nexport * from \"./hero\"\nexport * from \"./main\"\nexport * from \"./navigation\"\nexport * from \"./search\"\nexport * from \"./shared\"\nexport * from \"./tabs\"\nexport * from \"./toc\"\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport * as ClipboardJS from \"clipboard\"\nimport { NEVER, Observable, Subject, fromEventPattern } from \"rxjs\"\nimport { mapTo, share, tap } from \"rxjs/operators\"\n\nimport { getElements } from \"browser\"\nimport { renderClipboardButton } from \"templates\"\nimport { translate } from \"utilities\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Setup options\n */\ninterface SetupOptions {\n document$: Observable /* Document observable */\n dialog$: Subject /* Dialog subject */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up clipboard\n *\n * This function implements the Clipboard.js integration and injects a button\n * into all code blocks when the document changes.\n *\n * @param options - Options\n *\n * @return Clipboard observable\n */\nexport function setupClipboard(\n { document$, dialog$ }: SetupOptions\n): Observable {\n if (!ClipboardJS.isSupported())\n return NEVER\n\n /* Inject 'copy-to-clipboard' buttons */\n document$.subscribe(() => {\n const blocks = getElements(\"pre > code\")\n blocks.forEach((block, index) => {\n const parent = block.parentElement!\n parent.id = `__code_${index}`\n parent.insertBefore(renderClipboardButton(parent.id), block)\n })\n })\n\n /* Initialize clipboard */\n const clipboard$ = fromEventPattern(next => {\n new ClipboardJS(\".md-clipboard\").on(\"success\", next)\n })\n .pipe(\n share()\n )\n\n /* Display notification for clipboard event */\n clipboard$\n .pipe(\n tap(ev => ev.clearSelection()),\n mapTo(translate(\"clipboard.copied\"))\n )\n .subscribe(dialog$)\n\n /* Return clipboard */\n return clipboard$\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Subject, animationFrameScheduler, of } from \"rxjs\"\nimport {\n delay,\n map,\n observeOn,\n switchMap,\n tap\n} from \"rxjs/operators\"\n\nimport { createElement } from \"browser\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Setup options\n */\ninterface SetupOptions {\n duration?: number /* Display duration (default: 2s) */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up dialog\n *\n * @param options - Options\n *\n * @return Dialog observable\n */\nexport function setupDialog(\n { duration }: SetupOptions = {}\n): Subject {\n const dialog$ = new Subject()\n\n /* Create dialog */\n const dialog = createElement(\"div\") // TODO: improve scoping\n dialog.classList.add(\"md-dialog\", \"md-typeset\")\n\n /* Display dialog */\n dialog$\n .pipe(\n switchMap(text => of(document.body) // useComponent(\"container\")\n .pipe(\n map(container => container.appendChild(dialog)),\n observeOn(animationFrameScheduler),\n delay(1), // Strangley it doesnt work when we push things to the new animation frame...\n tap(el => {\n el.innerHTML = text\n el.setAttribute(\"data-md-state\", \"open\")\n }),\n delay(duration || 2000),\n tap(el => el.removeAttribute(\"data-md-state\")),\n delay(400),\n tap(el => {\n el.innerHTML = \"\"\n el.remove()\n })\n )\n )\n )\n .subscribe()\n\n /* Return dialog */\n return dialog$\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { NEVER, Observable, Subject, fromEvent, merge, of } from \"rxjs\"\nimport { ajax } from \"rxjs//ajax\"\nimport {\n bufferCount,\n catchError,\n debounceTime,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n filter,\n map,\n pluck,\n sample,\n share,\n skip,\n switchMap,\n withLatestFrom\n} from \"rxjs/operators\"\n\nimport {\n Viewport,\n ViewportOffset,\n getElement,\n isAnchorLocation,\n isLocalLocation,\n replaceElement,\n setLocation,\n setLocationHash,\n setToggle,\n setViewportOffset\n} from \"browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * History state\n */\ninterface State {\n url: URL /* State URL */\n offset?: ViewportOffset /* State viewport offset */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Setup options\n */\ninterface SetupOptions {\n document$: Subject /* Document subject */\n location$: Subject /* Location subject */\n viewport$: Observable /* Viewport observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up instant loading\n *\n * When fetching, theoretically, we could use `responseType: \"document\"`, but\n * since all MkDocs links are relative, we need to make sure that the current\n * location matches the document we just loaded. Otherwise any relative links\n * in the document could use the old location.\n *\n * This is the reason why we need to synchronize history events and the process\n * of fetching the document for navigation changes (except `popstate` events):\n *\n * 1. Fetch document via `XMLHTTPRequest`\n * 2. Set new location via `history.pushState`\n * 3. Parse and emit fetched document\n *\n * For `popstate` events, we must not use `history.pushState`, or the forward\n * history will be irreversibly overwritten. 
In case the request fails, the\n * location change is dispatched regularly.\n *\n * @param options - Options\n */\nexport function setupInstantLoading(\n { document$, viewport$, location$ }: SetupOptions\n): void {\n\n /* Disable automatic scroll restoration */\n if (\"scrollRestoration\" in history)\n history.scrollRestoration = \"manual\"\n\n /* Hack: ensure that reloads restore viewport offset */\n fromEvent(window, \"beforeunload\")\n .subscribe(() => {\n history.scrollRestoration = \"auto\"\n })\n\n /* Hack: ensure absolute favicon link to omit 404s on document switch */\n const favicon = getElement(`link[rel=\"shortcut icon\"]`)\n if (typeof favicon !== \"undefined\")\n favicon.href = favicon.href // tslint:disable-line no-self-assignment\n\n /* Intercept link clicks and convert to state change */\n const state$ = fromEvent(document.body, \"click\")\n .pipe(\n filter(ev => !(ev.metaKey || ev.ctrlKey)),\n switchMap(ev => {\n if (ev.target instanceof HTMLElement) {\n const el = ev.target.closest(\"a\")\n if (el && !el.target && isLocalLocation(el)) {\n if (!isAnchorLocation(el))\n ev.preventDefault()\n return of(el)\n }\n }\n return NEVER\n }),\n map(el => ({ url: new URL(el.href) })),\n share()\n )\n\n /* Always close search on link click */\n state$.subscribe(() => {\n setToggle(\"search\", false)\n })\n\n /* Filter state changes to dispatch */\n const push$ = state$\n .pipe(\n filter(({ url }) => !isAnchorLocation(url)),\n share()\n )\n\n /* Intercept popstate events (history back and forward) */\n const pop$ = fromEvent(window, \"popstate\")\n .pipe(\n filter(ev => ev.state !== null),\n map(ev => ({\n url: new URL(location.href),\n offset: ev.state\n })),\n share()\n )\n\n /* Emit location change */\n merge(push$, pop$)\n .pipe(\n distinctUntilChanged((prev, next) => prev.url.href === next.url.href),\n pluck(\"url\")\n )\n .subscribe(location$)\n\n /* Fetch document on location change */\n const ajax$ = location$\n .pipe(\n distinctUntilKeyChanged(\"pathname\"),\n skip(1),\n switchMap(url => ajax({\n url: url.href,\n responseType: \"text\",\n withCredentials: true\n })\n .pipe(\n catchError(() => {\n setLocation(url)\n return NEVER\n })\n )\n )\n )\n\n /* Set new location as soon as the document was fetched */\n push$\n .pipe(\n sample(ajax$)\n )\n .subscribe(({ url }) => {\n history.pushState({}, \"\", url.toString())\n })\n\n /* Parse and emit document */\n const dom = new DOMParser()\n ajax$\n .pipe(\n map(({ response }) => dom.parseFromString(response, \"text/html\"))\n )\n .subscribe(document$)\n\n /* Intercept instant loading */\n const instant$ = merge(push$, pop$)\n .pipe(\n sample(document$)\n )\n\n // TODO: this must be combined with search scroll restoration on mobile\n instant$.subscribe(({ url, offset }) => {\n if (url.hash && !offset) {\n setLocationHash(url.hash)\n } else {\n setViewportOffset(offset || { y: 0 })\n }\n })\n\n /* Replace document metadata */\n instant$\n .pipe(\n withLatestFrom(document$)\n )\n .subscribe(([, { title, head }]) => {\n document.dispatchEvent(new CustomEvent(\"DOMContentSwitch\"))\n document.title = title\n\n /* Replace meta tags */\n for (const selector of [\n `link[rel=\"canonical\"]`,\n `meta[name=\"author\"]`,\n `meta[name=\"description\"]`\n ]) {\n const next = getElement(selector, head)\n const prev = getElement(selector, document.head)\n if (\n typeof next !== \"undefined\" &&\n typeof prev !== \"undefined\"\n ) {\n replaceElement(prev, next)\n }\n }\n })\n\n /* Debounce update of viewport offset */\n viewport$\n .pipe(\n 
debounceTime(250),\n distinctUntilKeyChanged(\"offset\")\n )\n .subscribe(({ offset }) => {\n history.replaceState(offset, \"\")\n })\n\n /* Set viewport offset from history */\n merge(state$, pop$)\n .pipe(\n bufferCount(2, 1),\n filter(([prev, next]) => {\n return prev.url.pathname === next.url.pathname\n && !isAnchorLocation(next.url)\n }),\n map(([, state]) => state)\n )\n .subscribe(({ offset }) => {\n setViewportOffset(offset || { y: 0 })\n })\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable } from \"rxjs\"\nimport {\n filter,\n map,\n share,\n withLatestFrom\n} from \"rxjs/operators\"\n\nimport {\n Key,\n getActiveElement,\n getElement,\n getElements,\n getToggle,\n isSusceptibleToKeyboard,\n setElementFocus,\n setElementSelection,\n setToggle,\n watchKeyboard\n} from \"browser\"\nimport { useComponent } from \"components\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Keyboard mode\n */\nexport type KeyboardMode =\n | \"global\" /* Global */\n | \"search\" /* Search is open */\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Keyboard\n */\nexport interface Keyboard extends Key {\n mode: KeyboardMode /* Keyboard mode */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up keyboard\n *\n * This function will set up the keyboard handlers and ensure that keys are\n * correctly propagated. Currently there are two modes:\n *\n * - `global`: This mode is active when the search is closed. It is intended\n * to assign hotkeys to specific functions of the site. Currently the search,\n * previous and next page can be triggered.\n *\n * - `search`: This mode is active when the search is open. It maps certain\n * navigational keys to offer search results that can be entirely navigated\n * through keyboard input.\n *\n * The keyboard observable is returned and can be used to monitor the keyboard\n * in order toassign further hotkeys to custom functions.\n *\n * @return Keyboard observable\n */\nexport function setupKeyboard(): Observable {\n const keyboard$ = watchKeyboard()\n .pipe(\n map(key => ({\n mode: getToggle(\"search\") ? 
\"search\" : \"global\",\n ...key\n })),\n filter(({ mode }) => {\n if (mode === \"global\") {\n const active = getActiveElement()\n if (typeof active !== \"undefined\")\n return !isSusceptibleToKeyboard(active)\n }\n return true\n }),\n share()\n )\n\n /* Set up search keyboard handlers */\n keyboard$\n .pipe(\n filter(({ mode }) => mode === \"search\"),\n withLatestFrom(\n useComponent(\"search-query\"),\n useComponent(\"search-result\")\n )\n )\n .subscribe(([key, query, result]) => {\n const active = getActiveElement()\n switch (key.type) {\n\n /* Enter: prevent form submission */\n case \"Enter\":\n if (active === query)\n key.claim()\n break\n\n /* Escape or Tab: close search */\n case \"Escape\":\n case \"Tab\":\n setToggle(\"search\", false)\n setElementFocus(query, false)\n break\n\n /* Vertical arrows: select previous or next search result */\n case \"ArrowUp\":\n case \"ArrowDown\":\n if (typeof active === \"undefined\") {\n setElementFocus(query)\n } else {\n const els = [query, ...getElements(\"[href]\", result)]\n const i = Math.max(0, (\n Math.max(0, els.indexOf(active)) + els.length + (\n key.type === \"ArrowUp\" ? -1 : +1\n )\n ) % els.length)\n setElementFocus(els[i])\n }\n\n /* Prevent scrolling of page */\n key.claim()\n break\n\n /* All other keys: hand to search query */\n default:\n if (query !== getActiveElement())\n setElementFocus(query)\n }\n })\n\n /* Set up global keyboard handlers */\n keyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\"),\n withLatestFrom(useComponent(\"search-query\"))\n )\n .subscribe(([key, query]) => {\n switch (key.type) {\n\n /* Open search and select query */\n case \"f\":\n case \"s\":\n case \"/\":\n setElementFocus(query)\n setElementSelection(query)\n key.claim()\n break\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getElement(\"[href][rel=prev]\")\n if (typeof prev !== \"undefined\")\n prev.click()\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getElement(\"[href][rel=next]\")\n if (typeof next !== \"undefined\")\n next.click()\n break\n }\n })\n\n /* Return keyboard */\n return keyboard$\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { EMPTY, Observable, of } from \"rxjs\"\nimport {\n distinctUntilChanged,\n map,\n scan,\n shareReplay,\n switchMap\n} from \"rxjs/operators\"\n\nimport { getElement, replaceElement } from \"browser\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Component\n */\nexport type Component =\n | \"announce\" /* Announcement bar */\n | \"container\" /* Container */\n | \"header\" /* Header */\n | \"header-title\" /* Header title */\n | \"hero\" /* Hero */\n | \"main\" /* Main area */\n | \"navigation\" /* Navigation */\n | \"search\" /* Search */\n | \"search-query\" /* Search input */\n | \"search-reset\" /* Search reset */\n | \"search-result\" /* Search results */\n | \"skip\" /* Skip link */\n | \"tabs\" /* Tabs */\n | \"toc\" /* Table of contents */\n\n/**\n * Component map\n */\nexport type ComponentMap = {\n [P in Component]?: HTMLElement\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n document$: Observable /* Document observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Component map observable\n */\nlet components$: Observable\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up bindings to components with given names\n *\n * This function will maintain bindings to the elements identified by the given\n * names in-between document switches and update the elements in-place.\n *\n * @param names - Component names\n * @param options - Options\n */\nexport function setupComponents(\n names: Component[], { document$ }: WatchOptions\n): void {\n components$ = document$\n .pipe(\n\n /* Build component map */\n map(document => names.reduce((components, name) => {\n const el = getElement(`[data-md-component=${name}]`, document)\n return {\n ...components,\n ...typeof el !== \"undefined\" ? { [name]: el } : {}\n }\n }, {})),\n\n /* Re-compute component map on document switch */\n scan((prev, next) => {\n for (const name of names) {\n switch (name) {\n\n /* Top-level components: update */\n case \"announce\":\n case \"header-title\":\n case \"container\":\n case \"skip\":\n if (name in prev && typeof prev[name] !== \"undefined\") {\n replaceElement(prev[name]!, next[name]!)\n prev[name] = next[name]\n }\n break\n\n /* All other components: rebind */\n default:\n if (typeof next[name] !== \"undefined\")\n prev[name] = getElement(`[data-md-component=${name}]`)\n else\n delete prev[name]\n }\n }\n return prev\n }),\n\n /* Convert to hot observable */\n shareReplay(1)\n )\n}\n\n/**\n * Retrieve a component\n *\n * The returned observable will only re-emit if the element changed, i.e. 
if\n * it was replaced from a document which was switched to.\n *\n * @template T - Element type\n *\n * @param name - Component name\n *\n * @return Component observable\n */\nexport function useComponent(\n name: \"search-query\"\n): Observable\nexport function useComponent(\n name: Component\n): Observable\nexport function useComponent(\n name: Component\n): Observable {\n return components$\n .pipe(\n switchMap(components => (\n typeof components[name] !== \"undefined\"\n ? of(components[name] as T)\n : EMPTY\n )),\n distinctUntilChanged()\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set anchor blur\n *\n * @param el - Anchor element\n * @param value - Whether the anchor is blurred\n */\nexport function setAnchorBlur(\n el: HTMLElement, value: boolean\n): void {\n el.setAttribute(\"data-md-state\", value ? 
\"blur\" : \"\")\n}\n\n/**\n * Reset anchor blur\n *\n * @param el - Anchor element\n */\nexport function resetAnchorBlur(\n el: HTMLElement\n): void {\n el.removeAttribute(\"data-md-state\")\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Set anchor active\n *\n * @param el - Anchor element\n * @param value - Whether the anchor is active\n */\nexport function setAnchorActive(\n el: HTMLElement, value: boolean\n): void {\n el.classList.toggle(\"md-nav__link--active\", value)\n}\n\n/**\n * Reset anchor active\n *\n * @param el - Anchor element\n */\nexport function resetAnchorActive(\n el: HTMLElement\n): void {\n el.classList.remove(\"md-nav__link--active\")\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nexport * from \"./sidebar\"\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { h, translate } from \"utilities\"\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * CSS classes\n */\nconst css = {\n container: \"md-clipboard md-icon\"\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Path of `file-search-outline` icon\n */\nconst path =\n \"M19,21H8V7H19M19,5H8A2,2 0 0,0 6,7V21A2,2 0 0,0 8,23H19A2,2 0 0,0 \" +\n \"21,21V7A2,2 0 0,0 19,5M16,1H4A2,2 0 0,0 2,3V17H4V3H16V1Z\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a 'copy-to-clipboard' button\n *\n * @param id - Unique identifier\n *\n * @return Element\n */\nexport function renderClipboardButton(\n id: string\n) {\n return (\n code`}\n >\n \n \n \n \n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { SearchResult } from \"integrations/search\"\nimport { h, truncate } from \"utilities\"\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * CSS classes\n */\nconst css = {\n item: \"md-search-result__item\",\n link: \"md-search-result__link\",\n article: \"md-search-result__article md-search-result__article--document\",\n section: \"md-search-result__article\",\n title: \"md-search-result__title\",\n teaser: \"md-search-result__teaser\"\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Path of `content-copy` icon\n */\nconst path =\n \"M14,2H6A2,2 0 0,0 4,4V20A2,2 0 0,0 6,22H13C12.59,21.75 12.2,21.44 \" +\n \"11.86,21.1C11.53,20.77 11.25,20.4 11,20H6V4H13V9H18V10.18C18.71,10.34 \" +\n \"19.39,10.61 20,11V8L14,2M20.31,18.9C21.64,16.79 21,14 \" +\n \"18.91,12.68C16.8,11.35 14,12 12.69,14.08C11.35,16.19 12,18.97 \" +\n \"14.09,20.3C15.55,21.23 17.41,21.23 \" +\n \"18.88,20.32L22,23.39L23.39,22L20.31,18.9M16.5,19A2.5,2.5 0 0,1 \" +\n \"14,16.5A2.5,2.5 0 0,1 16.5,14A2.5,2.5 0 0,1 19,16.5A2.5,2.5 0 0,1 16.5,19Z\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a search result\n *\n * @param result - Search result\n *\n * @return Element\n */\nexport function renderSearchResult(\n { article, sections }: SearchResult\n) {\n\n /* Render icon */\n const icon = (\n
\n \n \n \n
\n )\n\n /* Render article and sections */\n const children = [article, ...sections].map(document => {\n const { location, title, text } = document\n return (\n \n
\n {!(\"parent\" in document) && icon}\n

{title}

\n {text.length > 0 &&

{truncate(text, 320)}

}\n
\n
\n )\n })\n\n /* Render search result */\n return (\n
  • \n {children}\n
  • \n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { SourceFacts } from \"patches/source\"\nimport { h } from \"utilities\"\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * CSS classes\n */\nconst css = {\n facts: \"md-source__facts\",\n fact: \"md-source__fact\"\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render source facts\n *\n * @param facts - Source facts\n *\n * @return Element\n */\nexport function renderSource(\n facts: SourceFacts\n) {\n const children = facts.map(fact => (\n
<li class={css.fact}>{fact}</li>\n ))\n return (\n <ul class={css.facts}>\n {children}\n </ul>
    \n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { h } from \"utilities\"\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * CSS classes\n */\nconst css = {\n wrapper: \"md-typeset__scrollwrap\",\n table: \"md-typeset__table\"\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a table inside a wrapper to improve scrolling on mobile\n *\n * @param table - Table element\n *\n * @return Element\n */\nexport function renderTable(\n table: HTMLTableElement\n) {\n return (\n
<div class={css.wrapper}>\n <div class={css.table}>\n {table}\n </div>\n </div>
    \n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set sidebar offset\n *\n * @param el - Sidebar element\n * @param value - Sidebar offset\n */\nexport function setSidebarOffset(\n el: HTMLElement, value: number\n): void {\n el.style.top = `${value}px`\n}\n\n/**\n * Reset sidebar offset\n *\n * @param el - Sidebar element\n */\nexport function resetSidebarOffset(\n el: HTMLElement\n): void {\n el.style.top = \"\"\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Set sidebar height\n *\n * @param el - Sidebar element\n * @param value - Sidebar height\n */\nexport function setSidebarHeight(\n el: HTMLElement, value: number\n): void {\n el.style.height = `${value}px`\n}\n\n/**\n * Reset sidebar height\n *\n * @param el - Sidebar element\n */\nexport function resetSidebarHeight(\n el: HTMLElement\n): void {\n el.style.height = \"\"\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nexport * from \"./_\"\nexport * from \"./react\"\nexport * from \"./set\"\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n ArticleDocument,\n SearchDocumentMap,\n SectionDocument,\n setupSearchDocumentMap\n} from \"../document\"\nimport {\n SearchHighlightFactoryFn,\n setupSearchHighlighter\n} from \"../highlighter\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search index configuration\n */\nexport interface SearchIndexConfig {\n lang: string[] /* Search languages */\n separator: string /* Search separator */\n}\n\n/**\n * Search index document\n */\nexport interface SearchIndexDocument {\n location: string /* Document location */\n title: string /* Document title */\n text: string /* Document text */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search index pipeline function\n */\nexport type SearchIndexPipelineFn =\n | \"stemmer\" /* Stemmer */\n | \"stopWordFilter\" /* Stop word filter */\n | \"trimmer\" /* Trimmer */\n\n/**\n * Search index pipeline\n */\nexport type SearchIndexPipeline = SearchIndexPipelineFn[]\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search index\n *\n * This interfaces describes the format of the `search_index.json` file which\n * is automatically built by the MkDocs search plugin.\n */\nexport interface SearchIndex {\n config: SearchIndexConfig /* Search index configuration */\n docs: SearchIndexDocument[] /* Search index documents */\n index?: object | string /* Prebuilt or serialized index */\n pipeline?: SearchIndexPipeline /* Search index pipeline */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search result\n */\nexport interface SearchResult {\n article: ArticleDocument /* Article document */\n sections: SectionDocument[] /* Section documents */\n}\n\n/* ----------------------------------------------------------------------------\n * Class\n * 
------------------------------------------------------------------------- */\n\n/**\n * Search\n *\n * Note that `lunr` is injected via Webpack, as it will otherwise also be\n * bundled in the application bundle.\n */\nexport class Search {\n\n /**\n * Search document mapping\n *\n * A mapping of URLs (including hash fragments) to the actual articles and\n * sections of the documentation. The search document mapping must be created\n * regardless of whether the index was prebuilt or not, as `lunr` itself will\n * only store the actual index.\n */\n protected documents: SearchDocumentMap\n\n /**\n * Search highlight factory function\n */\n protected highlight: SearchHighlightFactoryFn\n\n /**\n * The `lunr` search index\n */\n protected index: lunr.Index\n\n /**\n * Create the search integration\n *\n * @param data - Search index\n */\n public constructor({ config, docs, pipeline, index }: SearchIndex) {\n this.documents = setupSearchDocumentMap(docs)\n this.highlight = setupSearchHighlighter(config)\n\n /* If no index was given, create it */\n if (typeof index === \"undefined\") {\n this.index = lunr(function() {\n pipeline = pipeline || [\"trimmer\", \"stopWordFilter\"]\n\n /* Set up pipeline according to configuration */\n this.pipeline.reset()\n for (const fn of pipeline)\n this.pipeline.add(lunr[fn])\n\n /* Set up alternate search languages */\n if (config.lang.length === 1 && config.lang[0] !== \"en\") {\n this.use((lunr as any)[config.lang[0]])\n } else if (config.lang.length > 1) {\n this.use((lunr as any).multiLanguage(...config.lang))\n }\n\n /* Set up fields and reference */\n this.field(\"title\", { boost: 1000 })\n this.field(\"text\")\n this.ref(\"location\")\n\n /* Index documents */\n for (const doc of docs)\n this.add(doc)\n })\n\n /* Prebuilt or serialized index */\n } else {\n this.index = lunr.Index.load(\n typeof index === \"string\"\n ? JSON.parse(index)\n : index\n )\n }\n }\n\n /**\n * Search for matching documents\n *\n * The search index which MkDocs provides is divided up into articles, which\n * contain the whole content of the individual pages, and sections, which only\n * contain the contents of the subsections obtained by breaking the individual\n * pages up at `h1` ... `h6`. As there may be many sections on different pages\n * with identical titles (for example within this very project, e.g. \"Usage\"\n * or \"Installation\"), they need to be put into the context of the containing\n * page. 
For this reason, section results are grouped within their respective\n * articles which are the top-level results that are returned.\n *\n * @param value - Query value\n *\n * @return Search results\n */\n public query(value: string): SearchResult[] {\n if (value) {\n try {\n\n /* Group sections by containing article */\n const groups = this.index.search(value)\n .reduce((results, result) => {\n const document = this.documents.get(result.ref)\n if (typeof document !== \"undefined\") {\n if (\"parent\" in document) {\n const ref = document.parent.location\n results.set(ref, [...results.get(ref) || [], result])\n } else {\n const ref = document.location\n results.set(ref, results.get(ref) || [])\n }\n }\n return results\n }, new Map())\n\n /* Create highlighter for query */\n const fn = this.highlight(value)\n\n /* Map groups to search documents */\n return [...groups].map(([ref, sections]) => ({\n article: fn(this.documents.get(ref) as ArticleDocument),\n sections: sections.map(section => {\n return fn(this.documents.get(section.ref) as SectionDocument)\n })\n }))\n\n /* Log errors to console (for now) */\n } catch (err) {\n // tslint:disable-next-line no-console\n console.warn(`Invalid query: ${value} – see https://bit.ly/2s3ChXG`)\n }\n }\n\n /* Return nothing in case of error or empty query */\n return []\n }\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport * as escapeHTML from \"escape-html\"\n\nimport { SearchIndexDocument } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * A top-level article\n */\nexport interface ArticleDocument extends SearchIndexDocument {\n linked: boolean /* Whether the section was linked */\n}\n\n/**\n * A section of an article\n */\nexport interface SectionDocument extends SearchIndexDocument {\n parent: ArticleDocument /* Parent article */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search document\n */\nexport type SearchDocument =\n | ArticleDocument\n | SectionDocument\n\n/**\n * Search document mapping\n */\nexport type SearchDocumentMap = Map\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search document mapping\n *\n * @param docs - Search index documents\n *\n * @return Search document map\n */\nexport function setupSearchDocumentMap(\n docs: SearchIndexDocument[]\n): SearchDocumentMap {\n const documents = new Map()\n for (const doc of docs) {\n const [path, hash] = doc.location.split(\"#\")\n\n /* Extract location and title */\n const location = doc.location\n const title = doc.title\n\n /* Escape and cleanup text */\n const text = escapeHTML(doc.text)\n .replace(/\\s+(?=[,.:;!?])/g, \"\")\n .replace(/\\s+/g, \" \")\n\n /* Handle section */\n if (hash) {\n const parent = documents.get(path) as ArticleDocument\n\n /* Ignore first section, override article */\n if (!parent.linked) {\n parent.title = doc.title\n parent.text = text\n parent.linked = true\n\n /* Add subsequent section */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n parent\n })\n }\n\n /* Add article */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n linked: false\n })\n }\n }\n return documents\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { SearchIndexConfig } from \"../_\"\nimport { SearchDocument } from \"../document\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search highlight function\n *\n * @template T - Search document type\n *\n * @param document - Search document\n *\n * @return Highlighted document\n */\nexport type SearchHighlightFn = <\n T extends SearchDocument\n>(document: Readonly) => T\n\n/**\n * Search highlight factory function\n *\n * @param value - Query value\n *\n * @return Search highlight function\n */\nexport type SearchHighlightFactoryFn = (value: string) => SearchHighlightFn\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search highlighter\n *\n * @param config - Search index configuration\n *\n * @return Search highlight factory function\n */\nexport function setupSearchHighlighter(\n config: SearchIndexConfig\n): SearchHighlightFactoryFn {\n const separator = new RegExp(config.separator, \"img\")\n const highlight = (_: unknown, data: string, term: string) => {\n return `${data}${term}`\n }\n\n /* Return factory function */\n return (value: string) => {\n value = value\n .replace(/[\\s*+-:~^]+/g, \" \")\n .trim()\n\n /* Create search term match expression */\n const match = new RegExp(`(^|${config.separator})(${\n value\n .replace(/[|\\\\{}()[\\]^$+*?.-]/g, \"\\\\$&\")\n .replace(separator, \"|\")\n })`, \"img\")\n\n /* Highlight document */\n return document => ({\n ...document,\n title: document.title.replace(match, highlight),\n text: document.text.replace(match, highlight)\n })\n }\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search transformation function\n *\n * @param value - Query value\n *\n * @return Transformed query value\n */\nexport type SearchTransformFn = (value: string) => string\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Default transformation function\n *\n * Rogue control characters are filtered before handing the query to the\n * search index, as `lunr` will throw otherwise.\n *\n * @param value - Query value\n *\n * @return Transformed query value\n */\nexport function defaultTransform(value: string): string {\n return value\n .replace(/(?:^|\\s+)[*+-:^~]+(?=\\s+|$)/g, \"\")\n .trim()\n .replace(/\\s+|\\b$/g, \"* \")\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A RTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { SearchIndex, SearchResult } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search message type\n */\nexport const enum SearchMessageType {\n SETUP, /* Search index setup */\n READY, /* Search index ready */\n QUERY, /* Search query */\n RESULT /* Search results */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * A message containing the data necessary to setup the search index\n */\nexport interface SearchSetupMessage {\n type: SearchMessageType.SETUP /* Message type */\n data: SearchIndex /* Message data */\n}\n\n/**\n * A message indicating the search index is ready\n */\nexport interface SearchReadyMessage {\n type: SearchMessageType.READY /* Message type */\n}\n\n/**\n * A message containing a search query\n */\nexport interface SearchQueryMessage {\n type: SearchMessageType.QUERY /* Message type */\n data: string /* Message data */\n}\n\n/**\n * A message containing results for a search query\n */\nexport interface SearchResultMessage {\n type: SearchMessageType.RESULT /* Message type */\n data: SearchResult[] /* Message data */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * A message exchanged with the search worker\n */\nexport type SearchMessage =\n | SearchSetupMessage\n | SearchReadyMessage\n | SearchQueryMessage\n | SearchResultMessage\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Type guard for search setup messages\n *\n * @param message - Search worker message\n *\n * @return Test result\n */\nexport function isSearchSetupMessage(\n message: SearchMessage\n): message is SearchSetupMessage {\n return message.type === SearchMessageType.SETUP\n}\n\n/**\n * Type guard for search ready messages\n *\n * @param message - Search worker message\n *\n * @return Test result\n */\nexport function isSearchReadyMessage(\n message: SearchMessage\n): message is SearchReadyMessage {\n return message.type === SearchMessageType.READY\n}\n\n/**\n * Type guard for search query messages\n *\n * @param message - Search worker message\n *\n * @return Test result\n */\nexport function isSearchQueryMessage(\n message: SearchMessage\n): message is SearchQueryMessage {\n return message.type === SearchMessageType.QUERY\n}\n\n/**\n * Type guard for search result messages\n *\n * @param message - Search worker message\n *\n * @return Test result\n */\nexport function isSearchResultMessage(\n message: SearchMessage\n): message is SearchResultMessage {\n return message.type === SearchMessageType.RESULT\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom 
the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A RTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { identity } from \"ramda\"\nimport { Observable, Subject, asyncScheduler } from \"rxjs\"\nimport {\n map,\n observeOn,\n shareReplay,\n withLatestFrom\n} from \"rxjs/operators\"\n\nimport { WorkerHandler, watchWorker } from \"browser\"\nimport { translate } from \"utilities\"\n\nimport { SearchIndex, SearchIndexPipeline } from \"../../_\"\nimport {\n SearchMessage,\n SearchMessageType,\n SearchSetupMessage,\n isSearchResultMessage\n} from \"../message\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Setup options\n */\ninterface SetupOptions {\n index$: Observable /* Search index observable */\n base$: Observable /* Location base observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up search index\n *\n * @param data - Search index\n *\n * @return Search index\n */\nfunction setupSearchIndex(\n { config, docs, index }: SearchIndex\n): SearchIndex {\n\n /* Override default language with value from translation */\n if (config.lang.length === 1 && config.lang[0] === \"en\")\n config.lang = [translate(\"search.config.lang\")]\n\n /* Override default separator with value from translation */\n if (config.separator === \"[\\s\\-]+\")\n config.separator = translate(\"search.config.separator\")\n\n /* Set pipeline from translation */\n const pipeline = translate(\"search.config.pipeline\")\n .split(/\\s*,\\s*/)\n .filter(identity) as SearchIndexPipeline\n\n /* Return search index after defaulting */\n return { config, docs, index, pipeline }\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up search web worker\n *\n * This function will create a web worker to set up and query the search index\n * which is done using `lunr`. 
The index must be passed as an observable to\n * enable hacks like _localsearch_ via search index embedding as JSON.\n *\n * @param url - Worker URL\n * @param options - Options\n *\n * @return Worker handler\n */\nexport function setupSearchWorker(\n url: string, { index$, base$ }: SetupOptions\n): WorkerHandler {\n const worker = new Worker(url)\n\n /* Create communication channels and resolve relative links */\n const tx$ = new Subject()\n const rx$ = watchWorker(worker, { tx$ })\n .pipe(\n withLatestFrom(base$),\n map(([message, base]) => {\n if (isSearchResultMessage(message)) {\n for (const { article, sections } of message.data) {\n article.location = `${base}/${article.location}`\n for (const section of sections)\n section.location = `${base}/${section.location}`\n }\n }\n return message\n }),\n shareReplay(1)\n )\n\n /* Set up search index */\n index$\n .pipe(\n map(index => ({\n type: SearchMessageType.SETUP,\n data: setupSearchIndex(index)\n })),\n observeOn(asyncScheduler)\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Return worker handler */\n return { tx$, rx$ }\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nexport * from \"./_\"\nexport * from \"./react\"\nexport * from \"./set\"\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n MonoTypeOperatorFunction,\n Observable,\n animationFrameScheduler,\n combineLatest,\n pipe\n} from \"rxjs\"\nimport {\n distinctUntilChanged,\n finalize,\n map,\n observeOn,\n tap,\n withLatestFrom\n} from \"rxjs/operators\"\n\nimport { Viewport } from \"browser\"\n\nimport { Header } from \"../../../header\"\nimport { Main } from \"../../../main\"\nimport { Sidebar } from \"../_\"\nimport {\n resetSidebarHeight,\n resetSidebarOffset,\n setSidebarHeight,\n setSidebarOffset\n} from \"../set\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n main$: Observable
<Main> /* Main area observable */\n viewport$: Observable<Viewport> /* Viewport observable */\n}\n\n/**\n * Apply options\n */\ninterface ApplyOptions {\n header$: Observable<Header>
    /* Header observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch sidebar\n *\n * This function returns an observable that computes the visual parameters of\n * the sidebar which depends on the vertical viewport offset, as well as the\n * height of the main area. When the page is scrolled beyond the header, the\n * sidebar is locked and fills the remaining space.\n *\n * @param el - Sidebar element\n * @param options - Options\n *\n * @return Sidebar observable\n */\nexport function watchSidebar(\n el: HTMLElement, { main$, viewport$ }: WatchOptions\n): Observable {\n const adjust = el.parentElement!.offsetTop\n - el.parentElement!.parentElement!.offsetTop\n\n /* Compute the sidebar's available height and if it should be locked */\n return combineLatest([main$, viewport$])\n .pipe(\n map(([{ offset, height }, { offset: { y } }]) => {\n height = height\n + Math.min(adjust, Math.max(0, y - offset))\n - adjust\n return {\n height,\n lock: y >= offset + adjust\n }\n }),\n distinctUntilChanged((a, b) => {\n return a.height === b.height\n && a.lock === b.lock\n })\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Apply sidebar\n *\n * @param el - Sidebar element\n * @param options - Options\n *\n * @return Operator function\n */\nexport function applySidebar(\n el: HTMLElement, { header$ }: ApplyOptions\n): MonoTypeOperatorFunction {\n return pipe(\n\n /* Defer repaint to next animation frame */\n observeOn(animationFrameScheduler),\n withLatestFrom(header$),\n tap(([{ height, lock }, { height: offset }]) => {\n setSidebarHeight(el, height)\n\n /* Set offset in locked state depending on header height */\n if (lock)\n setSidebarOffset(el, offset)\n else\n resetSidebarOffset(el)\n }),\n\n /* Re-map to sidebar */\n map(([sidebar]) => sidebar),\n\n /* Reset on complete or error */\n finalize(() => {\n resetSidebarOffset(el)\n resetSidebarHeight(el)\n })\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nexport * from \"./_\"\nexport * from \"./anchor\"\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n OperatorFunction,\n combineLatest,\n of,\n pipe\n} from \"rxjs\"\nimport { map, switchMap } from \"rxjs/operators\"\n\nimport { Viewport, getElements } from \"browser\"\n\nimport { Header } from \"../../header\"\nimport { Main } from \"../../main\"\nimport {\n Sidebar,\n applySidebar,\n watchSidebar\n} from \"../../shared\"\nimport {\n AnchorList,\n applyAnchorList,\n watchAnchorList\n} from \"../anchor\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Table of contents for [tablet -]\n */\ninterface TableOfContentsBelowTablet {} // tslint:disable-line\n\n/**\n * Table of contents for [tablet +]\n */\ninterface TableOfContentsAboveTablet {\n sidebar: Sidebar /* Sidebar */\n anchors: AnchorList /* Anchor list */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Table of contents\n */\nexport type TableOfContents =\n | TableOfContentsBelowTablet\n | TableOfContentsAboveTablet\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n header$: Observable
<Header> /* Header observable */\n main$: Observable<Main>
    /* Main area observable */\n viewport$: Observable /* Viewport observable */\n tablet$: Observable /* Tablet media observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount table of contents from source observable\n *\n * @param options - Options\n *\n * @return Operator function\n */\nexport function mountTableOfContents(\n { header$, main$, viewport$, tablet$ }: MountOptions\n): OperatorFunction {\n return pipe(\n switchMap(el => tablet$\n .pipe(\n switchMap(tablet => {\n\n /* [tablet +]: Mount table of contents in sidebar */\n if (tablet) {\n const els = getElements(\".md-nav__link\", el)\n\n /* Watch and apply sidebar */\n const sidebar$ = watchSidebar(el, { main$, viewport$ })\n .pipe(\n applySidebar(el, { header$ })\n )\n\n /* Watch and apply anchor list (scroll spy) */\n const anchors$ = watchAnchorList(els, { header$, viewport$ })\n .pipe(\n applyAnchorList(els)\n )\n\n /* Combine into single hot observable */\n return combineLatest([sidebar$, anchors$])\n .pipe(\n map(([sidebar, anchors]) => ({ sidebar, anchors }))\n )\n\n /* [tablet -]: Unmount table of contents */\n } else {\n return of({})\n }\n })\n )\n )\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { reverse } from \"ramda\"\nimport {\n MonoTypeOperatorFunction,\n Observable,\n animationFrameScheduler,\n combineLatest,\n pipe\n} from \"rxjs\"\nimport {\n bufferCount,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n finalize,\n map,\n observeOn,\n scan,\n startWith,\n switchMap,\n tap\n} from \"rxjs/operators\"\n\nimport { Viewport, getElement, watchElementSize } from \"browser\"\n\nimport { Header } from \"../../../header\"\nimport { AnchorList } from \"../_\"\nimport {\n resetAnchorActive,\n resetAnchorBlur,\n setAnchorActive,\n setAnchorBlur\n} from \"../set\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n header$: Observable
    /* Header observable */\n viewport$: Observable /* Viewport observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch anchor list\n *\n * This is effectively a scroll-spy implementation which will account for the\n * fixed header and automatically re-calculate anchor offsets when the viewport\n * is resized. The returned observable will only emit if the anchor list needs\n * to be repainted.\n *\n * This implementation tracks an anchor element's entire path starting from its\n * level up to the top-most anchor element, e.g. `[h3, h2, h1]`. Although the\n * Material theme currently doesn't make use of this information, it enables\n * the styling of the entire hierarchy through customization.\n *\n * Note that the current anchor is the last item of the `prev` anchor list.\n *\n * @param els - Anchor elements\n * @param options - Options\n *\n * @return Anchor list observable\n */\nexport function watchAnchorList(\n els: HTMLAnchorElement[], { header$, viewport$ }: WatchOptions\n): Observable {\n const table = new Map()\n for (const el of els) {\n const id = decodeURIComponent(el.hash.substring(1))\n const target = getElement(`[id=\"${id}\"]`)\n if (typeof target !== \"undefined\")\n table.set(el, target)\n }\n\n /* Compute necessary adjustment for header */\n const adjust$ = header$\n .pipe(\n map(header => 18 + header.height)\n )\n\n /* Compute partition of previous and next anchors */\n const partition$ = watchElementSize(document.body)\n .pipe(\n distinctUntilKeyChanged(\"height\"),\n\n /* Build index to map anchor paths to vertical offsets */\n map(() => {\n let path: HTMLAnchorElement[] = []\n return [...table].reduce((index, [anchor, target]) => {\n while (path.length) {\n const last = table.get(path[path.length - 1])!\n if (last.tagName >= target.tagName) {\n path.pop()\n } else {\n break\n }\n }\n\n /* If the current anchor is hidden, continue with its parent */\n let offset = target.offsetTop\n while (!offset && target.parentElement) {\n target = target.parentElement\n offset = target.offsetTop\n }\n\n /* Map reversed anchor path to vertical offset */\n return index.set(\n reverse(path = [...path, anchor]),\n offset\n )\n }, new Map())\n }),\n\n /* Re-compute partition when viewport offset changes */\n switchMap(index => combineLatest([adjust$, viewport$])\n .pipe(\n scan(([prev, next], [adjust, { offset: { y } }]) => {\n\n /* Look forward */\n while (next.length) {\n const [, offset] = next[0]\n if (offset - adjust < y) {\n prev = [...prev, next.shift()!]\n } else {\n break\n }\n }\n\n /* Look backward */\n while (prev.length) {\n const [, offset] = prev[prev.length - 1]\n if (offset - adjust >= y) {\n next = [prev.pop()!, ...next]\n } else {\n break\n }\n }\n\n /* Return partition */\n return [prev, next]\n }, [[], [...index]]),\n distinctUntilChanged((a, b) => {\n return a[0] === b[0]\n && a[1] === b[1]\n })\n )\n )\n )\n\n /* Compute and return anchor list migrations */\n return partition$\n .pipe(\n map(([prev, next]) => ({\n prev: prev.map(([path]) => path),\n next: next.map(([path]) => path)\n })),\n\n /* Extract anchor list migrations */\n startWith({ prev: [], next: [] }),\n bufferCount(2, 1),\n map(([a, b]) => {\n\n /* Moving down */\n if (a.prev.length < b.prev.length) {\n return {\n prev: b.prev.slice(Math.max(0, a.prev.length - 1), b.prev.length),\n next: []\n }\n\n /* Moving up */\n } else {\n return {\n prev: 
b.prev.slice(-1),\n next: b.next.slice(0, b.next.length - a.next.length)\n }\n }\n })\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Apply anchor list\n *\n * @param els - Anchor elements\n *\n * @return Operator function\n */\nexport function applyAnchorList(\n els: HTMLAnchorElement[]\n): MonoTypeOperatorFunction {\n return pipe(\n\n /* Defer repaint to next animation frame */\n observeOn(animationFrameScheduler),\n tap(({ prev, next }) => {\n\n /* Look forward */\n for (const [el] of next) {\n resetAnchorActive(el)\n resetAnchorBlur(el)\n }\n\n /* Look backward */\n prev.forEach(([el], index) => {\n setAnchorActive(el, index === prev.length - 1)\n setAnchorBlur(el, true)\n })\n }),\n\n /* Reset on complete or error */\n finalize(() => {\n for (const el of els) {\n resetAnchorActive(el)\n resetAnchorBlur(el)\n }\n })\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, OperatorFunction, combineLatest, pipe } from \"rxjs\"\nimport {\n filter,\n map,\n mapTo,\n sample,\n startWith,\n switchMap,\n take\n} from \"rxjs/operators\"\n\nimport { WorkerHandler } from \"browser\"\nimport {\n SearchMessage,\n SearchResult,\n isSearchQueryMessage,\n isSearchReadyMessage\n} from \"integrations/search\"\n\nimport { SearchQuery } from \"../query\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search status\n */\nexport type SearchStatus =\n | \"waiting\" /* Search waiting for initialization */\n | \"ready\" /* Search ready */\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search\n */\nexport interface Search {\n status: SearchStatus /* Search status */\n query: SearchQuery /* Search query */\n result: SearchResult[] /* Search result list */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n query$: Observable /* Search query observable */\n reset$: Observable /* Search reset observable */\n result$: Observable /* Search result observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * 
------------------------------------------------------------------------- */\n\n/**\n * Mount search from source observable\n *\n * @param handler - Worker handler\n * @param options - Options\n *\n * @return Operator function\n */\nexport function mountSearch(\n { rx$, tx$ }: WorkerHandler,\n { query$, reset$, result$ }: MountOptions\n): OperatorFunction {\n return pipe(\n switchMap(() => {\n\n /* Compute search status */\n const status$ = rx$\n .pipe(\n filter(isSearchReadyMessage),\n mapTo(\"ready\"),\n startWith(\"waiting\")\n ) as Observable\n\n /* Re-emit the latest query when search is ready */\n tx$\n .pipe(\n filter(isSearchQueryMessage),\n sample(status$),\n take(1)\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Combine into single observable */\n return combineLatest([status$, query$, result$, reset$])\n .pipe(\n map(([status, query, result]) => ({\n status,\n query,\n result\n }))\n )\n })\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { OperatorFunction, pipe } from \"rxjs\"\nimport {\n distinctUntilKeyChanged,\n map,\n switchMap\n} from \"rxjs/operators\"\n\nimport { WorkerHandler, setToggle } from \"browser\"\nimport {\n SearchMessage,\n SearchMessageType,\n SearchQueryMessage,\n SearchTransformFn\n} from \"integrations\"\n\nimport { watchSearchQuery } from \"../react\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search query\n */\nexport interface SearchQuery {\n value: string /* Query value */\n focus: boolean /* Query focus */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n transform?: SearchTransformFn /* Transformation function */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search query from source observable\n *\n * @param handler - Worker handler\n * @param options - Options\n *\n * @return Operator function\n */\nexport function mountSearchQuery(\n { tx$ }: WorkerHandler, options: MountOptions = {}\n): OperatorFunction {\n return pipe(\n switchMap(el => {\n const query$ = watchSearchQuery(el, options)\n\n /* Subscribe worker to search query */\n query$\n .pipe(\n distinctUntilKeyChanged(\"value\"),\n map(({ value }): SearchQueryMessage => ({\n type: SearchMessageType.QUERY,\n data: value\n }))\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Toggle search on focus */\n query$\n .pipe(\n distinctUntilKeyChanged(\"focus\")\n )\n .subscribe(({ focus }) => {\n if (focus)\n setToggle(\"search\", focus)\n })\n\n /* Return search query */\n return query$\n })\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, combineLatest, fromEvent, merge } from \"rxjs\"\nimport {\n delay,\n distinctUntilChanged,\n map,\n startWith\n} from \"rxjs/operators\"\n\nimport { watchElementFocus } from \"browser\"\nimport { SearchTransformFn, defaultTransform } from \"integrations\"\n\nimport { SearchQuery } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n transform?: SearchTransformFn /* Transformation function */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch search query\n *\n * Note that the focus event which triggers re-reading the current query value\n * is delayed by `1ms` so the input's empty state is allowed to propagate.\n *\n * @param el - Search query element\n * @param options - Options\n *\n * @return Search query observable\n */\nexport function watchSearchQuery(\n el: HTMLInputElement, { transform }: WatchOptions = {}\n): Observable {\n const fn = transform || defaultTransform\n\n /* Intercept keyboard events */\n const value$ = merge(\n fromEvent(el, \"keyup\"),\n fromEvent(el, \"focus\").pipe(delay(1))\n )\n .pipe(\n map(() => fn(el.value)),\n startWith(fn(el.value)),\n distinctUntilChanged()\n )\n\n /* Intercept focus events */\n const focus$ = watchElementFocus(el)\n\n /* Combine into single observable */\n return combineLatest([value$, focus$])\n .pipe(\n map(([value, focus]) => ({ value, focus }))\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { OperatorFunction, pipe } from \"rxjs\"\nimport {\n mapTo,\n startWith,\n switchMap,\n switchMapTo,\n tap\n} from \"rxjs/operators\"\n\nimport { setElementFocus } from \"browser\"\n\nimport { useComponent } from \"../../../_\"\nimport { watchSearchReset } from \"../react\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search reset from source observable\n *\n * @return Operator function\n */\nexport function mountSearchReset(): OperatorFunction {\n return pipe(\n switchMap(el => watchSearchReset(el)\n .pipe(\n switchMapTo(useComponent(\"search-query\")),\n tap(setElementFocus),\n mapTo(undefined)\n )\n ),\n startWith(undefined)\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, fromEvent } from \"rxjs\"\nimport { mapTo } from \"rxjs/operators\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch search reset\n *\n * @param el - Search reset element\n *\n * @return Search reset observable\n */\nexport function watchSearchReset(\n el: HTMLElement\n): Observable {\n return fromEvent(el, \"click\")\n .pipe(\n mapTo(undefined)\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { translate } from \"utilities\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set number of search results\n *\n * @param el - Search result metadata element\n * @param value - Number of results\n */\nexport function setSearchResultMeta(\n el: HTMLElement, value: number\n): void {\n switch (value) {\n\n /* No results */\n case 0:\n el.textContent = translate(\"search.result.none\")\n break\n\n /* One result */\n case 1:\n el.textContent = translate(\"search.result.one\")\n break\n\n /* Multiple result */\n default:\n el.textContent = translate(\"search.result.other\", value.toString())\n }\n}\n\n/**\n * Reset number of search results\n *\n * @param el - Search result metadata element\n */\nexport function resetSearchResultMeta(\n el: HTMLElement\n): void {\n el.textContent = translate(\"search.result.placeholder\")\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Add an element to the search result list\n *\n * @param el - Search result list element\n * @param child - Search result element\n */\nexport function addToSearchResultList(\n el: HTMLElement, child: Element\n): void {\n el.appendChild(child)\n}\n\n/**\n * Reset search result list\n *\n * @param el - Search result list element\n */\nexport function resetSearchResultList(\n el: HTMLElement\n): void {\n el.innerHTML = \"\"\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, 
free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n MonoTypeOperatorFunction,\n Observable,\n animationFrameScheduler,\n pipe\n} from \"rxjs\"\nimport {\n finalize,\n map,\n mapTo,\n observeOn,\n scan,\n switchMap,\n withLatestFrom\n} from \"rxjs/operators\"\n\nimport { getElementOrThrow } from \"browser\"\nimport { SearchResult } from \"integrations/search\"\nimport { renderSearchResult } from \"templates\"\n\nimport { SearchQuery } from \"../../query\"\nimport {\n addToSearchResultList,\n resetSearchResultList,\n resetSearchResultMeta,\n setSearchResultMeta\n} from \"../set\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Apply options\n */\ninterface ApplyOptions {\n query$: Observable /* Search query observable */\n ready$: Observable /* Search ready observable */\n fetch$: Observable /* Result fetch observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Apply search results\n *\n * This function will perform a lazy rendering of the search results, depending\n * on the vertical offset of the search result container. 
When the scroll offset\n * reaches the bottom of the element, more results are fetched and rendered.\n *\n * @param el - Search result element\n * @param options - Options\n *\n * @return Operator function\n */\nexport function applySearchResult(\n el: HTMLElement, { query$, ready$, fetch$ }: ApplyOptions\n): MonoTypeOperatorFunction {\n const list = getElementOrThrow(\".md-search-result__list\", el)\n const meta = getElementOrThrow(\".md-search-result__meta\", el)\n return pipe(\n\n /* Apply search result metadata */\n withLatestFrom(query$, ready$),\n map(([result, query]) => {\n if (query.value) {\n setSearchResultMeta(meta, result.length)\n } else {\n resetSearchResultMeta(meta)\n }\n return result\n }),\n\n /* Apply search result list */\n switchMap(result => fetch$\n .pipe(\n\n /* Defer repaint to next animation frame */\n observeOn(animationFrameScheduler),\n scan(index => {\n const container = el.parentElement!\n while (index < result.length) {\n addToSearchResultList(list, renderSearchResult(result[index++]))\n if (container.scrollHeight - container.offsetHeight > 16)\n break\n }\n return index\n }, 0),\n\n /* Re-map to search result */\n mapTo(result),\n\n /* Reset on complete or error */\n finalize(() => {\n resetSearchResultList(list)\n })\n )\n )\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { identity } from \"ramda\"\nimport { Observable, OperatorFunction, pipe } from \"rxjs\"\nimport {\n distinctUntilChanged,\n filter,\n map,\n mapTo,\n pluck,\n startWith,\n switchMap\n} from \"rxjs/operators\"\n\nimport { WorkerHandler, watchElementOffset } from \"browser\"\nimport {\n SearchMessage,\n SearchResult,\n isSearchReadyMessage,\n isSearchResultMessage\n} from \"integrations\"\n\nimport { SearchQuery } from \"../../query\"\nimport { applySearchResult } from \"../react\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n query$: Observable /* Search query observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search result from source observable\n *\n * @param handler - Worker handler\n * @param options - Options\n *\n * @return Operator function\n */\nexport function mountSearchResult(\n { rx$ }: WorkerHandler, { query$ }: MountOptions\n): OperatorFunction {\n return pipe(\n switchMap(el => {\n const container = el.parentElement!\n\n /* Compute if search is ready */\n const ready$ = rx$\n .pipe(\n filter(isSearchReadyMessage),\n mapTo(true)\n )\n\n /* Compute whether there are more search results to fetch */\n const fetch$ = watchElementOffset(container)\n .pipe(\n map(({ y }) => {\n return y >= container.scrollHeight - container.offsetHeight - 16\n }),\n distinctUntilChanged(),\n filter(identity)\n )\n\n /* Apply search results */\n return rx$\n .pipe(\n filter(isSearchResultMessage),\n pluck(\"data\"),\n applySearchResult(el, { query$, ready$, fetch$ }),\n startWith([])\n )\n })\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, OperatorFunction, Subject, pipe } from \"rxjs\"\nimport { distinctUntilKeyChanged, switchMap, tap } from \"rxjs/operators\"\n\nimport { Viewport } from \"browser\"\n\nimport { useComponent } from \"../../_\"\nimport { Header } from \"../../header\"\nimport {\n applyHeaderShadow,\n watchMain\n} from \"../react\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Main area\n */\nexport interface Main {\n offset: number /* Main area top offset */\n height: number /* Main area visible height */\n active: boolean /* Scrolled past top offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n header$: Observable
    /* Header observable */\n viewport$: Observable /* Viewport observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount main area from source observable\n *\n * The header must be connected to the main area observable outside of the\n * operator function, as the header will persist in-between document switches\n * while the main area is replaced. However, the header observable must be\n * passed to this function, so we connect both via a long-living subject.\n *\n * @param options - Options\n *\n * @return Operator function\n */\nexport function mountMain(\n { header$, viewport$ }: MountOptions\n): OperatorFunction {\n const main$ = new Subject
    ()\n\n /* Connect to main area observable via long-living subject */\n useComponent(\"header\")\n .pipe(\n switchMap(header => main$\n .pipe(\n distinctUntilKeyChanged(\"active\"),\n applyHeaderShadow(header)\n )\n )\n )\n .subscribe()\n\n /* Return operator */\n return pipe(\n switchMap(el => watchMain(el, { header$, viewport$ })),\n tap(main => main$.next(main))\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n MonoTypeOperatorFunction,\n Observable,\n animationFrameScheduler,\n combineLatest,\n pipe\n} from \"rxjs\"\nimport {\n distinctUntilChanged,\n distinctUntilKeyChanged,\n finalize,\n map,\n observeOn,\n pluck,\n shareReplay,\n switchMap,\n tap\n} from \"rxjs/operators\"\n\nimport { Viewport, watchElementSize } from \"browser\"\n\nimport { Header } from \"../../header\"\nimport { Main } from \"../_\"\nimport {\n resetHeaderShadow,\n setHeaderShadow\n} from \"../set\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n header$: Observable
    /* Header observable */\n viewport$: Observable /* Viewport observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch main area\n *\n * This function returns an observable that computes the visual parameters of\n * the main area which depends on the viewport vertical offset and height, as\n * well as the height of the header element, if the header is fixed.\n *\n * @param el - Main area element\n * @param options - Options\n *\n * @return Main area observable\n */\nexport function watchMain(\n el: HTMLElement, { header$, viewport$ }: WatchOptions\n): Observable
    {\n\n /* Compute necessary adjustment for header */\n const adjust$ = header$\n .pipe(\n pluck(\"height\"),\n distinctUntilChanged(),\n shareReplay(1)\n )\n\n /* Compute the main area's top and bottom borders */\n const border$ = adjust$\n .pipe(\n switchMap(() => watchElementSize(el)\n .pipe(\n map(({ height }) => ({\n top: el.offsetTop,\n bottom: el.offsetTop + height\n }))\n )\n ),\n distinctUntilKeyChanged(\"bottom\"),\n shareReplay(1)\n )\n\n /* Compute the main area's offset, visible height and if we scrolled past */\n return combineLatest([adjust$, border$, viewport$])\n .pipe(\n map(([header, { top, bottom }, { offset: { y }, size: { height } }]) => {\n height = Math.max(0, height\n - Math.max(0, top - y, header)\n - Math.max(0, height + y - bottom)\n )\n return {\n offset: top - header,\n height,\n active: top - header <= y\n }\n }),\n distinctUntilChanged
    ((a, b) => {\n return a.offset === b.offset\n && a.height === b.height\n && a.active === b.active\n })\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Apply header shadow\n *\n * @param el - Header element\n *\n * @return Operator function\n */\nexport function applyHeaderShadow(\n el: HTMLElement\n): MonoTypeOperatorFunction
    {\n return pipe(\n\n /* Defer repaint to next animation frame */\n observeOn(animationFrameScheduler),\n tap(({ active }) => {\n setHeaderShadow(el, active)\n }),\n\n /* Reset on complete or error */\n finalize(() => {\n resetHeaderShadow(el)\n })\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set header shadow\n *\n * @param el - Header element\n * @param value - Whether the shadow is shown\n */\nexport function setHeaderShadow(\n el: HTMLElement, value: boolean\n): void {\n el.setAttribute(\"data-md-state\", value ? \"shadow\" : \"\")\n}\n\n/**\n * Reset header shadow\n *\n * @param el - Header element\n */\nexport function resetHeaderShadow(\n el: HTMLElement\n): void {\n el.removeAttribute(\"data-md-state\")\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, OperatorFunction, pipe } from \"rxjs\"\nimport {\n distinctUntilKeyChanged,\n map,\n switchMap\n} from \"rxjs/operators\"\n\nimport { Viewport, watchViewportAt } from \"browser\"\n\nimport { Header } from \"../../header\"\nimport { applyHero } from \"../react\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Hero\n */\nexport interface Hero {\n hidden: boolean /* Whether the hero is hidden */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n header$: Observable
    /* Header observable */\n viewport$: Observable /* Viewport observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount hero from source observable\n *\n * @param options - Options\n *\n * @return Operator function\n */\nexport function mountHero(\n { header$, viewport$ }: MountOptions\n): OperatorFunction {\n return pipe(\n switchMap(el => watchViewportAt(el, { header$, viewport$ })\n .pipe(\n map(({ offset: { y } }) => ({ hidden: y >= 20 })),\n distinctUntilKeyChanged(\"hidden\"),\n applyHero(el)\n )\n )\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n MonoTypeOperatorFunction,\n animationFrameScheduler,\n pipe\n} from \"rxjs\"\nimport { finalize, observeOn, tap } from \"rxjs/operators\"\n\nimport { Hero } from \"../_\"\nimport {\n resetHeroHidden,\n setHeroHidden\n} from \"../set\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Apply hero\n *\n * @param el - Hero element\n *\n * @return Operator function\n */\nexport function applyHero(\n el: HTMLElement\n): MonoTypeOperatorFunction {\n return pipe(\n\n /* Defer repaint to next animation frame */\n observeOn(animationFrameScheduler),\n tap(({ hidden }) => {\n setHeroHidden(el, hidden)\n }),\n\n /* Reset on complete or error */\n finalize(() => {\n resetHeroHidden(el)\n })\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set hero hidden\n *\n * @param el - Hero element\n * @param value - Whether the element is hidden\n */\nexport function setHeroHidden(\n el: HTMLElement, value: boolean\n): void {\n el.setAttribute(\"data-md-state\", value ? \"hidden\" : \"\")\n}\n\n/**\n * Reset hero hidden\n *\n * @param el - Hero element\n */\nexport function resetHeroHidden(\n el: HTMLElement\n): void {\n el.removeAttribute(\"data-md-state\")\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, OperatorFunction, combineLatest, pipe } from \"rxjs\"\nimport {\n distinctUntilChanged,\n filter,\n map,\n shareReplay,\n startWith,\n switchMap,\n withLatestFrom\n} from \"rxjs/operators\"\n\nimport {\n Viewport,\n getElement,\n watchViewportAt\n} from \"browser\"\n\nimport { useComponent } from \"../../_\"\nimport {\n applyHeaderType,\n watchHeader\n} from \"../react\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Header type\n */\nexport type HeaderType =\n | \"site\" /* Header shows site title */\n | \"page\" /* Header shows page title */\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Header\n */\nexport interface Header {\n type: HeaderType /* Header type */\n sticky: boolean /* Header stickyness */\n height: number /* Header visible height */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n document$: Observable /* Document observable */\n viewport$: Observable /* Viewport observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount 
header from source observable\n *\n * @param options - Options\n *\n * @return Operator function\n */\nexport function mountHeader(\n { document$, viewport$ }: MountOptions\n): OperatorFunction {\n return pipe(\n switchMap(el => {\n const header$ = watchHeader(el, { document$ })\n\n /* Compute whether the header should switch to page header */\n const type$ = useComponent(\"main\")\n .pipe(\n map(main => getElement(\"h1, h2, h3, h4, h5, h6\", main)!),\n filter(hx => typeof hx !== \"undefined\"),\n withLatestFrom(useComponent(\"header-title\")),\n switchMap(([hx, title]) => watchViewportAt(hx, { header$, viewport$ })\n .pipe(\n map(({ offset: { y } }) => {\n return y >= hx.offsetHeight ? \"page\" : \"site\"\n }),\n distinctUntilChanged(),\n applyHeaderType(title)\n )\n ),\n startWith(\"site\")\n )\n\n /* Combine into single observable */\n return combineLatest([header$, type$])\n .pipe(\n map(([header, type]): Header => ({ type, ...header })),\n shareReplay(1)\n )\n })\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n MonoTypeOperatorFunction,\n Observable,\n animationFrameScheduler,\n of,\n pipe\n} from \"rxjs\"\nimport {\n distinctUntilChanged,\n finalize,\n map,\n observeOn,\n shareReplay,\n switchMap,\n tap\n} from \"rxjs/operators\"\n\nimport { watchElementSize } from \"browser\"\n\nimport { Header, HeaderType } from \"../_\"\nimport {\n resetHeaderTitleActive,\n setHeaderTitleActive\n} from \"../set\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n document$: Observable /* Document observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch header\n *\n * @param el - Header element\n *\n * @return Header observable\n */\nexport function watchHeader(\n el: HTMLElement, { document$ }: WatchOptions\n): Observable> {\n return document$\n .pipe(\n map(() => {\n const styles = getComputedStyle(el)\n return [\n \"sticky\", /* Modern browsers */\n \"-webkit-sticky\" /* Safari */\n ].includes(styles.position)\n }),\n distinctUntilChanged(),\n switchMap(sticky => {\n if (sticky) {\n return watchElementSize(el)\n .pipe(\n map(({ height }) => ({\n sticky: true,\n height\n }))\n )\n } else {\n return of({\n sticky: false,\n height: 0\n })\n }\n }),\n shareReplay(1)\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Apply header title type\n *\n * @param el - Header title element\n *\n * @return Operator function\n */\nexport function applyHeaderType(\n el: HTMLElement\n): MonoTypeOperatorFunction {\n return pipe(\n\n /* Defer repaint to next animation frame */\n observeOn(animationFrameScheduler),\n tap(type => {\n setHeaderTitleActive(el, type === \"page\")\n }),\n\n /* Reset on complete or error */\n finalize(() => {\n resetHeaderTitleActive(el)\n })\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set header title active\n *\n * @param el - Header title element\n * @param value - Whether the title is shown\n */\nexport function setHeaderTitleActive(\n el: HTMLElement, value: boolean\n): void {\n el.setAttribute(\"data-md-state\", value ? \"active\" : \"\")\n}\n\n/**\n * Reset header title active\n *\n * @param el - Header title element\n */\nexport function resetHeaderTitleActive(\n el: HTMLElement\n): void {\n el.removeAttribute(\"data-md-state\")\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, OperatorFunction, of, pipe } from \"rxjs\"\nimport {\n distinctUntilKeyChanged,\n map,\n switchMap\n} from \"rxjs/operators\"\n\nimport { Viewport, watchViewportAt } from \"browser\"\n\nimport { Header } from \"../../header\"\nimport { applyTabs } from \"../react\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Tabs\n */\nexport interface Tabs {\n hidden: boolean /* Whether the tabs are hidden */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n header$: Observable
    /* Header observable */\n viewport$: Observable /* Viewport observable */\n screen$: Observable /* Media screen observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount tabs from source observable\n *\n * @param options - Options\n *\n * @return Operator function\n */\nexport function mountTabs(\n { header$, viewport$, screen$ }: MountOptions\n): OperatorFunction {\n return pipe(\n switchMap(el => screen$\n .pipe(\n switchMap(screen => {\n\n /* [screen +]: Mount tabs above screen breakpoint */\n if (screen) {\n return watchViewportAt(el, { header$, viewport$ })\n .pipe(\n map(({ offset: { y } }) => ({ hidden: y >= 10 })),\n distinctUntilKeyChanged(\"hidden\"),\n applyTabs(el)\n )\n\n /* [screen -]: Unmount tabs below screen breakpoint */\n } else {\n return of({ hidden: true })\n }\n })\n )\n )\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n MonoTypeOperatorFunction,\n animationFrameScheduler,\n pipe\n} from \"rxjs\"\nimport { finalize, observeOn, tap } from \"rxjs/operators\"\n\nimport { Tabs } from \"../_\"\nimport {\n resetTabsHidden,\n setTabsHidden\n} from \"../set\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Apply tabs\n *\n * @param el - Tabs element\n *\n * @return Operator function\n */\nexport function applyTabs(\n el: HTMLElement\n): MonoTypeOperatorFunction {\n return pipe(\n\n /* Defer repaint to next animation frame */\n observeOn(animationFrameScheduler),\n tap(({ hidden }) => {\n setTabsHidden(el, hidden)\n }),\n\n /* Reset on complete or error */\n finalize(() => {\n resetTabsHidden(el)\n })\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set tabs hidden\n *\n * @param el - Tabs element\n * @param value - Whether the element is hidden\n */\nexport function setTabsHidden(\n el: HTMLElement, value: boolean\n): void {\n el.setAttribute(\"data-md-state\", value ? 
\"hidden\" : \"\")\n}\n\n/**\n * Reset tabs hidden\n *\n * @param el - Tabs element\n */\nexport function resetTabsHidden(\n el: HTMLElement\n): void {\n el.removeAttribute(\"data-md-state\")\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, OperatorFunction, of, pipe } from \"rxjs\"\nimport { map, switchMap } from \"rxjs/operators\"\n\nimport { Viewport } from \"browser\"\n\nimport { Header } from \"../../header\"\nimport { Main } from \"../../main\"\nimport {\n Sidebar,\n applySidebar,\n watchSidebar\n} from \"../../shared\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Navigation for [screen -]\n */\ninterface NavigationBelowScreen {} // tslint:disable-line\n\n/**\n * Navigation for [screen +]\n */\ninterface NavigationAboveScreen {\n sidebar: Sidebar /* Sidebar */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Navigation\n */\nexport type Navigation =\n | NavigationBelowScreen\n | NavigationAboveScreen\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n header$: Observable
    /* Header observable */\n main$: Observable
    /* Main area observable */\n viewport$: Observable /* Viewport observable */\n screen$: Observable /* Screen media observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount navigation from source observable\n *\n * @param options - Options\n *\n * @return Operator function\n */\nexport function mountNavigation(\n { header$, main$, viewport$, screen$ }: MountOptions\n): OperatorFunction {\n return pipe(\n switchMap(el => screen$\n .pipe(\n switchMap(screen => {\n\n /* [screen +]: Mount navigation in sidebar */\n if (screen) {\n return watchSidebar(el, { main$, viewport$ })\n .pipe(\n applySidebar(el, { header$ }),\n map(sidebar => ({ sidebar }))\n )\n\n /* [screen -]: Mount navigation in drawer */\n } else {\n return of({})\n }\n })\n )\n )\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { NEVER, Observable, fromEvent, iif, merge } from \"rxjs\"\nimport { map, mapTo, shareReplay, switchMap } from \"rxjs/operators\"\n\nimport { getElements } from \"browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n document$: Observable /* Document observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Check whether the given device is an Apple device\n *\n * @return Test result\n */\nfunction isAppleDevice(): boolean {\n return /(iPad|iPhone|iPod)/.test(navigator.userAgent)\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch all elements with `data-md-scrollfix` attributes\n *\n * This is a year-old patch which ensures that overflow scrolling works at the\n * top and bottom of containers on iOS by ensuring a `1px` scroll offset upon\n * the start of a touch event.\n *\n * @see https://bit.ly/2SCtAOO - Original source\n *\n * @param options - Options\n */\nexport function patchScrollfix(\n { document$ }: PatchOptions\n): void {\n const els$ = document$\n .pipe(\n map(() => getElements(\"[data-md-scrollfix]\")),\n shareReplay(1)\n )\n\n /* Remove marker attribute, so we'll only add the fix once */\n els$.subscribe(els => {\n for (const el of els)\n el.removeAttribute(\"data-md-scrollfix\")\n })\n\n /* Patch overflow scrolling on touch start */\n iif(isAppleDevice, els$, NEVER)\n .pipe(\n switchMap(els => merge(...els.map(el => (\n fromEvent(el, \"touchstart\", { passive: true })\n .pipe(\n mapTo(el)\n )\n ))))\n )\n .subscribe(el => {\n const top = el.scrollTop\n\n /* We're at the top of the container */\n if (top === 0) {\n el.scrollTop = 1\n\n /* We're at the bottom of the container */\n } else if (top + el.offsetHeight === el.scrollHeight) {\n el.scrollTop = top - 1\n }\n })\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { NEVER, Observable } from \"rxjs\"\nimport { catchError, map, switchMap } from \"rxjs/operators\"\n\nimport { getElementOrThrow, getElements } from \"browser\"\nimport { renderSource } from \"templates\"\nimport { cache, hash } from \"utilities\"\n\nimport { fetchSourceFactsFromGitHub } from \"./github\"\nimport { fetchSourceFactsFromGitLab } from \"./gitlab\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Source facts\n */\nexport type SourceFacts = string[]\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n document$: Observable /* Document observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch source facts\n *\n * @param url - Source repository URL\n *\n * @return Source facts observable\n */\nfunction fetchSourceFacts(\n url: string\n): Observable {\n const [type] = url.match(/(git(?:hub|lab))/i) || []\n switch (type.toLowerCase()) {\n\n /* GitHub repository */\n case \"github\":\n const [, user, repo] = url.match(/^.+github\\.com\\/([^\\/]+)\\/?([^\\/]+)/i)\n return fetchSourceFactsFromGitHub(user, repo)\n\n /* GitLab repository */\n case \"gitlab\":\n const [, base, slug] = url.match(/^.+?([^\\/]*gitlab[^\\/]+)\\/(.+?)\\/?$/i)\n return fetchSourceFactsFromGitLab(base, slug)\n\n /* Everything else */\n default:\n return NEVER\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch elements containing repository information\n *\n * This function will retrieve the URL from the repository link and try to\n * query data from integrated source code platforms like GitHub or GitLab.\n *\n * @param options - Options\n */\nexport function patchSource(\n { document$ }: PatchOptions\n): void {\n document$\n .pipe(\n map(() => getElementOrThrow(\".md-source[href]\")),\n switchMap(({ href }) => (\n cache(`${hash(href)}`, () => fetchSourceFacts(href))\n )),\n catchError(() => NEVER)\n )\n .subscribe(facts => {\n for (const el of getElements(\".md-source__repository\")) {\n if (!el.hasAttribute(\"data-md-state\")) {\n el.setAttribute(\"data-md-state\", \"done\")\n el.appendChild(renderSource(facts))\n }\n }\n })\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be 
included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Repo, User } from \"github-types\"\nimport { Observable, of } from \"rxjs\"\nimport { ajax } from \"rxjs/ajax\"\nimport { filter, pluck, switchMap } from \"rxjs/operators\"\n\nimport { round } from \"utilities\"\n\nimport { SourceFacts } from \"..\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch GitHub source facts\n *\n * @param user - GitHub user\n * @param repo - GitHub repository\n *\n * @return Source facts observable\n */\nexport function fetchSourceFactsFromGitHub(\n user: string, repo?: string\n): Observable {\n return ajax({\n url: typeof repo !== \"undefined\"\n ? `https://api.github.com/repos/${user}/${repo}`\n : `https://api.github.com/users/${user}`,\n responseType: \"json\"\n })\n .pipe(\n filter(({ status }) => status === 200),\n pluck(\"response\"),\n switchMap(data => {\n\n /* GitHub repository */\n if (typeof repo !== \"undefined\") {\n const { stargazers_count, forks_count }: Repo = data\n return of([\n `${round(stargazers_count || 0)} Stars`,\n `${round(forks_count || 0)} Forks`\n ])\n\n /* GitHub user/organization */\n } else {\n const { public_repos }: User = data\n return of([\n `${round(public_repos || 0)} Repositories`\n ])\n }\n })\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { ProjectSchema } from \"gitlab\"\nimport { Observable } from \"rxjs\"\nimport { ajax } from \"rxjs/ajax\"\nimport { filter, map, pluck } from \"rxjs/operators\"\n\nimport { round } from \"utilities\"\n\nimport { SourceFacts } from \"..\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch GitLab source facts\n *\n * @param base - GitLab base\n * @param project - GitLab project\n *\n * @return Source facts observable\n */\nexport function fetchSourceFactsFromGitLab(\n base: string, project: string\n): Observable {\n return ajax({\n url: `https://${base}/api/v4/projects/${encodeURIComponent(project)}`,\n responseType: \"json\"\n })\n .pipe(\n filter(({ status }) => status === 200),\n pluck(\"response\"),\n map(({ star_count, forks_count }: ProjectSchema) => ([\n `${round(star_count)} Stars`,\n `${round(forks_count)} Forks`\n ]))\n )\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n// DISCLAIMER: this file is still WIP. 
There're some refactoring opportunities\n// which must be tackled after we gathered some feedback on v5.\n// tslint:disable\n\nimport { values } from \"ramda\"\nimport {\n merge,\n combineLatest,\n animationFrameScheduler,\n fromEvent,\n from,\n defer,\n of,\n NEVER\n} from \"rxjs\"\nimport { ajax } from \"rxjs/ajax\"\nimport {\n delay,\n switchMap,\n tap,\n filter,\n withLatestFrom,\n observeOn,\n take,\n shareReplay,\n pluck,\n catchError\n} from \"rxjs/operators\"\n\nimport {\n watchToggle,\n setToggle,\n getElements,\n watchMedia,\n watchDocument,\n watchLocation,\n watchLocationHash,\n watchViewport,\n isLocalLocation,\n setLocationHash,\n watchLocationBase\n} from \"browser\"\nimport {\n mountHeader,\n mountHero,\n mountMain,\n mountNavigation,\n mountSearch,\n mountTableOfContents,\n mountTabs,\n useComponent,\n setupComponents,\n mountSearchQuery,\n mountSearchReset,\n mountSearchResult\n} from \"components\"\nimport {\n setupClipboard,\n setupDialog,\n setupKeyboard,\n setupInstantLoading,\n setupSearchWorker,\n SearchIndex\n} from \"integrations\"\nimport {\n patchTables,\n patchDetails,\n patchScrollfix,\n patchSource,\n patchScripts\n} from \"patches\"\nimport { isConfig } from \"utilities\"\n\n/* ------------------------------------------------------------------------- */\n\n/* Denote that JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Test for iOS */\nif (navigator.userAgent.match(/(iPad|iPhone|iPod)/g))\n document.documentElement.classList.add(\"ios\")\n\n/**\n * Set scroll lock\n *\n * @param el - Scrollable element\n * @param value - Vertical offset\n */\nexport function setScrollLock(\n el: HTMLElement, value: number\n): void {\n el.setAttribute(\"data-md-state\", \"lock\")\n el.style.top = `-${value}px`\n}\n\n/**\n * Reset scroll lock\n *\n * @param el - Scrollable element\n */\nexport function resetScrollLock(\n el: HTMLElement\n): void {\n const value = -1 * parseInt(el.style.top, 10)\n el.removeAttribute(\"data-md-state\")\n el.style.top = \"\"\n if (value)\n window.scrollTo(0, value)\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Initialize Material for MkDocs\n *\n * @param config - Configuration\n */\nexport function initialize(config: unknown) {\n if (!isConfig(config))\n throw new SyntaxError(`Invalid configuration: ${JSON.stringify(config)}`)\n\n /* Set up subjects */\n const document$ = watchDocument()\n const location$ = watchLocation()\n\n /* Set up user interface observables */\n const base$ = watchLocationBase(config.base, { location$ })\n const hash$ = watchLocationHash()\n const viewport$ = watchViewport()\n const tablet$ = watchMedia(\"(min-width: 960px)\")\n const screen$ = watchMedia(\"(min-width: 1220px)\")\n\n /* ----------------------------------------------------------------------- */\n\n /* Set up component bindings */\n setupComponents([\n \"announce\", /* Announcement bar */\n \"container\", /* Container */\n \"header\", /* Header */\n \"header-title\", /* Header title */\n \"hero\", /* Hero */\n \"main\", /* Main area */\n \"navigation\", /* Navigation */\n \"search\", /* Search */\n \"search-query\", /* Search input */\n \"search-reset\", /* Search reset */\n \"search-result\", /* Search results */\n \"skip\", /* Skip link */\n \"tabs\", /* Tabs */\n \"toc\" /* Table of contents */\n ], { document$ })\n\n const 
keyboard$ = setupKeyboard()\n\n patchDetails({ document$, hash$ })\n patchScripts({ document$ })\n patchSource({ document$ })\n patchTables({ document$ })\n\n /* Force 1px scroll offset to trigger overflow scrolling */\n patchScrollfix({ document$ })\n\n /* Set up clipboard and dialog */\n const dialog$ = setupDialog()\n const clipboard$ = setupClipboard({ document$, dialog$ })\n\n /* ----------------------------------------------------------------------- */\n\n /* Create header observable */\n const header$ = useComponent(\"header\")\n .pipe(\n mountHeader({ document$, viewport$ }),\n shareReplay(1)\n )\n\n const main$ = useComponent(\"main\")\n .pipe(\n mountMain({ header$, viewport$ }),\n shareReplay(1)\n )\n\n /* ----------------------------------------------------------------------- */\n\n const navigation$ = useComponent(\"navigation\")\n .pipe(\n mountNavigation({ header$, main$, viewport$, screen$ }),\n shareReplay(1) // shareReplay because there might be late subscribers\n )\n\n const toc$ = useComponent(\"toc\")\n .pipe(\n mountTableOfContents({ header$, main$, viewport$, tablet$ }),\n shareReplay(1)\n )\n\n const tabs$ = useComponent(\"tabs\")\n .pipe(\n mountTabs({ header$, viewport$, screen$ }),\n shareReplay(1)\n )\n\n const hero$ = useComponent(\"hero\")\n .pipe(\n mountHero({ header$, viewport$ }),\n shareReplay(1)\n )\n\n /* ----------------------------------------------------------------------- */\n\n /* Search worker */\n const worker$ = defer(() => {\n const index = config.search && config.search.index\n ? config.search.index\n : undefined\n\n /* Fetch index if it wasn't passed explicitly */\n const index$ = typeof index !== \"undefined\"\n ? from(index)\n : base$\n .pipe(\n switchMap(base => ajax({\n url: `${base}/search/search_index.json`,\n responseType: \"json\",\n withCredentials: true\n })\n .pipe(\n pluck(\"response\")\n )\n )\n )\n\n return of(setupSearchWorker(config.search.worker, {\n base$, index$\n }))\n })\n\n /* ----------------------------------------------------------------------- */\n\n /* Mount search query */\n const search$ = worker$\n .pipe(\n switchMap(worker => {\n\n const query$ = useComponent(\"search-query\")\n .pipe(\n mountSearchQuery(worker, { transform: config.search.transform }),\n shareReplay(1)\n )\n\n /* Mount search reset */\n const reset$ = useComponent(\"search-reset\")\n .pipe(\n mountSearchReset(),\n shareReplay(1)\n )\n\n /* Mount search result */\n const result$ = useComponent(\"search-result\")\n .pipe(\n mountSearchResult(worker, { query$ }),\n shareReplay(1)\n )\n\n return useComponent(\"search\")\n .pipe(\n mountSearch(worker, { query$, reset$, result$ }),\n shareReplay(1)\n )\n }),\n catchError(() => {\n useComponent(\"search\")\n .subscribe(el => el.hidden = true) // TODO: Hack\n return NEVER\n })\n )\n\n /* ----------------------------------------------------------------------- */\n\n // // put into search...\n hash$\n .pipe(\n tap(() => setToggle(\"search\", false)),\n delay(125), // ensure that it runs after the body scroll reset...\n )\n .subscribe(hash => setLocationHash(`#${hash}`))\n\n // TODO: scroll restoration must be centralized\n combineLatest([\n watchToggle(\"search\"),\n tablet$,\n ])\n .pipe(\n withLatestFrom(viewport$),\n switchMap(([[toggle, tablet], { offset: { y }}]) => {\n const active = toggle && !tablet\n return document$\n .pipe(\n delay(active ? 400 : 100),\n observeOn(animationFrameScheduler),\n tap(({ body }) => active\n ? 
setScrollLock(body, y)\n : resetScrollLock(body)\n )\n )\n })\n )\n .subscribe()\n\n /* ----------------------------------------------------------------------- */\n\n /* Always close drawer on click */\n fromEvent(document.body, \"click\")\n .pipe(\n filter(ev => !(ev.metaKey || ev.ctrlKey)),\n filter(ev => {\n if (ev.target instanceof HTMLElement) {\n const el = ev.target.closest(\"a\") // TODO: abstract as link click?\n if (el && isLocalLocation(el)) {\n return true\n }\n }\n return false\n })\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n })\n\n /* Enable instant loading, if not on file:// protocol */\n if (config.features.includes(\"instant\") && location.protocol !== \"file:\")\n setupInstantLoading({ document$, location$, viewport$ })\n\n /* ----------------------------------------------------------------------- */\n\n /* Unhide permalinks on first tab */\n keyboard$\n .pipe(\n filter(key => key.mode === \"global\" && key.type === \"Tab\"),\n take(1)\n )\n .subscribe(() => {\n for (const link of getElements(\".headerlink\"))\n link.style.visibility = \"visible\"\n })\n\n /* ----------------------------------------------------------------------- */\n\n const state = {\n\n /* Browser observables */\n document$,\n location$,\n viewport$,\n\n /* Component observables */\n header$,\n hero$,\n main$,\n navigation$,\n search$,\n tabs$,\n toc$,\n\n /* Integration observables */\n clipboard$,\n keyboard$,\n dialog$\n }\n\n /* Subscribe to all observables */\n merge(...values(state))\n .subscribe()\n return state\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { identity } from \"ramda\"\nimport { Observable, fromEvent, merge } from \"rxjs\"\nimport {\n filter,\n map,\n switchMapTo,\n tap\n} from \"rxjs/operators\"\n\nimport {\n getElement,\n getElements,\n watchMedia\n} from \"browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n document$: Observable /* Document observable */\n hash$: Observable /* Location hash observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch all `details` elements\n *\n * This function will ensure that all `details` tags are opened prior to\n * printing, so the whole content of the page is included, and on anchor jumps.\n *\n * @param options - Options\n */\nexport function patchDetails(\n { document$, hash$ }: PatchOptions\n): void {\n const els$ = document$\n .pipe(\n map(() => getElements(\"details\"))\n )\n\n /* Open all details before printing */\n merge(\n watchMedia(\"print\").pipe(filter(identity)), /* Webkit */\n fromEvent(window, \"beforeprint\") /* IE, FF */\n )\n .pipe(\n switchMapTo(els$)\n )\n .subscribe(els => {\n for (const el of els)\n el.setAttribute(\"open\", \"\")\n })\n\n /* Open parent details and fix anchor jump */\n hash$\n .pipe(\n map(id => getElement(`[id=\"${id}\"]`)!),\n filter(el => typeof el !== \"undefined\"),\n tap(el => {\n const details = el.closest(\"details\")\n if (details && !details.open)\n details.setAttribute(\"open\", \"\")\n })\n )\n .subscribe(el => el.scrollIntoView())\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable } from \"rxjs\"\nimport { map, skip, withLatestFrom } from \"rxjs/operators\"\n\nimport {\n createElement,\n getElements,\n replaceElement\n} from \"browser\"\nimport { useComponent } from \"components\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n document$: Observable /* Document observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch all `script` elements\n *\n * This function must be run after a document switch, which means the first\n * emission must be ignored.\n *\n * @param options - Options\n */\nexport function patchScripts(\n { document$ }: PatchOptions\n): void {\n const els$ = document$\n .pipe(\n skip(1),\n withLatestFrom(useComponent(\"container\")),\n map(([, el]) => getElements(\"script\", el))\n )\n\n /* Evaluate all scripts via replacement */\n els$.subscribe(els => {\n for (const el of els) {\n if (el.src || /(^|\\/javascript)$/i.test(el.type)) {\n const script = createElement(\"script\")\n const key = el.src ? \"src\" : \"textContent\"\n script[key] = el[key]!\n replaceElement(el, script)\n }\n }\n })\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable } from \"rxjs\"\nimport { map } from \"rxjs/operators\"\n\nimport {\n createElement,\n getElements,\n replaceElement\n} from \"browser\"\nimport { renderTable } from \"templates\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n document$: Observable /* Document observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch all `table` elements\n *\n * This function will re-render all tables by wrapping them to improve overflow\n * scrolling on smaller screen sizes.\n *\n * @param options - Options\n */\nexport function patchTables(\n { document$ }: MountOptions\n): void {\n const sentinel = createElement(\"table\")\n document$\n .pipe(\n map(() => getElements(\"table:not([class])\"))\n )\n .subscribe(els => {\n for (const el of els) {\n replaceElement(el, sentinel)\n replaceElement(sentinel, renderTable(el))\n }\n })\n}\n"],"sourceRoot":""} \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.ar.min.js b/assets/javascripts/lunr/min/lunr.ar.min.js new file mode 100644 index 00000000..248ddc5d --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ar.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.ar=function(){this.pipeline.reset(),this.pipeline.add(e.ar.trimmer,e.ar.stopWordFilter,e.ar.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ar.stemmer))},e.ar.wordCharacters="ء-ٛٱـ",e.ar.trimmer=e.trimmerSupport.generateTrimmer(e.ar.wordCharacters),e.Pipeline.registerFunction(e.ar.trimmer,"trimmer-ar"),e.ar.stemmer=function(){var e=this;return e.result=!1,e.preRemoved=!1,e.sufRemoved=!1,e.pre={pre1:"ف ك ب و س ل ن ا ي ت",pre2:"ال لل",pre3:"بال وال فال تال كال ولل",pre4:"فبال كبال وبال وكال"},e.suf={suf1:"ه ك ت ن ا ي",suf2:"نك نه ها وك يا اه ون ين تن تم نا وا ان كم كن ني نن ما هم هن تك ته ات يه",suf3:"تين كهم نيه نهم ونه وها يهم ونا ونك وني وهم تكم تنا تها تني تهم كما كها ناه نكم هنا تان يها",suf4:"كموه ناها ونني ونهم تكما تموه تكاه كماه ناكم ناهم نيها وننا"},e.patterns=JSON.parse('{"pt43":[{"pt":[{"c":"ا","l":1}]},{"pt":[{"c":"ا,ت,ن,ي","l":0}],"mPt":[{"c":"ف","l":0,"m":1},{"c":"ع","l":1,"m":2},{"c":"ل","l":2,"m":3}]},{"pt":[{"c":"و","l":2}],"mPt":[{"c":"ف","l":0,"m":0},{"c":"ع","l":1,"m":1},{"c":"ل","l":2,"m":3}]},{"pt":[{"c":"ا","l":2}]},{"pt":[{"c":"ي","l":2}],"mPt":[{"c":"ف","l":0,"m":0},{"c":"ع","l":1,"m":1},{"c":"ا","l":2},{"c":"ل","l":3,"m":3}]},{"pt":[{"c":"م","l":0}]}],"pt53":[{"pt":[{"c":"ت","l":0},{"c":"ا","l":2}]},{"pt":[{"c":"ا,ن,ت,ي","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":0},{"c":"ا","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":3},{"c":"ل","l":3,"m":4},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":0},{"c":"ا","l":3}],"mPt":[{"c":"ف","l":0,"m":1},{"c":"ع","l":1,"m":2},{"c":"ل","l":2,"m":4}]},{"pt":[{"c":"ا","l":3},{"c":"ن","l":4}]},{"pt":[{"c":"ت","l":0},{"c":"ي","l":3}]},{"pt":[{"c":"م","l":0},{"c":"و","l":3}]},{"pt":[{"c":"ا","l":1},{"c":"و","l":3}]},{"pt":[{"c":"و","l":1},{"c":"ا","l":2}]},{"pt":[{"c":"م","l":0},{"c":"ا","l":3}]},{"pt":[{"c":"م","l":0},{"c":"ي","l":3}]},{"pt":[{"c":"ا","l":2},{"c":"ن","l":3}]},{"pt":[{"c":"م","l":0},{"c":"ن","l":1}],"mPt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ف","l":2,"m":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"م","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"م","l":0},{"c":"ا","l":2}]},{"pt":[{"c":"م","l":1},{"c":"ا","l":3}]},{"pt":[{"c":"ي,ت,ا,ن","l":0},{"c":"ت","l":1}],"mPt":[{"c":"ف","l":0,"m":2},{"c":"ع","l":1,"m":3},{"c":"ا","l":2},{"c":"ل","l":3,"m":4}]},{"pt":[{"c":"ت,ي,ا,ن","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":2},{"c":"ي","l":3}]},{"pt":[{"c":"ا,ي,ت,ن","l":0},{"c":"ن","l":1}],"mPt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ف","l":2,"m":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":3},{"c":"ء","l":4}]}],"pt63":[{"pt":[{"c":"ا","l":0},{"c":"ت","l":2},{"c":"ا","l":4}]},{"pt":[{"c":"ا,ت,ن,ي","l":0},{"c":"س","l":1},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ف","l":3,"m":3},{"c":"ع","l":4,"m":4},{"c":"ا","l":5},{"c":"ل","l":6,"m":5}]},{"pt":[{"c":"ا,ن,ت,ي","l":0},{"c":"و","l":3}]},{"pt":[{"c":"م","l":0},{"c":"س","l":1},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ف","l":3,"m":3},{"c":"ع","l
":4,"m":4},{"c":"ا","l":5},{"c":"ل","l":6,"m":5}]},{"pt":[{"c":"ي","l":1},{"c":"ي","l":3},{"c":"ا","l":4},{"c":"ء","l":5}]},{"pt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ا","l":4}]}],"pt54":[{"pt":[{"c":"ت","l":0}]},{"pt":[{"c":"ا,ي,ت,ن","l":0}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":2},{"c":"ل","l":3,"m":3},{"c":"ر","l":4,"m":4},{"c":"ا","l":5},{"c":"ر","l":6,"m":4}]},{"pt":[{"c":"م","l":0}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":2},{"c":"ل","l":3,"m":3},{"c":"ر","l":4,"m":4},{"c":"ا","l":5},{"c":"ر","l":6,"m":4}]},{"pt":[{"c":"ا","l":2}]},{"pt":[{"c":"ا","l":0},{"c":"ن","l":2}]}],"pt64":[{"pt":[{"c":"ا","l":0},{"c":"ا","l":4}]},{"pt":[{"c":"م","l":0},{"c":"ت","l":1}]}],"pt73":[{"pt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ا","l":5}]}],"pt75":[{"pt":[{"c":"ا","l":0},{"c":"ا","l":5}]}]}'),e.execArray=["cleanWord","removeDiacritics","cleanAlef","removeStopWords","normalizeHamzaAndAlef","removeStartWaw","removePre432","removeEndTaa","wordCheck"],e.stem=function(){var r=0;for(e.result=!1,e.preRemoved=!1,e.sufRemoved=!1;r=0)return!0},e.normalizeHamzaAndAlef=function(){return e.word=e.word.replace("ؤ","ء"),e.word=e.word.replace("ئ","ء"),e.word=e.word.replace(/([\u0627])\1+/gi,"ا"),!1},e.removeEndTaa=function(){return!(e.word.length>2)||(e.word=e.word.replace(/[\u0627]$/,""),e.word=e.word.replace("ة",""),!1)},e.removeStartWaw=function(){return e.word.length>3&&"و"==e.word[0]&&"و"==e.word[1]&&(e.word=e.word.slice(1)),!1},e.removePre432=function(){var r=e.word;if(e.word.length>=7){var t=new RegExp("^("+e.pre.pre4.split(" ").join("|")+")");e.word=e.word.replace(t,"")}if(e.word==r&&e.word.length>=6){var c=new RegExp("^("+e.pre.pre3.split(" ").join("|")+")");e.word=e.word.replace(c,"")}if(e.word==r&&e.word.length>=5){var l=new RegExp("^("+e.pre.pre2.split(" ").join("|")+")");e.word=e.word.replace(l,"")}return r!=e.word&&(e.preRemoved=!0),!1},e.patternCheck=function(r){for(var t=0;t3){var t=new RegExp("^("+e.pre.pre1.split(" ").join("|")+")");e.word=e.word.replace(t,"")}return r!=e.word&&(e.preRemoved=!0),!1},e.removeSuf1=function(){var r=e.word;if(0==e.sufRemoved&&e.word.length>3){var t=new RegExp("("+e.suf.suf1.split(" ").join("|")+")$");e.word=e.word.replace(t,"")}return r!=e.word&&(e.sufRemoved=!0),!1},e.removeSuf432=function(){var r=e.word;if(e.word.length>=6){var t=new RegExp("("+e.suf.suf4.split(" ").join("|")+")$");e.word=e.word.replace(t,"")}if(e.word==r&&e.word.length>=5){var c=new RegExp("("+e.suf.suf3.split(" ").join("|")+")$");e.word=e.word.replace(c,"")}if(e.word==r&&e.word.length>=4){var l=new RegExp("("+e.suf.suf2.split(" ").join("|")+")$");e.word=e.word.replace(l,"")}return r!=e.word&&(e.sufRemoved=!0),!1},e.wordCheck=function(){for(var r=(e.word,[e.removeSuf432,e.removeSuf1,e.removePre1]),t=0,c=!1;e.word.length>=7&&!e.result&&t=f.limit)return;f.cursor++}for(;!f.out_grouping(w,97,248);){if(f.cursor>=f.limit)return;f.cursor++}d=f.cursor,d=d&&(r=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,e=f.find_among_b(c,32),f.limit_backward=r,e))switch(f.bra=f.cursor,e){case 1:f.slice_del();break;case 2:f.in_grouping_b(p,97,229)&&f.slice_del()}}function t(){var e,r=f.limit-f.cursor;f.cursor>=d&&(e=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,f.find_among_b(l,4)?(f.bra=f.cursor,f.limit_backward=e,f.cursor=f.limit-r,f.cursor>f.limit_backward&&(f.cursor--,f.bra=f.cursor,f.slice_del())):f.limit_backward=e)}function s(){var 
e,r,i,n=f.limit-f.cursor;if(f.ket=f.cursor,f.eq_s_b(2,"st")&&(f.bra=f.cursor,f.eq_s_b(2,"ig")&&f.slice_del()),f.cursor=f.limit-n,f.cursor>=d&&(r=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,e=f.find_among_b(m,5),f.limit_backward=r,e))switch(f.bra=f.cursor,e){case 1:f.slice_del(),i=f.limit-f.cursor,t(),f.cursor=f.limit-i;break;case 2:f.slice_from("løs")}}function o(){var e;f.cursor>=d&&(e=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,f.out_grouping_b(w,97,248)?(f.bra=f.cursor,u=f.slice_to(u),f.limit_backward=e,f.eq_v_b(u)&&f.slice_del()):f.limit_backward=e)}var a,d,u,c=[new r("hed",-1,1),new r("ethed",0,1),new r("ered",-1,1),new r("e",-1,1),new r("erede",3,1),new r("ende",3,1),new r("erende",5,1),new r("ene",3,1),new r("erne",3,1),new r("ere",3,1),new r("en",-1,1),new r("heden",10,1),new r("eren",10,1),new r("er",-1,1),new r("heder",13,1),new r("erer",13,1),new r("s",-1,2),new r("heds",16,1),new r("es",16,1),new r("endes",18,1),new r("erendes",19,1),new r("enes",18,1),new r("ernes",18,1),new r("eres",18,1),new r("ens",16,1),new r("hedens",24,1),new r("erens",24,1),new r("ers",16,1),new r("ets",16,1),new r("erets",28,1),new r("et",-1,1),new r("eret",30,1)],l=[new r("gd",-1,-1),new r("dt",-1,-1),new r("gt",-1,-1),new r("kt",-1,-1)],m=[new r("ig",-1,1),new r("lig",0,1),new r("elig",1,1),new r("els",-1,1),new r("løst",-1,2)],w=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,48,0,128],p=[239,254,42,3,0,0,0,0,0,0,0,0,0,0,0,0,16],f=new i;this.setCurrent=function(e){f.setCurrent(e)},this.getCurrent=function(){return f.getCurrent()},this.stem=function(){var r=f.cursor;return e(),f.limit_backward=r,f.cursor=f.limit,n(),f.cursor=f.limit,t(),f.cursor=f.limit,s(),f.cursor=f.limit,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.da.stemmer,"stemmer-da"),e.da.stopWordFilter=e.generateStopWordFilter("ad af alle alt anden at blev blive bliver da de dem den denne der deres det dette dig din disse dog du efter eller en end er et for fra ham han hans har havde have hende hendes her hos hun hvad hvis hvor i ikke ind jeg jer jo kunne man mange med meget men mig min mine mit mod ned noget nogle nu når og også om op os over på selv sig sin sine sit skal skulle som sådan thi til ud under var vi vil ville vor være været".split(" ")),e.Pipeline.registerFunction(e.da.stopWordFilter,"stopWordFilter-da")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.de.min.js b/assets/javascripts/lunr/min/lunr.de.min.js new file mode 100644 index 00000000..f3b5c108 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.de.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `German` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.de=function(){this.pipeline.reset(),this.pipeline.add(e.de.trimmer,e.de.stopWordFilter,e.de.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.de.stemmer))},e.de.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.de.trimmer=e.trimmerSupport.generateTrimmer(e.de.wordCharacters),e.Pipeline.registerFunction(e.de.trimmer,"trimmer-de"),e.de.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,n){return!(!v.eq_s(1,e)||(v.ket=v.cursor,!v.in_grouping(p,97,252)))&&(v.slice_from(r),v.cursor=n,!0)}function i(){for(var r,n,i,s,t=v.cursor;;)if(r=v.cursor,v.bra=r,v.eq_s(1,"ß"))v.ket=v.cursor,v.slice_from("ss");else{if(r>=v.limit)break;v.cursor=r+1}for(v.cursor=t;;)for(n=v.cursor;;){if(i=v.cursor,v.in_grouping(p,97,252)){if(s=v.cursor,v.bra=s,e("u","U",i))break;if(v.cursor=s,e("y","Y",i))break}if(i>=v.limit)return void(v.cursor=n);v.cursor=i+1}}function s(){for(;!v.in_grouping(p,97,252);){if(v.cursor>=v.limit)return!0;v.cursor++}for(;!v.out_grouping(p,97,252);){if(v.cursor>=v.limit)return!0;v.cursor++}return!1}function t(){m=v.limit,l=m;var e=v.cursor+3;0<=e&&e<=v.limit&&(d=e,s()||(m=v.cursor,m=v.limit)return;v.cursor++}}}function c(){return m<=v.cursor}function u(){return l<=v.cursor}function a(){var e,r,n,i,s=v.limit-v.cursor;if(v.ket=v.cursor,(e=v.find_among_b(w,7))&&(v.bra=v.cursor,c()))switch(e){case 1:v.slice_del();break;case 2:v.slice_del(),v.ket=v.cursor,v.eq_s_b(1,"s")&&(v.bra=v.cursor,v.eq_s_b(3,"nis")&&v.slice_del());break;case 3:v.in_grouping_b(g,98,116)&&v.slice_del()}if(v.cursor=v.limit-s,v.ket=v.cursor,(e=v.find_among_b(f,4))&&(v.bra=v.cursor,c()))switch(e){case 1:v.slice_del();break;case 2:if(v.in_grouping_b(k,98,116)){var t=v.cursor-3;v.limit_backward<=t&&t<=v.limit&&(v.cursor=t,v.slice_del())}}if(v.cursor=v.limit-s,v.ket=v.cursor,(e=v.find_among_b(_,8))&&(v.bra=v.cursor,u()))switch(e){case 1:v.slice_del(),v.ket=v.cursor,v.eq_s_b(2,"ig")&&(v.bra=v.cursor,r=v.limit-v.cursor,v.eq_s_b(1,"e")||(v.cursor=v.limit-r,u()&&v.slice_del()));break;case 2:n=v.limit-v.cursor,v.eq_s_b(1,"e")||(v.cursor=v.limit-n,v.slice_del());break;case 3:if(v.slice_del(),v.ket=v.cursor,i=v.limit-v.cursor,!v.eq_s_b(2,"er")&&(v.cursor=v.limit-i,!v.eq_s_b(2,"en")))break;v.bra=v.cursor,c()&&v.slice_del();break;case 4:v.slice_del(),v.ket=v.cursor,e=v.find_among_b(b,2),e&&(v.bra=v.cursor,u()&&1==e&&v.slice_del())}}var d,l,m,h=[new r("",-1,6),new r("U",0,2),new r("Y",0,1),new r("ä",0,3),new r("ö",0,4),new r("ü",0,5)],w=[new r("e",-1,2),new r("em",-1,1),new r("en",-1,2),new r("ern",-1,1),new r("er",-1,1),new r("s",-1,3),new r("es",5,2)],f=[new r("en",-1,1),new r("er",-1,1),new r("st",-1,2),new r("est",2,1)],b=[new r("ig",-1,1),new r("lich",-1,1)],_=[new r("end",-1,1),new r("ig",-1,2),new r("ung",-1,1),new r("lich",-1,3),new r("isch",-1,2),new r("ik",-1,2),new r("heit",-1,3),new r("keit",-1,4)],p=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32,8],g=[117,30,5],k=[117,30,4],v=new n;this.setCurrent=function(e){v.setCurrent(e)},this.getCurrent=function(){return v.getCurrent()},this.stem=function(){var e=v.cursor;return i(),v.cursor=e,t(),v.limit_backward=e,v.cursor=v.limit,a(),v.cursor=v.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return 
i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.de.stemmer,"stemmer-de"),e.de.stopWordFilter=e.generateStopWordFilter("aber alle allem allen aller alles als also am an ander andere anderem anderen anderer anderes anderm andern anderr anders auch auf aus bei bin bis bist da damit dann das dasselbe dazu daß dein deine deinem deinen deiner deines dem demselben den denn denselben der derer derselbe derselben des desselben dessen dich die dies diese dieselbe dieselben diesem diesen dieser dieses dir doch dort du durch ein eine einem einen einer eines einig einige einigem einigen einiger einiges einmal er es etwas euch euer eure eurem euren eurer eures für gegen gewesen hab habe haben hat hatte hatten hier hin hinter ich ihm ihn ihnen ihr ihre ihrem ihren ihrer ihres im in indem ins ist jede jedem jeden jeder jedes jene jenem jenen jener jenes jetzt kann kein keine keinem keinen keiner keines können könnte machen man manche manchem manchen mancher manches mein meine meinem meinen meiner meines mich mir mit muss musste nach nicht nichts noch nun nur ob oder ohne sehr sein seine seinem seinen seiner seines selbst sich sie sind so solche solchem solchen solcher solches soll sollte sondern sonst um und uns unse unsem unsen unser unses unter viel vom von vor war waren warst was weg weil weiter welche welchem welchen welcher welches wenn werde werden wie wieder will wir wird wirst wo wollen wollte während würde würden zu zum zur zwar zwischen über".split(" ")),e.Pipeline.registerFunction(e.de.stopWordFilter,"stopWordFilter-de")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.du.min.js b/assets/javascripts/lunr/min/lunr.du.min.js new file mode 100644 index 00000000..49a0f3f0 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.du.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Dutch` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");console.warn('[Lunr Languages] Please use the "nl" instead of the "du". 
The "nl" code is the standard code for Dutch language, and "du" will be removed in the next major versions.'),e.du=function(){this.pipeline.reset(),this.pipeline.add(e.du.trimmer,e.du.stopWordFilter,e.du.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.du.stemmer))},e.du.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.du.trimmer=e.trimmerSupport.generateTrimmer(e.du.wordCharacters),e.Pipeline.registerFunction(e.du.trimmer,"trimmer-du"),e.du.stemmer=function(){var r=e.stemmerSupport.Among,i=e.stemmerSupport.SnowballProgram,n=new function(){function e(){for(var e,r,i,o=C.cursor;;){if(C.bra=C.cursor,e=C.find_among(b,11))switch(C.ket=C.cursor,e){case 1:C.slice_from("a");continue;case 2:C.slice_from("e");continue;case 3:C.slice_from("i");continue;case 4:C.slice_from("o");continue;case 5:C.slice_from("u");continue;case 6:if(C.cursor>=C.limit)break;C.cursor++;continue}break}for(C.cursor=o,C.bra=o,C.eq_s(1,"y")?(C.ket=C.cursor,C.slice_from("Y")):C.cursor=o;;)if(r=C.cursor,C.in_grouping(q,97,232)){if(i=C.cursor,C.bra=i,C.eq_s(1,"i"))C.ket=C.cursor,C.in_grouping(q,97,232)&&(C.slice_from("I"),C.cursor=r);else if(C.cursor=i,C.eq_s(1,"y"))C.ket=C.cursor,C.slice_from("Y"),C.cursor=r;else if(n(r))break}else if(n(r))break}function n(e){return C.cursor=e,e>=C.limit||(C.cursor++,!1)}function o(){_=C.limit,f=_,t()||(_=C.cursor,_<3&&(_=3),t()||(f=C.cursor))}function t(){for(;!C.in_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}for(;!C.out_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}return!1}function s(){for(var e;;)if(C.bra=C.cursor,e=C.find_among(p,3))switch(C.ket=C.cursor,e){case 1:C.slice_from("y");break;case 2:C.slice_from("i");break;case 3:if(C.cursor>=C.limit)return;C.cursor++}}function u(){return _<=C.cursor}function c(){return f<=C.cursor}function a(){var e=C.limit-C.cursor;C.find_among_b(g,3)&&(C.cursor=C.limit-e,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del()))}function l(){var e;w=!1,C.ket=C.cursor,C.eq_s_b(1,"e")&&(C.bra=C.cursor,u()&&(e=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-e,C.slice_del(),w=!0,a())))}function m(){var e;u()&&(e=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-e,C.eq_s_b(3,"gem")||(C.cursor=C.limit-e,C.slice_del(),a())))}function d(){var e,r,i,n,o,t,s=C.limit-C.cursor;if(C.ket=C.cursor,e=C.find_among_b(h,5))switch(C.bra=C.cursor,e){case 1:u()&&C.slice_from("heid");break;case 2:m();break;case 3:u()&&C.out_grouping_b(z,97,232)&&C.slice_del()}if(C.cursor=C.limit-s,l(),C.cursor=C.limit-s,C.ket=C.cursor,C.eq_s_b(4,"heid")&&(C.bra=C.cursor,c()&&(r=C.limit-C.cursor,C.eq_s_b(1,"c")||(C.cursor=C.limit-r,C.slice_del(),C.ket=C.cursor,C.eq_s_b(2,"en")&&(C.bra=C.cursor,m())))),C.cursor=C.limit-s,C.ket=C.cursor,e=C.find_among_b(k,6))switch(C.bra=C.cursor,e){case 1:if(c()){if(C.slice_del(),i=C.limit-C.cursor,C.ket=C.cursor,C.eq_s_b(2,"ig")&&(C.bra=C.cursor,c()&&(n=C.limit-C.cursor,!C.eq_s_b(1,"e")))){C.cursor=C.limit-n,C.slice_del();break}C.cursor=C.limit-i,a()}break;case 2:c()&&(o=C.limit-C.cursor,C.eq_s_b(1,"e")||(C.cursor=C.limit-o,C.slice_del()));break;case 3:c()&&(C.slice_del(),l());break;case 4:c()&&C.slice_del();break;case 5:c()&&w&&C.slice_del()}C.cursor=C.limit-s,C.out_grouping_b(j,73,232)&&(t=C.limit-C.cursor,C.find_among_b(v,4)&&C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-t,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del())))}var f,_,w,b=[new r("",-1,6),new 
r("á",0,1),new r("ä",0,1),new r("é",0,2),new r("ë",0,2),new r("í",0,3),new r("ï",0,3),new r("ó",0,4),new r("ö",0,4),new r("ú",0,5),new r("ü",0,5)],p=[new r("",-1,3),new r("I",0,2),new r("Y",0,1)],g=[new r("dd",-1,-1),new r("kk",-1,-1),new r("tt",-1,-1)],h=[new r("ene",-1,2),new r("se",-1,3),new r("en",-1,2),new r("heden",2,1),new r("s",-1,3)],k=[new r("end",-1,1),new r("ig",-1,2),new r("ing",-1,1),new r("lijk",-1,3),new r("baar",-1,4),new r("bar",-1,5)],v=[new r("aa",-1,-1),new r("ee",-1,-1),new r("oo",-1,-1),new r("uu",-1,-1)],q=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],j=[1,0,0,17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],z=[17,67,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],C=new i;this.setCurrent=function(e){C.setCurrent(e)},this.getCurrent=function(){return C.getCurrent()},this.stem=function(){var r=C.cursor;return e(),C.cursor=r,o(),C.limit_backward=r,C.cursor=C.limit,d(),C.cursor=C.limit_backward,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.du.stemmer,"stemmer-du"),e.du.stopWordFilter=e.generateStopWordFilter(" aan al alles als altijd andere ben bij daar dan dat de der deze die dit doch doen door dus een eens en er ge geen geweest haar had heb hebben heeft hem het hier hij hoe hun iemand iets ik in is ja je kan kon kunnen maar me meer men met mij mijn moet na naar niet niets nog nu of om omdat onder ons ook op over reeds te tegen toch toen tot u uit uw van veel voor want waren was wat werd wezen wie wil worden wordt zal ze zelf zich zij zijn zo zonder zou".split(" ")),e.Pipeline.registerFunction(e.du.stopWordFilter,"stopWordFilter-du")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.es.min.js b/assets/javascripts/lunr/min/lunr.es.min.js new file mode 100644 index 00000000..2989d342 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.es.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Spanish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,s){"function"==typeof define&&define.amd?define(s):"object"==typeof exports?module.exports=s():s()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.es=function(){this.pipeline.reset(),this.pipeline.add(e.es.trimmer,e.es.stopWordFilter,e.es.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.es.stemmer))},e.es.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.es.trimmer=e.trimmerSupport.generateTrimmer(e.es.wordCharacters),e.Pipeline.registerFunction(e.es.trimmer,"trimmer-es"),e.es.stemmer=function(){var s=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,n=new function(){function e(){if(A.out_grouping(x,97,252)){for(;!A.in_grouping(x,97,252);){if(A.cursor>=A.limit)return!0;A.cursor++}return!1}return!0}function n(){if(A.in_grouping(x,97,252)){var s=A.cursor;if(e()){if(A.cursor=s,!A.in_grouping(x,97,252))return!0;for(;!A.out_grouping(x,97,252);){if(A.cursor>=A.limit)return!0;A.cursor++}}return!1}return!0}function i(){var s,r=A.cursor;if(n()){if(A.cursor=r,!A.out_grouping(x,97,252))return;if(s=A.cursor,e()){if(A.cursor=s,!A.in_grouping(x,97,252)||A.cursor>=A.limit)return;A.cursor++}}g=A.cursor}function a(){for(;!A.in_grouping(x,97,252);){if(A.cursor>=A.limit)return!1;A.cursor++}for(;!A.out_grouping(x,97,252);){if(A.cursor>=A.limit)return!1;A.cursor++}return!0}function t(){var e=A.cursor;g=A.limit,p=g,v=g,i(),A.cursor=e,a()&&(p=A.cursor,a()&&(v=A.cursor))}function o(){for(var e;;){if(A.bra=A.cursor,e=A.find_among(k,6))switch(A.ket=A.cursor,e){case 1:A.slice_from("a");continue;case 2:A.slice_from("e");continue;case 3:A.slice_from("i");continue;case 4:A.slice_from("o");continue;case 5:A.slice_from("u");continue;case 6:if(A.cursor>=A.limit)break;A.cursor++;continue}break}}function u(){return g<=A.cursor}function w(){return p<=A.cursor}function c(){return v<=A.cursor}function m(){var e;if(A.ket=A.cursor,A.find_among_b(y,13)&&(A.bra=A.cursor,(e=A.find_among_b(q,11))&&u()))switch(e){case 1:A.bra=A.cursor,A.slice_from("iendo");break;case 2:A.bra=A.cursor,A.slice_from("ando");break;case 3:A.bra=A.cursor,A.slice_from("ar");break;case 4:A.bra=A.cursor,A.slice_from("er");break;case 5:A.bra=A.cursor,A.slice_from("ir");break;case 6:A.slice_del();break;case 7:A.eq_s_b(1,"u")&&A.slice_del()}}function l(e,s){if(!c())return!0;A.slice_del(),A.ket=A.cursor;var r=A.find_among_b(e,s);return r&&(A.bra=A.cursor,1==r&&c()&&A.slice_del()),!1}function d(e){return!c()||(A.slice_del(),A.ket=A.cursor,A.eq_s_b(2,e)&&(A.bra=A.cursor,c()&&A.slice_del()),!1)}function b(){var e;if(A.ket=A.cursor,e=A.find_among_b(S,46)){switch(A.bra=A.cursor,e){case 1:if(!c())return!1;A.slice_del();break;case 2:if(d("ic"))return!1;break;case 3:if(!c())return!1;A.slice_from("log");break;case 4:if(!c())return!1;A.slice_from("u");break;case 5:if(!c())return!1;A.slice_from("ente");break;case 6:if(!w())return!1;A.slice_del(),A.ket=A.cursor,e=A.find_among_b(C,4),e&&(A.bra=A.cursor,c()&&(A.slice_del(),1==e&&(A.ket=A.cursor,A.eq_s_b(2,"at")&&(A.bra=A.cursor,c()&&A.slice_del()))));break;case 7:if(l(P,3))return!1;break;case 8:if(l(F,3))return!1;break;case 9:if(d("at"))return!1}return!0}return!1}function f(){var e,s;if(A.cursor>=g&&(s=A.limit_backward,A.limit_backward=g,A.ket=A.cursor,e=A.find_among_b(W,12),A.limit_backward=s,e)){if(A.bra=A.cursor,1==e){if(!A.eq_s_b(1,"u"))return!1;A.slice_del()}return!0}return!1}function _(){var e,s,r,n;if(A.cursor>=g&&(s=A.limit_backward,A.limit_backward=g,A.ket=A.cursor,e=A.find_among_b(L,96),A.limit_backward=s,e))switch(A.bra=A.cursor,e){case 
1:r=A.limit-A.cursor,A.eq_s_b(1,"u")?(n=A.limit-A.cursor,A.eq_s_b(1,"g")?A.cursor=A.limit-n:A.cursor=A.limit-r):A.cursor=A.limit-r,A.bra=A.cursor;case 2:A.slice_del()}}function h(){var e,s;if(A.ket=A.cursor,e=A.find_among_b(z,8))switch(A.bra=A.cursor,e){case 1:u()&&A.slice_del();break;case 2:u()&&(A.slice_del(),A.ket=A.cursor,A.eq_s_b(1,"u")&&(A.bra=A.cursor,s=A.limit-A.cursor,A.eq_s_b(1,"g")&&(A.cursor=A.limit-s,u()&&A.slice_del())))}}var v,p,g,k=[new s("",-1,6),new s("á",0,1),new s("é",0,2),new s("í",0,3),new s("ó",0,4),new s("ú",0,5)],y=[new s("la",-1,-1),new s("sela",0,-1),new s("le",-1,-1),new s("me",-1,-1),new s("se",-1,-1),new s("lo",-1,-1),new s("selo",5,-1),new s("las",-1,-1),new s("selas",7,-1),new s("les",-1,-1),new s("los",-1,-1),new s("selos",10,-1),new s("nos",-1,-1)],q=[new s("ando",-1,6),new s("iendo",-1,6),new s("yendo",-1,7),new s("ándo",-1,2),new s("iéndo",-1,1),new s("ar",-1,6),new s("er",-1,6),new s("ir",-1,6),new s("ár",-1,3),new s("ér",-1,4),new s("ír",-1,5)],C=[new s("ic",-1,-1),new s("ad",-1,-1),new s("os",-1,-1),new s("iv",-1,1)],P=[new s("able",-1,1),new s("ible",-1,1),new s("ante",-1,1)],F=[new s("ic",-1,1),new s("abil",-1,1),new s("iv",-1,1)],S=[new s("ica",-1,1),new s("ancia",-1,2),new s("encia",-1,5),new s("adora",-1,2),new s("osa",-1,1),new s("ista",-1,1),new s("iva",-1,9),new s("anza",-1,1),new s("logía",-1,3),new s("idad",-1,8),new s("able",-1,1),new s("ible",-1,1),new s("ante",-1,2),new s("mente",-1,7),new s("amente",13,6),new s("ación",-1,2),new s("ución",-1,4),new s("ico",-1,1),new s("ismo",-1,1),new s("oso",-1,1),new s("amiento",-1,1),new s("imiento",-1,1),new s("ivo",-1,9),new s("ador",-1,2),new s("icas",-1,1),new s("ancias",-1,2),new s("encias",-1,5),new s("adoras",-1,2),new s("osas",-1,1),new s("istas",-1,1),new s("ivas",-1,9),new s("anzas",-1,1),new s("logías",-1,3),new s("idades",-1,8),new s("ables",-1,1),new s("ibles",-1,1),new s("aciones",-1,2),new s("uciones",-1,4),new s("adores",-1,2),new s("antes",-1,2),new s("icos",-1,1),new s("ismos",-1,1),new s("osos",-1,1),new s("amientos",-1,1),new s("imientos",-1,1),new s("ivos",-1,9)],W=[new s("ya",-1,1),new s("ye",-1,1),new s("yan",-1,1),new s("yen",-1,1),new s("yeron",-1,1),new s("yendo",-1,1),new s("yo",-1,1),new s("yas",-1,1),new s("yes",-1,1),new s("yais",-1,1),new s("yamos",-1,1),new s("yó",-1,1)],L=[new s("aba",-1,2),new s("ada",-1,2),new s("ida",-1,2),new s("ara",-1,2),new s("iera",-1,2),new s("ía",-1,2),new s("aría",5,2),new s("ería",5,2),new s("iría",5,2),new s("ad",-1,2),new s("ed",-1,2),new s("id",-1,2),new s("ase",-1,2),new s("iese",-1,2),new s("aste",-1,2),new s("iste",-1,2),new s("an",-1,2),new s("aban",16,2),new s("aran",16,2),new s("ieran",16,2),new s("ían",16,2),new s("arían",20,2),new s("erían",20,2),new s("irían",20,2),new s("en",-1,1),new s("asen",24,2),new s("iesen",24,2),new s("aron",-1,2),new s("ieron",-1,2),new s("arán",-1,2),new s("erán",-1,2),new s("irán",-1,2),new s("ado",-1,2),new s("ido",-1,2),new s("ando",-1,2),new s("iendo",-1,2),new s("ar",-1,2),new s("er",-1,2),new s("ir",-1,2),new s("as",-1,2),new s("abas",39,2),new s("adas",39,2),new s("idas",39,2),new s("aras",39,2),new s("ieras",39,2),new s("ías",39,2),new s("arías",45,2),new s("erías",45,2),new s("irías",45,2),new s("es",-1,1),new s("ases",49,2),new s("ieses",49,2),new s("abais",-1,2),new s("arais",-1,2),new s("ierais",-1,2),new s("íais",-1,2),new s("aríais",55,2),new s("eríais",55,2),new s("iríais",55,2),new s("aseis",-1,2),new s("ieseis",-1,2),new s("asteis",-1,2),new s("isteis",-1,2),new s("áis",-1,2),new 
s("éis",-1,1),new s("aréis",64,2),new s("eréis",64,2),new s("iréis",64,2),new s("ados",-1,2),new s("idos",-1,2),new s("amos",-1,2),new s("ábamos",70,2),new s("áramos",70,2),new s("iéramos",70,2),new s("íamos",70,2),new s("aríamos",74,2),new s("eríamos",74,2),new s("iríamos",74,2),new s("emos",-1,1),new s("aremos",78,2),new s("eremos",78,2),new s("iremos",78,2),new s("ásemos",78,2),new s("iésemos",78,2),new s("imos",-1,2),new s("arás",-1,2),new s("erás",-1,2),new s("irás",-1,2),new s("ís",-1,2),new s("ará",-1,2),new s("erá",-1,2),new s("irá",-1,2),new s("aré",-1,2),new s("eré",-1,2),new s("iré",-1,2),new s("ió",-1,2)],z=[new s("a",-1,1),new s("e",-1,2),new s("o",-1,1),new s("os",-1,1),new s("á",-1,1),new s("é",-1,2),new s("í",-1,1),new s("ó",-1,1)],x=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,1,17,4,10],A=new r;this.setCurrent=function(e){A.setCurrent(e)},this.getCurrent=function(){return A.getCurrent()},this.stem=function(){var e=A.cursor;return t(),A.limit_backward=e,A.cursor=A.limit,m(),A.cursor=A.limit,b()||(A.cursor=A.limit,f()||(A.cursor=A.limit,_())),A.cursor=A.limit,h(),A.cursor=A.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.es.stemmer,"stemmer-es"),e.es.stopWordFilter=e.generateStopWordFilter("a al algo algunas algunos ante antes como con contra cual cuando de del desde donde durante e el ella ellas ellos en entre era erais eran eras eres es esa esas ese eso esos esta estaba estabais estaban estabas estad estada estadas estado estados estamos estando estar estaremos estará estarán estarás estaré estaréis estaría estaríais estaríamos estarían estarías estas este estemos esto estos estoy estuve estuviera estuvierais estuvieran estuvieras estuvieron estuviese estuvieseis estuviesen estuvieses estuvimos estuviste estuvisteis estuviéramos estuviésemos estuvo está estábamos estáis están estás esté estéis estén estés fue fuera fuerais fueran fueras fueron fuese fueseis fuesen fueses fui fuimos fuiste fuisteis fuéramos fuésemos ha habida habidas habido habidos habiendo habremos habrá habrán habrás habré habréis habría habríais habríamos habrían habrías habéis había habíais habíamos habían habías han has hasta hay haya hayamos hayan hayas hayáis he hemos hube hubiera hubierais hubieran hubieras hubieron hubiese hubieseis hubiesen hubieses hubimos hubiste hubisteis hubiéramos hubiésemos hubo la las le les lo los me mi mis mucho muchos muy más mí mía mías mío míos nada ni no nos nosotras nosotros nuestra nuestras nuestro nuestros o os otra otras otro otros para pero poco por porque que quien quienes qué se sea seamos sean seas seremos será serán serás seré seréis sería seríais seríamos serían serías seáis sido siendo sin sobre sois somos son soy su sus suya suyas suyo suyos sí también tanto te tendremos tendrá tendrán tendrás tendré tendréis tendría tendríais tendríamos tendrían tendrías tened tenemos tenga tengamos tengan tengas tengo tengáis tenida tenidas tenido tenidos teniendo tenéis tenía teníais teníamos tenían tenías ti tiene tienen tienes todo todos tu tus tuve tuviera tuvierais tuvieran tuvieras tuvieron tuviese tuvieseis tuviesen tuvieses tuvimos tuviste tuvisteis tuviéramos tuviésemos tuvo tuya tuyas tuyo tuyos tú un una uno unos vosotras vosotros vuestra vuestras vuestro vuestros y ya yo él éramos".split(" ")),e.Pipeline.registerFunction(e.es.stopWordFilter,"stopWordFilter-es")}}); \ No newline at end of file 
diff --git a/assets/javascripts/lunr/min/lunr.fi.min.js b/assets/javascripts/lunr/min/lunr.fi.min.js new file mode 100644 index 00000000..29f5dfce --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.fi.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Finnish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(i,e){"function"==typeof define&&define.amd?define(e):"object"==typeof exports?module.exports=e():e()(i.lunr)}(this,function(){return function(i){if(void 0===i)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===i.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");i.fi=function(){this.pipeline.reset(),this.pipeline.add(i.fi.trimmer,i.fi.stopWordFilter,i.fi.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(i.fi.stemmer))},i.fi.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",i.fi.trimmer=i.trimmerSupport.generateTrimmer(i.fi.wordCharacters),i.Pipeline.registerFunction(i.fi.trimmer,"trimmer-fi"),i.fi.stemmer=function(){var e=i.stemmerSupport.Among,r=i.stemmerSupport.SnowballProgram,n=new function(){function i(){f=A.limit,d=f,n()||(f=A.cursor,n()||(d=A.cursor))}function n(){for(var i;;){if(i=A.cursor,A.in_grouping(W,97,246))break;if(A.cursor=i,i>=A.limit)return!0;A.cursor++}for(A.cursor=i;!A.out_grouping(W,97,246);){if(A.cursor>=A.limit)return!0;A.cursor++}return!1}function t(){return d<=A.cursor}function s(){var i,e;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(h,10)){switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:if(!A.in_grouping_b(x,97,246))return;break;case 2:if(!t())return}A.slice_del()}else A.limit_backward=e}function o(){var i,e,r;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(v,9))switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:r=A.limit-A.cursor,A.eq_s_b(1,"k")||(A.cursor=A.limit-r,A.slice_del());break;case 2:A.slice_del(),A.ket=A.cursor,A.eq_s_b(3,"kse")&&(A.bra=A.cursor,A.slice_from("ksi"));break;case 3:A.slice_del();break;case 4:A.find_among_b(p,6)&&A.slice_del();break;case 5:A.find_among_b(g,6)&&A.slice_del();break;case 6:A.find_among_b(j,2)&&A.slice_del()}else A.limit_backward=e}function l(){return A.find_among_b(q,7)}function a(){return A.eq_s_b(1,"i")&&A.in_grouping_b(L,97,246)}function u(){var i,e,r;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(C,30)){switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:if(!A.eq_s_b(1,"a"))return;break;case 2:case 9:if(!A.eq_s_b(1,"e"))return;break;case 3:if(!A.eq_s_b(1,"i"))return;break;case 4:if(!A.eq_s_b(1,"o"))return;break;case 5:if(!A.eq_s_b(1,"ä"))return;break;case 6:if(!A.eq_s_b(1,"ö"))return;break;case 7:if(r=A.limit-A.cursor,!l()&&(A.cursor=A.limit-r,!A.eq_s_b(2,"ie"))){A.cursor=A.limit-r;break}if(A.cursor=A.limit-r,A.cursor<=A.limit_backward){A.cursor=A.limit-r;break}A.cursor--,A.bra=A.cursor;break;case 8:if(!A.in_grouping_b(W,97,246)||!A.out_grouping_b(W,97,246))return}A.slice_del(),k=!0}else A.limit_backward=e}function c(){var 
i,e,r;if(A.cursor>=d)if(e=A.limit_backward,A.limit_backward=d,A.ket=A.cursor,i=A.find_among_b(P,14)){if(A.bra=A.cursor,A.limit_backward=e,1==i){if(r=A.limit-A.cursor,A.eq_s_b(2,"po"))return;A.cursor=A.limit-r}A.slice_del()}else A.limit_backward=e}function m(){var i;A.cursor>=f&&(i=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,A.find_among_b(F,2)?(A.bra=A.cursor,A.limit_backward=i,A.slice_del()):A.limit_backward=i)}function w(){var i,e,r,n,t,s;if(A.cursor>=f){if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,A.eq_s_b(1,"t")&&(A.bra=A.cursor,r=A.limit-A.cursor,A.in_grouping_b(W,97,246)&&(A.cursor=A.limit-r,A.slice_del(),A.limit_backward=e,n=A.limit-A.cursor,A.cursor>=d&&(A.cursor=d,t=A.limit_backward,A.limit_backward=A.cursor,A.cursor=A.limit-n,A.ket=A.cursor,i=A.find_among_b(S,2))))){if(A.bra=A.cursor,A.limit_backward=t,1==i){if(s=A.limit-A.cursor,A.eq_s_b(2,"po"))return;A.cursor=A.limit-s}return void A.slice_del()}A.limit_backward=e}}function _(){var i,e,r,n;if(A.cursor>=f){for(i=A.limit_backward,A.limit_backward=f,e=A.limit-A.cursor,l()&&(A.cursor=A.limit-e,A.ket=A.cursor,A.cursor>A.limit_backward&&(A.cursor--,A.bra=A.cursor,A.slice_del())),A.cursor=A.limit-e,A.ket=A.cursor,A.in_grouping_b(y,97,228)&&(A.bra=A.cursor,A.out_grouping_b(W,97,246)&&A.slice_del()),A.cursor=A.limit-e,A.ket=A.cursor,A.eq_s_b(1,"j")&&(A.bra=A.cursor,r=A.limit-A.cursor,A.eq_s_b(1,"o")?A.slice_del():(A.cursor=A.limit-r,A.eq_s_b(1,"u")&&A.slice_del())),A.cursor=A.limit-e,A.ket=A.cursor,A.eq_s_b(1,"o")&&(A.bra=A.cursor,A.eq_s_b(1,"j")&&A.slice_del()),A.cursor=A.limit-e,A.limit_backward=i;;){if(n=A.limit-A.cursor,A.out_grouping_b(W,97,246)){A.cursor=A.limit-n;break}if(A.cursor=A.limit-n,A.cursor<=A.limit_backward)return;A.cursor--}A.ket=A.cursor,A.cursor>A.limit_backward&&(A.cursor--,A.bra=A.cursor,b=A.slice_to(),A.eq_v_b(b)&&A.slice_del())}}var k,b,d,f,h=[new e("pa",-1,1),new e("sti",-1,2),new e("kaan",-1,1),new e("han",-1,1),new e("kin",-1,1),new e("hän",-1,1),new e("kään",-1,1),new e("ko",-1,1),new e("pä",-1,1),new e("kö",-1,1)],p=[new e("lla",-1,-1),new e("na",-1,-1),new e("ssa",-1,-1),new e("ta",-1,-1),new e("lta",3,-1),new e("sta",3,-1)],g=[new e("llä",-1,-1),new e("nä",-1,-1),new e("ssä",-1,-1),new e("tä",-1,-1),new e("ltä",3,-1),new e("stä",3,-1)],j=[new e("lle",-1,-1),new e("ine",-1,-1)],v=[new e("nsa",-1,3),new e("mme",-1,3),new e("nne",-1,3),new e("ni",-1,2),new e("si",-1,1),new e("an",-1,4),new e("en",-1,6),new e("än",-1,5),new e("nsä",-1,3)],q=[new e("aa",-1,-1),new e("ee",-1,-1),new e("ii",-1,-1),new e("oo",-1,-1),new e("uu",-1,-1),new e("ää",-1,-1),new e("öö",-1,-1)],C=[new e("a",-1,8),new e("lla",0,-1),new e("na",0,-1),new e("ssa",0,-1),new e("ta",0,-1),new e("lta",4,-1),new e("sta",4,-1),new e("tta",4,9),new e("lle",-1,-1),new e("ine",-1,-1),new e("ksi",-1,-1),new e("n",-1,7),new e("han",11,1),new e("den",11,-1,a),new e("seen",11,-1,l),new e("hen",11,2),new e("tten",11,-1,a),new e("hin",11,3),new e("siin",11,-1,a),new e("hon",11,4),new e("hän",11,5),new e("hön",11,6),new e("ä",-1,8),new e("llä",22,-1),new e("nä",22,-1),new e("ssä",22,-1),new e("tä",22,-1),new e("ltä",26,-1),new e("stä",26,-1),new e("ttä",26,9)],P=[new e("eja",-1,-1),new e("mma",-1,1),new e("imma",1,-1),new e("mpa",-1,1),new e("impa",3,-1),new e("mmi",-1,1),new e("immi",5,-1),new e("mpi",-1,1),new e("impi",7,-1),new e("ejä",-1,-1),new e("mmä",-1,1),new e("immä",10,-1),new e("mpä",-1,1),new e("impä",12,-1)],F=[new e("i",-1,-1),new e("j",-1,-1)],S=[new e("mma",-1,1),new 
e("imma",0,-1)],y=[17,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8],W=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],L=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],x=[17,97,24,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],A=new r;this.setCurrent=function(i){A.setCurrent(i)},this.getCurrent=function(){return A.getCurrent()},this.stem=function(){var e=A.cursor;return i(),k=!1,A.limit_backward=e,A.cursor=A.limit,s(),A.cursor=A.limit,o(),A.cursor=A.limit,u(),A.cursor=A.limit,c(),A.cursor=A.limit,k?(m(),A.cursor=A.limit):(A.cursor=A.limit,w(),A.cursor=A.limit),_(),!0}};return function(i){return"function"==typeof i.update?i.update(function(i){return n.setCurrent(i),n.stem(),n.getCurrent()}):(n.setCurrent(i),n.stem(),n.getCurrent())}}(),i.Pipeline.registerFunction(i.fi.stemmer,"stemmer-fi"),i.fi.stopWordFilter=i.generateStopWordFilter("ei eivät emme en et ette että he heidän heidät heihin heille heillä heiltä heissä heistä heitä hän häneen hänelle hänellä häneltä hänen hänessä hänestä hänet häntä itse ja johon joiden joihin joiksi joilla joille joilta joina joissa joista joita joka joksi jolla jolle jolta jona jonka jos jossa josta jota jotka kanssa keiden keihin keiksi keille keillä keiltä keinä keissä keistä keitä keneen keneksi kenelle kenellä keneltä kenen kenenä kenessä kenestä kenet ketkä ketkä ketä koska kuin kuka kun me meidän meidät meihin meille meillä meiltä meissä meistä meitä mihin miksi mikä mille millä miltä minkä minkä minua minulla minulle minulta minun minussa minusta minut minuun minä minä missä mistä mitkä mitä mukaan mutta ne niiden niihin niiksi niille niillä niiltä niin niin niinä niissä niistä niitä noiden noihin noiksi noilla noille noilta noin noina noissa noista noita nuo nyt näiden näihin näiksi näille näillä näiltä näinä näissä näistä näitä nämä ole olemme olen olet olette oli olimme olin olisi olisimme olisin olisit olisitte olisivat olit olitte olivat olla olleet ollut on ovat poikki se sekä sen siihen siinä siitä siksi sille sillä sillä siltä sinua sinulla sinulle sinulta sinun sinussa sinusta sinut sinuun sinä sinä sitä tai te teidän teidät teihin teille teillä teiltä teissä teistä teitä tuo tuohon tuoksi tuolla tuolle tuolta tuon tuona tuossa tuosta tuota tähän täksi tälle tällä tältä tämä tämän tänä tässä tästä tätä vaan vai vaikka yli".split(" ")),i.Pipeline.registerFunction(i.fi.stopWordFilter,"stopWordFilter-fi")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.fr.min.js b/assets/javascripts/lunr/min/lunr.fr.min.js new file mode 100644 index 00000000..68cd0094 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.fr.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `French` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.fr=function(){this.pipeline.reset(),this.pipeline.add(e.fr.trimmer,e.fr.stopWordFilter,e.fr.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.fr.stemmer))},e.fr.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.fr.trimmer=e.trimmerSupport.generateTrimmer(e.fr.wordCharacters),e.Pipeline.registerFunction(e.fr.trimmer,"trimmer-fr"),e.fr.stemmer=function(){var r=e.stemmerSupport.Among,s=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,s){return!(!W.eq_s(1,e)||(W.ket=W.cursor,!W.in_grouping(F,97,251)))&&(W.slice_from(r),W.cursor=s,!0)}function i(e,r,s){return!!W.eq_s(1,e)&&(W.ket=W.cursor,W.slice_from(r),W.cursor=s,!0)}function n(){for(var r,s;;){if(r=W.cursor,W.in_grouping(F,97,251)){if(W.bra=W.cursor,s=W.cursor,e("u","U",r))continue;if(W.cursor=s,e("i","I",r))continue;if(W.cursor=s,i("y","Y",r))continue}if(W.cursor=r,W.bra=r,!e("y","Y",r)){if(W.cursor=r,W.eq_s(1,"q")&&(W.bra=W.cursor,i("u","U",r)))continue;if(W.cursor=r,r>=W.limit)return;W.cursor++}}}function t(){for(;!W.in_grouping(F,97,251);){if(W.cursor>=W.limit)return!0;W.cursor++}for(;!W.out_grouping(F,97,251);){if(W.cursor>=W.limit)return!0;W.cursor++}return!1}function u(){var e=W.cursor;if(q=W.limit,g=q,p=q,W.in_grouping(F,97,251)&&W.in_grouping(F,97,251)&&W.cursor=W.limit){W.cursor=q;break}W.cursor++}while(!W.in_grouping(F,97,251))}q=W.cursor,W.cursor=e,t()||(g=W.cursor,t()||(p=W.cursor))}function o(){for(var e,r;;){if(r=W.cursor,W.bra=r,!(e=W.find_among(h,4)))break;switch(W.ket=W.cursor,e){case 1:W.slice_from("i");break;case 2:W.slice_from("u");break;case 3:W.slice_from("y");break;case 4:if(W.cursor>=W.limit)return;W.cursor++}}}function c(){return q<=W.cursor}function a(){return g<=W.cursor}function l(){return p<=W.cursor}function w(){var e,r;if(W.ket=W.cursor,e=W.find_among_b(C,43)){switch(W.bra=W.cursor,e){case 1:if(!l())return!1;W.slice_del();break;case 2:if(!l())return!1;W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"ic")&&(W.bra=W.cursor,l()?W.slice_del():W.slice_from("iqU"));break;case 3:if(!l())return!1;W.slice_from("log");break;case 4:if(!l())return!1;W.slice_from("u");break;case 5:if(!l())return!1;W.slice_from("ent");break;case 6:if(!c())return!1;if(W.slice_del(),W.ket=W.cursor,e=W.find_among_b(z,6))switch(W.bra=W.cursor,e){case 1:l()&&(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"at")&&(W.bra=W.cursor,l()&&W.slice_del()));break;case 2:l()?W.slice_del():a()&&W.slice_from("eux");break;case 3:l()&&W.slice_del();break;case 4:c()&&W.slice_from("i")}break;case 7:if(!l())return!1;if(W.slice_del(),W.ket=W.cursor,e=W.find_among_b(y,3))switch(W.bra=W.cursor,e){case 1:l()?W.slice_del():W.slice_from("abl");break;case 2:l()?W.slice_del():W.slice_from("iqU");break;case 3:l()&&W.slice_del()}break;case 8:if(!l())return!1;if(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"at")&&(W.bra=W.cursor,l()&&(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"ic")))){W.bra=W.cursor,l()?W.slice_del():W.slice_from("iqU");break}break;case 9:W.slice_from("eau");break;case 10:if(!a())return!1;W.slice_from("al");break;case 11:if(l())W.slice_del();else{if(!a())return!1;W.slice_from("eux")}break;case 12:if(!a()||!W.out_grouping_b(F,97,251))return!1;W.slice_del();break;case 13:return c()&&W.slice_from("ant"),!1;case 14:return c()&&W.slice_from("ent"),!1;case 15:return r=W.limit-W.cursor,W.in_grouping_b(F,97,251)&&c()&&(W.cursor=W.limit-r,W.slice_del()),!1}return!0}return!1}function f(){var 
e,r;if(W.cursor=q){if(s=W.limit_backward,W.limit_backward=q,W.ket=W.cursor,e=W.find_among_b(P,7))switch(W.bra=W.cursor,e){case 1:if(l()){if(i=W.limit-W.cursor,!W.eq_s_b(1,"s")&&(W.cursor=W.limit-i,!W.eq_s_b(1,"t")))break;W.slice_del()}break;case 2:W.slice_from("i");break;case 3:W.slice_del();break;case 4:W.eq_s_b(2,"gu")&&W.slice_del()}W.limit_backward=s}}function b(){var e=W.limit-W.cursor;W.find_among_b(U,5)&&(W.cursor=W.limit-e,W.ket=W.cursor,W.cursor>W.limit_backward&&(W.cursor--,W.bra=W.cursor,W.slice_del()))}function d(){for(var e,r=1;W.out_grouping_b(F,97,251);)r--;if(r<=0){if(W.ket=W.cursor,e=W.limit-W.cursor,!W.eq_s_b(1,"é")&&(W.cursor=W.limit-e,!W.eq_s_b(1,"è")))return;W.bra=W.cursor,W.slice_from("e")}}function k(){if(!w()&&(W.cursor=W.limit,!f()&&(W.cursor=W.limit,!m())))return W.cursor=W.limit,void _();W.cursor=W.limit,W.ket=W.cursor,W.eq_s_b(1,"Y")?(W.bra=W.cursor,W.slice_from("i")):(W.cursor=W.limit,W.eq_s_b(1,"ç")&&(W.bra=W.cursor,W.slice_from("c")))}var p,g,q,v=[new r("col",-1,-1),new r("par",-1,-1),new r("tap",-1,-1)],h=[new r("",-1,4),new r("I",0,1),new r("U",0,2),new r("Y",0,3)],z=[new r("iqU",-1,3),new r("abl",-1,3),new r("Ièr",-1,4),new r("ièr",-1,4),new r("eus",-1,2),new r("iv",-1,1)],y=[new r("ic",-1,2),new r("abil",-1,1),new r("iv",-1,3)],C=[new r("iqUe",-1,1),new r("atrice",-1,2),new r("ance",-1,1),new r("ence",-1,5),new r("logie",-1,3),new r("able",-1,1),new r("isme",-1,1),new r("euse",-1,11),new r("iste",-1,1),new r("ive",-1,8),new r("if",-1,8),new r("usion",-1,4),new r("ation",-1,2),new r("ution",-1,4),new r("ateur",-1,2),new r("iqUes",-1,1),new r("atrices",-1,2),new r("ances",-1,1),new r("ences",-1,5),new r("logies",-1,3),new r("ables",-1,1),new r("ismes",-1,1),new r("euses",-1,11),new r("istes",-1,1),new r("ives",-1,8),new r("ifs",-1,8),new r("usions",-1,4),new r("ations",-1,2),new r("utions",-1,4),new r("ateurs",-1,2),new r("ments",-1,15),new r("ements",30,6),new r("issements",31,12),new r("ités",-1,7),new r("ment",-1,15),new r("ement",34,6),new r("issement",35,12),new r("amment",34,13),new r("emment",34,14),new r("aux",-1,10),new r("eaux",39,9),new r("eux",-1,1),new r("ité",-1,7)],x=[new r("ira",-1,1),new r("ie",-1,1),new r("isse",-1,1),new r("issante",-1,1),new r("i",-1,1),new r("irai",4,1),new r("ir",-1,1),new r("iras",-1,1),new r("ies",-1,1),new r("îmes",-1,1),new r("isses",-1,1),new r("issantes",-1,1),new r("îtes",-1,1),new r("is",-1,1),new r("irais",13,1),new r("issais",13,1),new r("irions",-1,1),new r("issions",-1,1),new r("irons",-1,1),new r("issons",-1,1),new r("issants",-1,1),new r("it",-1,1),new r("irait",21,1),new r("issait",21,1),new r("issant",-1,1),new r("iraIent",-1,1),new r("issaIent",-1,1),new r("irent",-1,1),new r("issent",-1,1),new r("iront",-1,1),new r("ît",-1,1),new r("iriez",-1,1),new r("issiez",-1,1),new r("irez",-1,1),new r("issez",-1,1)],I=[new r("a",-1,3),new r("era",0,2),new r("asse",-1,3),new r("ante",-1,3),new r("ée",-1,2),new r("ai",-1,3),new r("erai",5,2),new r("er",-1,2),new r("as",-1,3),new r("eras",8,2),new r("âmes",-1,3),new r("asses",-1,3),new r("antes",-1,3),new r("âtes",-1,3),new r("ées",-1,2),new r("ais",-1,3),new r("erais",15,2),new r("ions",-1,1),new r("erions",17,2),new r("assions",17,3),new r("erons",-1,2),new r("ants",-1,3),new r("és",-1,2),new r("ait",-1,3),new r("erait",23,2),new r("ant",-1,3),new r("aIent",-1,3),new r("eraIent",26,2),new r("èrent",-1,2),new r("assent",-1,3),new r("eront",-1,2),new r("ât",-1,3),new r("ez",-1,2),new r("iez",32,2),new r("eriez",33,2),new r("assiez",33,3),new r("erez",32,2),new 
r("é",-1,2)],P=[new r("e",-1,3),new r("Ière",0,2),new r("ière",0,2),new r("ion",-1,1),new r("Ier",-1,2),new r("ier",-1,2),new r("ë",-1,4)],U=[new r("ell",-1,-1),new r("eill",-1,-1),new r("enn",-1,-1),new r("onn",-1,-1),new r("ett",-1,-1)],F=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,128,130,103,8,5],S=[1,65,20,0,0,0,0,0,0,0,0,0,0,0,0,0,128],W=new s;this.setCurrent=function(e){W.setCurrent(e)},this.getCurrent=function(){return W.getCurrent()},this.stem=function(){var e=W.cursor;return n(),W.cursor=e,u(),W.limit_backward=e,W.cursor=W.limit,k(),W.cursor=W.limit,b(),W.cursor=W.limit,d(),W.cursor=W.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.fr.stemmer,"stemmer-fr"),e.fr.stopWordFilter=e.generateStopWordFilter("ai aie aient aies ait as au aura aurai auraient aurais aurait auras aurez auriez aurions aurons auront aux avaient avais avait avec avez aviez avions avons ayant ayez ayons c ce ceci celà ces cet cette d dans de des du elle en es est et eu eue eues eurent eus eusse eussent eusses eussiez eussions eut eux eûmes eût eûtes furent fus fusse fussent fusses fussiez fussions fut fûmes fût fûtes ici il ils j je l la le les leur leurs lui m ma mais me mes moi mon même n ne nos notre nous on ont ou par pas pour qu que quel quelle quelles quels qui s sa sans se sera serai seraient serais serait seras serez seriez serions serons seront ses soi soient sois soit sommes son sont soyez soyons suis sur t ta te tes toi ton tu un une vos votre vous y à étaient étais était étant étiez étions été étée étées étés êtes".split(" ")),e.Pipeline.registerFunction(e.fr.stopWordFilter,"stopWordFilter-fr")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.hu.min.js b/assets/javascripts/lunr/min/lunr.hu.min.js new file mode 100644 index 00000000..ed9d909f --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.hu.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Hungarian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,n){"function"==typeof define&&define.amd?define(n):"object"==typeof exports?module.exports=n():n()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.hu=function(){this.pipeline.reset(),this.pipeline.add(e.hu.trimmer,e.hu.stopWordFilter,e.hu.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.hu.stemmer))},e.hu.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.hu.trimmer=e.trimmerSupport.generateTrimmer(e.hu.wordCharacters),e.Pipeline.registerFunction(e.hu.trimmer,"trimmer-hu"),e.hu.stemmer=function(){var n=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,i=new function(){function e(){var e,n=L.cursor;if(d=L.limit,L.in_grouping(W,97,252))for(;;){if(e=L.cursor,L.out_grouping(W,97,252))return L.cursor=e,L.find_among(g,8)||(L.cursor=e,e=L.limit)return void(d=e);L.cursor++}if(L.cursor=n,L.out_grouping(W,97,252)){for(;!L.in_grouping(W,97,252);){if(L.cursor>=L.limit)return;L.cursor++}d=L.cursor}}function i(){return d<=L.cursor}function a(){var e;if(L.ket=L.cursor,(e=L.find_among_b(h,2))&&(L.bra=L.cursor,i()))switch(e){case 1:L.slice_from("a");break;case 2:L.slice_from("e")}}function t(){var e=L.limit-L.cursor;return!!L.find_among_b(p,23)&&(L.cursor=L.limit-e,!0)}function s(){if(L.cursor>L.limit_backward){L.cursor--,L.ket=L.cursor;var e=L.cursor-1;L.limit_backward<=e&&e<=L.limit&&(L.cursor=e,L.bra=e,L.slice_del())}}function c(){var e;if(L.ket=L.cursor,(e=L.find_among_b(_,2))&&(L.bra=L.cursor,i())){if((1==e||2==e)&&!t())return;L.slice_del(),s()}}function o(){L.ket=L.cursor,L.find_among_b(v,44)&&(L.bra=L.cursor,i()&&(L.slice_del(),a()))}function w(){var e;if(L.ket=L.cursor,(e=L.find_among_b(z,3))&&(L.bra=L.cursor,i()))switch(e){case 1:L.slice_from("e");break;case 2:case 3:L.slice_from("a")}}function l(){var e;if(L.ket=L.cursor,(e=L.find_among_b(y,6))&&(L.bra=L.cursor,i()))switch(e){case 1:case 2:L.slice_del();break;case 3:L.slice_from("a");break;case 4:L.slice_from("e")}}function u(){var e;if(L.ket=L.cursor,(e=L.find_among_b(j,2))&&(L.bra=L.cursor,i())){if((1==e||2==e)&&!t())return;L.slice_del(),s()}}function m(){var e;if(L.ket=L.cursor,(e=L.find_among_b(C,7))&&(L.bra=L.cursor,i()))switch(e){case 1:L.slice_from("a");break;case 2:L.slice_from("e");break;case 3:case 4:case 5:case 6:case 7:L.slice_del()}}function k(){var e;if(L.ket=L.cursor,(e=L.find_among_b(P,12))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 7:case 9:L.slice_del();break;case 2:case 5:case 8:L.slice_from("e");break;case 3:case 6:L.slice_from("a")}}function f(){var e;if(L.ket=L.cursor,(e=L.find_among_b(F,31))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 7:case 8:case 9:case 12:case 13:case 16:case 17:case 18:L.slice_del();break;case 2:case 5:case 10:case 14:case 19:L.slice_from("a");break;case 3:case 6:case 11:case 15:case 20:L.slice_from("e")}}function b(){var e;if(L.ket=L.cursor,(e=L.find_among_b(S,42))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 5:case 6:case 9:case 10:case 11:case 14:case 15:case 16:case 17:case 20:case 21:case 24:case 25:case 26:case 29:L.slice_del();break;case 2:case 7:case 12:case 18:case 22:case 27:L.slice_from("a");break;case 3:case 8:case 13:case 19:case 23:case 28:L.slice_from("e")}}var d,g=[new n("cs",-1,-1),new n("dzs",-1,-1),new n("gy",-1,-1),new n("ly",-1,-1),new n("ny",-1,-1),new n("sz",-1,-1),new n("ty",-1,-1),new n("zs",-1,-1)],h=[new n("á",-1,1),new n("é",-1,2)],p=[new n("bb",-1,-1),new n("cc",-1,-1),new n("dd",-1,-1),new n("ff",-1,-1),new n("gg",-1,-1),new n("jj",-1,-1),new n("kk",-1,-1),new n("ll",-1,-1),new n("mm",-1,-1),new n("nn",-1,-1),new n("pp",-1,-1),new 
n("rr",-1,-1),new n("ccs",-1,-1),new n("ss",-1,-1),new n("zzs",-1,-1),new n("tt",-1,-1),new n("vv",-1,-1),new n("ggy",-1,-1),new n("lly",-1,-1),new n("nny",-1,-1),new n("tty",-1,-1),new n("ssz",-1,-1),new n("zz",-1,-1)],_=[new n("al",-1,1),new n("el",-1,2)],v=[new n("ba",-1,-1),new n("ra",-1,-1),new n("be",-1,-1),new n("re",-1,-1),new n("ig",-1,-1),new n("nak",-1,-1),new n("nek",-1,-1),new n("val",-1,-1),new n("vel",-1,-1),new n("ul",-1,-1),new n("nál",-1,-1),new n("nél",-1,-1),new n("ból",-1,-1),new n("ról",-1,-1),new n("tól",-1,-1),new n("bõl",-1,-1),new n("rõl",-1,-1),new n("tõl",-1,-1),new n("ül",-1,-1),new n("n",-1,-1),new n("an",19,-1),new n("ban",20,-1),new n("en",19,-1),new n("ben",22,-1),new n("képpen",22,-1),new n("on",19,-1),new n("ön",19,-1),new n("képp",-1,-1),new n("kor",-1,-1),new n("t",-1,-1),new n("at",29,-1),new n("et",29,-1),new n("ként",29,-1),new n("anként",32,-1),new n("enként",32,-1),new n("onként",32,-1),new n("ot",29,-1),new n("ért",29,-1),new n("öt",29,-1),new n("hez",-1,-1),new n("hoz",-1,-1),new n("höz",-1,-1),new n("vá",-1,-1),new n("vé",-1,-1)],z=[new n("án",-1,2),new n("én",-1,1),new n("ánként",-1,3)],y=[new n("stul",-1,2),new n("astul",0,1),new n("ástul",0,3),new n("stül",-1,2),new n("estül",3,1),new n("éstül",3,4)],j=[new n("á",-1,1),new n("é",-1,2)],C=[new n("k",-1,7),new n("ak",0,4),new n("ek",0,6),new n("ok",0,5),new n("ák",0,1),new n("ék",0,2),new n("ök",0,3)],P=[new n("éi",-1,7),new n("áéi",0,6),new n("ééi",0,5),new n("é",-1,9),new n("ké",3,4),new n("aké",4,1),new n("eké",4,1),new n("oké",4,1),new n("áké",4,3),new n("éké",4,2),new n("öké",4,1),new n("éé",3,8)],F=[new n("a",-1,18),new n("ja",0,17),new n("d",-1,16),new n("ad",2,13),new n("ed",2,13),new n("od",2,13),new n("ád",2,14),new n("éd",2,15),new n("öd",2,13),new n("e",-1,18),new n("je",9,17),new n("nk",-1,4),new n("unk",11,1),new n("ánk",11,2),new n("énk",11,3),new n("ünk",11,1),new n("uk",-1,8),new n("juk",16,7),new n("ájuk",17,5),new n("ük",-1,8),new n("jük",19,7),new n("éjük",20,6),new n("m",-1,12),new n("am",22,9),new n("em",22,9),new n("om",22,9),new n("ám",22,10),new n("ém",22,11),new n("o",-1,18),new n("á",-1,19),new n("é",-1,20)],S=[new n("id",-1,10),new n("aid",0,9),new n("jaid",1,6),new n("eid",0,9),new n("jeid",3,6),new n("áid",0,7),new n("éid",0,8),new n("i",-1,15),new n("ai",7,14),new n("jai",8,11),new n("ei",7,14),new n("jei",10,11),new n("ái",7,12),new n("éi",7,13),new n("itek",-1,24),new n("eitek",14,21),new n("jeitek",15,20),new n("éitek",14,23),new n("ik",-1,29),new n("aik",18,26),new n("jaik",19,25),new n("eik",18,26),new n("jeik",21,25),new n("áik",18,27),new n("éik",18,28),new n("ink",-1,20),new n("aink",25,17),new n("jaink",26,16),new n("eink",25,17),new n("jeink",28,16),new n("áink",25,18),new n("éink",25,19),new n("aitok",-1,21),new n("jaitok",32,20),new n("áitok",-1,22),new n("im",-1,5),new n("aim",35,4),new n("jaim",36,1),new n("eim",35,4),new n("jeim",38,1),new n("áim",35,2),new n("éim",35,3)],W=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,1,17,52,14],L=new r;this.setCurrent=function(e){L.setCurrent(e)},this.getCurrent=function(){return L.getCurrent()},this.stem=function(){var n=L.cursor;return e(),L.limit_backward=n,L.cursor=L.limit,c(),L.cursor=L.limit,o(),L.cursor=L.limit,w(),L.cursor=L.limit,l(),L.cursor=L.limit,u(),L.cursor=L.limit,k(),L.cursor=L.limit,f(),L.cursor=L.limit,b(),L.cursor=L.limit,m(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return 
i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.hu.stemmer,"stemmer-hu"),e.hu.stopWordFilter=e.generateStopWordFilter("a abban ahhoz ahogy ahol aki akik akkor alatt amely amelyek amelyekben amelyeket amelyet amelynek ami amikor amit amolyan amíg annak arra arról az azok azon azonban azt aztán azután azzal azért be belül benne bár cikk cikkek cikkeket csak de e ebben eddig egy egyes egyetlen egyik egyre egyéb egész ehhez ekkor el ellen elsõ elég elõ elõször elõtt emilyen ennek erre ez ezek ezen ezt ezzel ezért fel felé hanem hiszen hogy hogyan igen ill ill. illetve ilyen ilyenkor ismét ison itt jobban jó jól kell kellett keressünk keresztül ki kívül között közül legalább legyen lehet lehetett lenne lenni lesz lett maga magát majd majd meg mellett mely melyek mert mi mikor milyen minden mindenki mindent mindig mint mintha mit mivel miért most már más másik még míg nagy nagyobb nagyon ne nekem neki nem nincs néha néhány nélkül olyan ott pedig persze rá s saját sem semmi sok sokat sokkal szemben szerint szinte számára talán tehát teljes tovább továbbá több ugyanis utolsó után utána vagy vagyis vagyok valaki valami valamint való van vannak vele vissza viszont volna volt voltak voltam voltunk által általában át én éppen és így õ õk õket össze úgy új újabb újra".split(" ")),e.Pipeline.registerFunction(e.hu.stopWordFilter,"stopWordFilter-hu")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.it.min.js b/assets/javascripts/lunr/min/lunr.it.min.js new file mode 100644 index 00000000..344b6a3c --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.it.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Italian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.it=function(){this.pipeline.reset(),this.pipeline.add(e.it.trimmer,e.it.stopWordFilter,e.it.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.it.stemmer))},e.it.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.it.trimmer=e.trimmerSupport.generateTrimmer(e.it.wordCharacters),e.Pipeline.registerFunction(e.it.trimmer,"trimmer-it"),e.it.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,n){return!(!x.eq_s(1,e)||(x.ket=x.cursor,!x.in_grouping(L,97,249)))&&(x.slice_from(r),x.cursor=n,!0)}function i(){for(var r,n,i,o,t=x.cursor;;){if(x.bra=x.cursor,r=x.find_among(h,7))switch(x.ket=x.cursor,r){case 1:x.slice_from("à");continue;case 2:x.slice_from("è");continue;case 3:x.slice_from("ì");continue;case 4:x.slice_from("ò");continue;case 5:x.slice_from("ù");continue;case 6:x.slice_from("qU");continue;case 7:if(x.cursor>=x.limit)break;x.cursor++;continue}break}for(x.cursor=t;;)for(n=x.cursor;;){if(i=x.cursor,x.in_grouping(L,97,249)){if(x.bra=x.cursor,o=x.cursor,e("u","U",i))break;if(x.cursor=o,e("i","I",i))break}if(x.cursor=i,x.cursor>=x.limit)return void(x.cursor=n);x.cursor++}}function o(e){if(x.cursor=e,!x.in_grouping(L,97,249))return!1;for(;!x.out_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}return!0}function t(){if(x.in_grouping(L,97,249)){var e=x.cursor;if(x.out_grouping(L,97,249)){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return o(e);x.cursor++}return!0}return o(e)}return!1}function s(){var e,r=x.cursor;if(!t()){if(x.cursor=r,!x.out_grouping(L,97,249))return;if(e=x.cursor,x.out_grouping(L,97,249)){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return x.cursor=e,void(x.in_grouping(L,97,249)&&x.cursor=x.limit)return;x.cursor++}k=x.cursor}function a(){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}for(;!x.out_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}return!0}function u(){var e=x.cursor;k=x.limit,p=k,g=k,s(),x.cursor=e,a()&&(p=x.cursor,a()&&(g=x.cursor))}function c(){for(var e;;){if(x.bra=x.cursor,!(e=x.find_among(q,3)))break;switch(x.ket=x.cursor,e){case 1:x.slice_from("i");break;case 2:x.slice_from("u");break;case 3:if(x.cursor>=x.limit)return;x.cursor++}}}function w(){return k<=x.cursor}function l(){return p<=x.cursor}function m(){return g<=x.cursor}function f(){var e;if(x.ket=x.cursor,x.find_among_b(C,37)&&(x.bra=x.cursor,(e=x.find_among_b(z,5))&&w()))switch(e){case 1:x.slice_del();break;case 2:x.slice_from("e")}}function v(){var e;if(x.ket=x.cursor,!(e=x.find_among_b(S,51)))return!1;switch(x.bra=x.cursor,e){case 1:if(!m())return!1;x.slice_del();break;case 2:if(!m())return!1;x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"ic")&&(x.bra=x.cursor,m()&&x.slice_del());break;case 3:if(!m())return!1;x.slice_from("log");break;case 4:if(!m())return!1;x.slice_from("u");break;case 5:if(!m())return!1;x.slice_from("ente");break;case 6:if(!w())return!1;x.slice_del();break;case 7:if(!l())return!1;x.slice_del(),x.ket=x.cursor,e=x.find_among_b(P,4),e&&(x.bra=x.cursor,m()&&(x.slice_del(),1==e&&(x.ket=x.cursor,x.eq_s_b(2,"at")&&(x.bra=x.cursor,m()&&x.slice_del()))));break;case 8:if(!m())return!1;x.slice_del(),x.ket=x.cursor,e=x.find_among_b(F,3),e&&(x.bra=x.cursor,1==e&&m()&&x.slice_del());break;case 
9:if(!m())return!1;x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"at")&&(x.bra=x.cursor,m()&&(x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"ic")&&(x.bra=x.cursor,m()&&x.slice_del())))}return!0}function b(){var e,r;x.cursor>=k&&(r=x.limit_backward,x.limit_backward=k,x.ket=x.cursor,e=x.find_among_b(W,87),e&&(x.bra=x.cursor,1==e&&x.slice_del()),x.limit_backward=r)}function d(){var e=x.limit-x.cursor;if(x.ket=x.cursor,x.in_grouping_b(y,97,242)&&(x.bra=x.cursor,w()&&(x.slice_del(),x.ket=x.cursor,x.eq_s_b(1,"i")&&(x.bra=x.cursor,w()))))return void x.slice_del();x.cursor=x.limit-e}function _(){d(),x.ket=x.cursor,x.eq_s_b(1,"h")&&(x.bra=x.cursor,x.in_grouping_b(U,99,103)&&w()&&x.slice_del())}var g,p,k,h=[new r("",-1,7),new r("qu",0,6),new r("á",0,1),new r("é",0,2),new r("í",0,3),new r("ó",0,4),new r("ú",0,5)],q=[new r("",-1,3),new r("I",0,1),new r("U",0,2)],C=[new r("la",-1,-1),new r("cela",0,-1),new r("gliela",0,-1),new r("mela",0,-1),new r("tela",0,-1),new r("vela",0,-1),new r("le",-1,-1),new r("cele",6,-1),new r("gliele",6,-1),new r("mele",6,-1),new r("tele",6,-1),new r("vele",6,-1),new r("ne",-1,-1),new r("cene",12,-1),new r("gliene",12,-1),new r("mene",12,-1),new r("sene",12,-1),new r("tene",12,-1),new r("vene",12,-1),new r("ci",-1,-1),new r("li",-1,-1),new r("celi",20,-1),new r("glieli",20,-1),new r("meli",20,-1),new r("teli",20,-1),new r("veli",20,-1),new r("gli",20,-1),new r("mi",-1,-1),new r("si",-1,-1),new r("ti",-1,-1),new r("vi",-1,-1),new r("lo",-1,-1),new r("celo",31,-1),new r("glielo",31,-1),new r("melo",31,-1),new r("telo",31,-1),new r("velo",31,-1)],z=[new r("ando",-1,1),new r("endo",-1,1),new r("ar",-1,2),new r("er",-1,2),new r("ir",-1,2)],P=[new r("ic",-1,-1),new r("abil",-1,-1),new r("os",-1,-1),new r("iv",-1,1)],F=[new r("ic",-1,1),new r("abil",-1,1),new r("iv",-1,1)],S=[new r("ica",-1,1),new r("logia",-1,3),new r("osa",-1,1),new r("ista",-1,1),new r("iva",-1,9),new r("anza",-1,1),new r("enza",-1,5),new r("ice",-1,1),new r("atrice",7,1),new r("iche",-1,1),new r("logie",-1,3),new r("abile",-1,1),new r("ibile",-1,1),new r("usione",-1,4),new r("azione",-1,2),new r("uzione",-1,4),new r("atore",-1,2),new r("ose",-1,1),new r("ante",-1,1),new r("mente",-1,1),new r("amente",19,7),new r("iste",-1,1),new r("ive",-1,9),new r("anze",-1,1),new r("enze",-1,5),new r("ici",-1,1),new r("atrici",25,1),new r("ichi",-1,1),new r("abili",-1,1),new r("ibili",-1,1),new r("ismi",-1,1),new r("usioni",-1,4),new r("azioni",-1,2),new r("uzioni",-1,4),new r("atori",-1,2),new r("osi",-1,1),new r("anti",-1,1),new r("amenti",-1,6),new r("imenti",-1,6),new r("isti",-1,1),new r("ivi",-1,9),new r("ico",-1,1),new r("ismo",-1,1),new r("oso",-1,1),new r("amento",-1,6),new r("imento",-1,6),new r("ivo",-1,9),new r("ità",-1,8),new r("istà",-1,1),new r("istè",-1,1),new r("istì",-1,1)],W=[new r("isca",-1,1),new r("enda",-1,1),new r("ata",-1,1),new r("ita",-1,1),new r("uta",-1,1),new r("ava",-1,1),new r("eva",-1,1),new r("iva",-1,1),new r("erebbe",-1,1),new r("irebbe",-1,1),new r("isce",-1,1),new r("ende",-1,1),new r("are",-1,1),new r("ere",-1,1),new r("ire",-1,1),new r("asse",-1,1),new r("ate",-1,1),new r("avate",16,1),new r("evate",16,1),new r("ivate",16,1),new r("ete",-1,1),new r("erete",20,1),new r("irete",20,1),new r("ite",-1,1),new r("ereste",-1,1),new r("ireste",-1,1),new r("ute",-1,1),new r("erai",-1,1),new r("irai",-1,1),new r("isci",-1,1),new r("endi",-1,1),new r("erei",-1,1),new r("irei",-1,1),new r("assi",-1,1),new r("ati",-1,1),new r("iti",-1,1),new r("eresti",-1,1),new r("iresti",-1,1),new r("uti",-1,1),new 
r("avi",-1,1),new r("evi",-1,1),new r("ivi",-1,1),new r("isco",-1,1),new r("ando",-1,1),new r("endo",-1,1),new r("Yamo",-1,1),new r("iamo",-1,1),new r("avamo",-1,1),new r("evamo",-1,1),new r("ivamo",-1,1),new r("eremo",-1,1),new r("iremo",-1,1),new r("assimo",-1,1),new r("ammo",-1,1),new r("emmo",-1,1),new r("eremmo",54,1),new r("iremmo",54,1),new r("immo",-1,1),new r("ano",-1,1),new r("iscano",58,1),new r("avano",58,1),new r("evano",58,1),new r("ivano",58,1),new r("eranno",-1,1),new r("iranno",-1,1),new r("ono",-1,1),new r("iscono",65,1),new r("arono",65,1),new r("erono",65,1),new r("irono",65,1),new r("erebbero",-1,1),new r("irebbero",-1,1),new r("assero",-1,1),new r("essero",-1,1),new r("issero",-1,1),new r("ato",-1,1),new r("ito",-1,1),new r("uto",-1,1),new r("avo",-1,1),new r("evo",-1,1),new r("ivo",-1,1),new r("ar",-1,1),new r("ir",-1,1),new r("erà",-1,1),new r("irà",-1,1),new r("erò",-1,1),new r("irò",-1,1)],L=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,128,128,8,2,1],y=[17,65,0,0,0,0,0,0,0,0,0,0,0,0,0,128,128,8,2],U=[17],x=new n;this.setCurrent=function(e){x.setCurrent(e)},this.getCurrent=function(){return x.getCurrent()},this.stem=function(){var e=x.cursor;return i(),x.cursor=e,u(),x.limit_backward=e,x.cursor=x.limit,f(),x.cursor=x.limit,v()||(x.cursor=x.limit,b()),x.cursor=x.limit,_(),x.cursor=x.limit_backward,c(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.it.stemmer,"stemmer-it"),e.it.stopWordFilter=e.generateStopWordFilter("a abbia abbiamo abbiano abbiate ad agl agli ai al all alla alle allo anche avemmo avendo avesse avessero avessi avessimo aveste avesti avete aveva avevamo avevano avevate avevi avevo avrai avranno avrebbe avrebbero avrei avremmo avremo avreste avresti avrete avrà avrò avuta avute avuti avuto c che chi ci coi col come con contro cui da dagl dagli dai dal dall dalla dalle dallo degl degli dei del dell della delle dello di dov dove e ebbe ebbero ebbi ed era erano eravamo eravate eri ero essendo faccia facciamo facciano facciate faccio facemmo facendo facesse facessero facessi facessimo faceste facesti faceva facevamo facevano facevate facevi facevo fai fanno farai faranno farebbe farebbero farei faremmo faremo fareste faresti farete farà farò fece fecero feci fosse fossero fossi fossimo foste fosti fu fui fummo furono gli ha hai hanno ho i il in io l la le lei li lo loro lui ma mi mia mie miei mio ne negl negli nei nel nell nella nelle nello noi non nostra nostre nostri nostro o per perché più quale quanta quante quanti quanto quella quelle quelli quello questa queste questi questo sarai saranno sarebbe sarebbero sarei saremmo saremo sareste saresti sarete sarà sarò se sei si sia siamo siano siate siete sono sta stai stando stanno starai staranno starebbe starebbero starei staremmo staremo stareste staresti starete starà starò stava stavamo stavano stavate stavi stavo stemmo stesse stessero stessi stessimo steste stesti stette stettero stetti stia stiamo stiano stiate sto su sua sue sugl sugli sui sul sull sulla sulle sullo suo suoi ti tra tu tua tue tuo tuoi tutti tutto un una uno vi voi vostra vostre vostri vostro è".split(" ")),e.Pipeline.registerFunction(e.it.stopWordFilter,"stopWordFilter-it")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.ja.min.js b/assets/javascripts/lunr/min/lunr.ja.min.js new file mode 100644 index 00000000..5f254ebe --- /dev/null +++ 
b/assets/javascripts/lunr/min/lunr.ja.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var r="2"==e.version[0];e.ja=function(){this.pipeline.reset(),this.pipeline.add(e.ja.trimmer,e.ja.stopWordFilter,e.ja.stemmer),r?this.tokenizer=e.ja.tokenizer:(e.tokenizer&&(e.tokenizer=e.ja.tokenizer),this.tokenizerFn&&(this.tokenizerFn=e.ja.tokenizer))};var t=new e.TinySegmenter;e.ja.tokenizer=function(i){var n,o,s,p,a,u,m,l,c,f;if(!arguments.length||null==i||void 0==i)return[];if(Array.isArray(i))return i.map(function(t){return r?new e.Token(t.toLowerCase()):t.toLowerCase()});for(o=i.toString().toLowerCase().replace(/^\s+/,""),n=o.length-1;n>=0;n--)if(/\S/.test(o.charAt(n))){o=o.substring(0,n+1);break}for(a=[],s=o.length,c=0,l=0;c<=s;c++)if(u=o.charAt(c),m=c-l,u.match(/\s/)||c==s){if(m>0)for(p=t.segment(o.slice(l,c)).filter(function(e){return!!e}),f=l,n=0;n=C.limit)break;C.cursor++;continue}break}for(C.cursor=o,C.bra=o,C.eq_s(1,"y")?(C.ket=C.cursor,C.slice_from("Y")):C.cursor=o;;)if(e=C.cursor,C.in_grouping(q,97,232)){if(i=C.cursor,C.bra=i,C.eq_s(1,"i"))C.ket=C.cursor,C.in_grouping(q,97,232)&&(C.slice_from("I"),C.cursor=e);else if(C.cursor=i,C.eq_s(1,"y"))C.ket=C.cursor,C.slice_from("Y"),C.cursor=e;else if(n(e))break}else if(n(e))break}function n(r){return C.cursor=r,r>=C.limit||(C.cursor++,!1)}function o(){_=C.limit,d=_,t()||(_=C.cursor,_<3&&(_=3),t()||(d=C.cursor))}function t(){for(;!C.in_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}for(;!C.out_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}return!1}function s(){for(var r;;)if(C.bra=C.cursor,r=C.find_among(p,3))switch(C.ket=C.cursor,r){case 1:C.slice_from("y");break;case 2:C.slice_from("i");break;case 3:if(C.cursor>=C.limit)return;C.cursor++}}function u(){return _<=C.cursor}function c(){return d<=C.cursor}function a(){var r=C.limit-C.cursor;C.find_among_b(g,3)&&(C.cursor=C.limit-r,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del()))}function l(){var r;w=!1,C.ket=C.cursor,C.eq_s_b(1,"e")&&(C.bra=C.cursor,u()&&(r=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-r,C.slice_del(),w=!0,a())))}function m(){var r;u()&&(r=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-r,C.eq_s_b(3,"gem")||(C.cursor=C.limit-r,C.slice_del(),a())))}function f(){var r,e,i,n,o,t,s=C.limit-C.cursor;if(C.ket=C.cursor,r=C.find_among_b(h,5))switch(C.bra=C.cursor,r){case 1:u()&&C.slice_from("heid");break;case 2:m();break;case 3:u()&&C.out_grouping_b(j,97,232)&&C.slice_del()}if(C.cursor=C.limit-s,l(),C.cursor=C.limit-s,C.ket=C.cursor,C.eq_s_b(4,"heid")&&(C.bra=C.cursor,c()&&(e=C.limit-C.cursor,C.eq_s_b(1,"c")||(C.cursor=C.limit-e,C.slice_del(),C.ket=C.cursor,C.eq_s_b(2,"en")&&(C.bra=C.cursor,m())))),C.cursor=C.limit-s,C.ket=C.cursor,r=C.find_among_b(k,6))switch(C.bra=C.cursor,r){case 1:if(c()){if(C.slice_del(),i=C.limit-C.cursor,C.ket=C.cursor,C.eq_s_b(2,"ig")&&(C.bra=C.cursor,c()&&(n=C.limit-C.cursor,!C.eq_s_b(1,"e")))){C.cursor=C.limit-n,C.slice_del();break}C.cursor=C.limit-i,a()}break;case 2:c()&&(o=C.limit-C.cursor,C.eq_s_b(1,"e")||(C.cursor=C.limit-o,C.slice_del()));break;case 
3:c()&&(C.slice_del(),l());break;case 4:c()&&C.slice_del();break;case 5:c()&&w&&C.slice_del()}C.cursor=C.limit-s,C.out_grouping_b(z,73,232)&&(t=C.limit-C.cursor,C.find_among_b(v,4)&&C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-t,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del())))}var d,_,w,b=[new e("",-1,6),new e("á",0,1),new e("ä",0,1),new e("é",0,2),new e("ë",0,2),new e("í",0,3),new e("ï",0,3),new e("ó",0,4),new e("ö",0,4),new e("ú",0,5),new e("ü",0,5)],p=[new e("",-1,3),new e("I",0,2),new e("Y",0,1)],g=[new e("dd",-1,-1),new e("kk",-1,-1),new e("tt",-1,-1)],h=[new e("ene",-1,2),new e("se",-1,3),new e("en",-1,2),new e("heden",2,1),new e("s",-1,3)],k=[new e("end",-1,1),new e("ig",-1,2),new e("ing",-1,1),new e("lijk",-1,3),new e("baar",-1,4),new e("bar",-1,5)],v=[new e("aa",-1,-1),new e("ee",-1,-1),new e("oo",-1,-1),new e("uu",-1,-1)],q=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],z=[1,0,0,17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],j=[17,67,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],C=new i;this.setCurrent=function(r){C.setCurrent(r)},this.getCurrent=function(){return C.getCurrent()},this.stem=function(){var e=C.cursor;return r(),C.cursor=e,o(),C.limit_backward=e,C.cursor=C.limit,f(),C.cursor=C.limit_backward,s(),!0}};return function(r){return"function"==typeof r.update?r.update(function(r){return n.setCurrent(r),n.stem(),n.getCurrent()}):(n.setCurrent(r),n.stem(),n.getCurrent())}}(),r.Pipeline.registerFunction(r.nl.stemmer,"stemmer-nl"),r.nl.stopWordFilter=r.generateStopWordFilter(" aan al alles als altijd andere ben bij daar dan dat de der deze die dit doch doen door dus een eens en er ge geen geweest haar had heb hebben heeft hem het hier hij hoe hun iemand iets ik in is ja je kan kon kunnen maar me meer men met mij mijn moet na naar niet niets nog nu of om omdat onder ons ook op over reeds te tegen toch toen tot u uit uw van veel voor want waren was wat werd wezen wie wil worden wordt zal ze zelf zich zij zijn zo zonder zou".split(" ")),r.Pipeline.registerFunction(r.nl.stopWordFilter,"stopWordFilter-nl")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.no.min.js b/assets/javascripts/lunr/min/lunr.no.min.js new file mode 100644 index 00000000..92bc7e4e --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.no.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Norwegian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.no=function(){this.pipeline.reset(),this.pipeline.add(e.no.trimmer,e.no.stopWordFilter,e.no.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.no.stemmer))},e.no.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.no.trimmer=e.trimmerSupport.generateTrimmer(e.no.wordCharacters),e.Pipeline.registerFunction(e.no.trimmer,"trimmer-no"),e.no.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(){var e,r=w.cursor+3;if(a=w.limit,0<=r||r<=w.limit){for(s=r;;){if(e=w.cursor,w.in_grouping(d,97,248)){w.cursor=e;break}if(e>=w.limit)return;w.cursor=e+1}for(;!w.out_grouping(d,97,248);){if(w.cursor>=w.limit)return;w.cursor++}a=w.cursor,a=a&&(r=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,e=w.find_among_b(m,29),w.limit_backward=r,e))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:n=w.limit-w.cursor,w.in_grouping_b(c,98,122)?w.slice_del():(w.cursor=w.limit-n,w.eq_s_b(1,"k")&&w.out_grouping_b(d,97,248)&&w.slice_del());break;case 3:w.slice_from("er")}}function t(){var e,r=w.limit-w.cursor;w.cursor>=a&&(e=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,w.find_among_b(u,2)?(w.bra=w.cursor,w.limit_backward=e,w.cursor=w.limit-r,w.cursor>w.limit_backward&&(w.cursor--,w.bra=w.cursor,w.slice_del())):w.limit_backward=e)}function o(){var e,r;w.cursor>=a&&(r=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,e=w.find_among_b(l,11),e?(w.bra=w.cursor,w.limit_backward=r,1==e&&w.slice_del()):w.limit_backward=r)}var s,a,m=[new r("a",-1,1),new r("e",-1,1),new r("ede",1,1),new r("ande",1,1),new r("ende",1,1),new r("ane",1,1),new r("ene",1,1),new r("hetene",6,1),new r("erte",1,3),new r("en",-1,1),new r("heten",9,1),new r("ar",-1,1),new r("er",-1,1),new r("heter",12,1),new r("s",-1,2),new r("as",14,1),new r("es",14,1),new r("edes",16,1),new r("endes",16,1),new r("enes",16,1),new r("hetenes",19,1),new r("ens",14,1),new r("hetens",21,1),new r("ers",14,1),new r("ets",14,1),new r("et",-1,1),new r("het",25,1),new r("ert",-1,3),new r("ast",-1,1)],u=[new r("dt",-1,-1),new r("vt",-1,-1)],l=[new r("leg",-1,1),new r("eleg",0,1),new r("ig",-1,1),new r("eig",2,1),new r("lig",2,1),new r("elig",4,1),new r("els",-1,1),new r("lov",-1,1),new r("elov",7,1),new r("slov",7,1),new r("hetslov",9,1)],d=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,48,0,128],c=[119,125,149,1],w=new n;this.setCurrent=function(e){w.setCurrent(e)},this.getCurrent=function(){return w.getCurrent()},this.stem=function(){var r=w.cursor;return e(),w.limit_backward=r,w.cursor=w.limit,i(),w.cursor=w.limit,t(),w.cursor=w.limit,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.no.stemmer,"stemmer-no"),e.no.stopWordFilter=e.generateStopWordFilter("alle at av bare begge ble blei bli blir blitt både båe da de deg dei deim deira deires dem den denne der dere deres det dette di din disse ditt du dykk dykkar då eg ein eit eitt eller elles en enn er et ett etter for fordi fra før ha hadde han hans har hennar henne hennes her hjå ho hoe honom hoss hossen hun hva hvem hver hvilke hvilken hvis hvor hvordan hvorfor i ikke ikkje ikkje ingen ingi inkje inn inni ja jeg kan kom korleis korso kun kunne kva kvar kvarhelst kven kvi kvifor man mange me med medan meg meget mellom men mi min mine mitt mot mykje ned no noe noen 
noka noko nokon nokor nokre nå når og også om opp oss over på samme seg selv si si sia sidan siden sin sine sitt sjøl skal skulle slik so som som somme somt så sånn til um upp ut uten var vart varte ved vere verte vi vil ville vore vors vort vår være være vært å".split(" ")),e.Pipeline.registerFunction(e.no.stopWordFilter,"stopWordFilter-no")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.pt.min.js b/assets/javascripts/lunr/min/lunr.pt.min.js new file mode 100644 index 00000000..6c16996d --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.pt.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Portuguese` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.pt=function(){this.pipeline.reset(),this.pipeline.add(e.pt.trimmer,e.pt.stopWordFilter,e.pt.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.pt.stemmer))},e.pt.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.pt.trimmer=e.trimmerSupport.generateTrimmer(e.pt.wordCharacters),e.Pipeline.registerFunction(e.pt.trimmer,"trimmer-pt"),e.pt.stemmer=function(){var r=e.stemmerSupport.Among,s=e.stemmerSupport.SnowballProgram,n=new function(){function e(){for(var e;;){if(z.bra=z.cursor,e=z.find_among(k,3))switch(z.ket=z.cursor,e){case 1:z.slice_from("a~");continue;case 2:z.slice_from("o~");continue;case 3:if(z.cursor>=z.limit)break;z.cursor++;continue}break}}function n(){if(z.out_grouping(y,97,250)){for(;!z.in_grouping(y,97,250);){if(z.cursor>=z.limit)return!0;z.cursor++}return!1}return!0}function i(){if(z.in_grouping(y,97,250))for(;!z.out_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}return g=z.cursor,!0}function o(){var e,r,s=z.cursor;if(z.in_grouping(y,97,250))if(e=z.cursor,n()){if(z.cursor=e,i())return}else g=z.cursor;if(z.cursor=s,z.out_grouping(y,97,250)){if(r=z.cursor,n()){if(z.cursor=r,!z.in_grouping(y,97,250)||z.cursor>=z.limit)return;z.cursor++}g=z.cursor}}function t(){for(;!z.in_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}for(;!z.out_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}return!0}function a(){var e=z.cursor;g=z.limit,b=g,h=g,o(),z.cursor=e,t()&&(b=z.cursor,t()&&(h=z.cursor))}function u(){for(var e;;){if(z.bra=z.cursor,e=z.find_among(q,3))switch(z.ket=z.cursor,e){case 1:z.slice_from("ã");continue;case 2:z.slice_from("õ");continue;case 3:if(z.cursor>=z.limit)break;z.cursor++;continue}break}}function w(){return g<=z.cursor}function m(){return b<=z.cursor}function c(){return h<=z.cursor}function l(){var e;if(z.ket=z.cursor,!(e=z.find_among_b(F,45)))return!1;switch(z.bra=z.cursor,e){case 1:if(!c())return!1;z.slice_del();break;case 2:if(!c())return!1;z.slice_from("log");break;case 3:if(!c())return!1;z.slice_from("u");break;case 4:if(!c())return!1;z.slice_from("ente");break;case 
5:if(!m())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(j,4),e&&(z.bra=z.cursor,c()&&(z.slice_del(),1==e&&(z.ket=z.cursor,z.eq_s_b(2,"at")&&(z.bra=z.cursor,c()&&z.slice_del()))));break;case 6:if(!c())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(C,3),e&&(z.bra=z.cursor,1==e&&c()&&z.slice_del());break;case 7:if(!c())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(P,3),e&&(z.bra=z.cursor,1==e&&c()&&z.slice_del());break;case 8:if(!c())return!1;z.slice_del(),z.ket=z.cursor,z.eq_s_b(2,"at")&&(z.bra=z.cursor,c()&&z.slice_del());break;case 9:if(!w()||!z.eq_s_b(1,"e"))return!1;z.slice_from("ir")}return!0}function f(){var e,r;if(z.cursor>=g){if(r=z.limit_backward,z.limit_backward=g,z.ket=z.cursor,e=z.find_among_b(S,120))return z.bra=z.cursor,1==e&&z.slice_del(),z.limit_backward=r,!0;z.limit_backward=r}return!1}function d(){var e;z.ket=z.cursor,(e=z.find_among_b(W,7))&&(z.bra=z.cursor,1==e&&w()&&z.slice_del())}function v(e,r){if(z.eq_s_b(1,e)){z.bra=z.cursor;var s=z.limit-z.cursor;if(z.eq_s_b(1,r))return z.cursor=z.limit-s,w()&&z.slice_del(),!1}return!0}function p(){var e;if(z.ket=z.cursor,e=z.find_among_b(L,4))switch(z.bra=z.cursor,e){case 1:w()&&(z.slice_del(),z.ket=z.cursor,z.limit-z.cursor,v("u","g")&&v("i","c"));break;case 2:z.slice_from("c")}}function _(){if(!l()&&(z.cursor=z.limit,!f()))return z.cursor=z.limit,void d();z.cursor=z.limit,z.ket=z.cursor,z.eq_s_b(1,"i")&&(z.bra=z.cursor,z.eq_s_b(1,"c")&&(z.cursor=z.limit,w()&&z.slice_del()))}var h,b,g,k=[new r("",-1,3),new r("ã",0,1),new r("õ",0,2)],q=[new r("",-1,3),new r("a~",0,1),new r("o~",0,2)],j=[new r("ic",-1,-1),new r("ad",-1,-1),new r("os",-1,-1),new r("iv",-1,1)],C=[new r("ante",-1,1),new r("avel",-1,1),new r("ível",-1,1)],P=[new r("ic",-1,1),new r("abil",-1,1),new r("iv",-1,1)],F=[new r("ica",-1,1),new r("ância",-1,1),new r("ência",-1,4),new r("ira",-1,9),new r("adora",-1,1),new r("osa",-1,1),new r("ista",-1,1),new r("iva",-1,8),new r("eza",-1,1),new r("logía",-1,2),new r("idade",-1,7),new r("ante",-1,1),new r("mente",-1,6),new r("amente",12,5),new r("ável",-1,1),new r("ível",-1,1),new r("ución",-1,3),new r("ico",-1,1),new r("ismo",-1,1),new r("oso",-1,1),new r("amento",-1,1),new r("imento",-1,1),new r("ivo",-1,8),new r("aça~o",-1,1),new r("ador",-1,1),new r("icas",-1,1),new r("ências",-1,4),new r("iras",-1,9),new r("adoras",-1,1),new r("osas",-1,1),new r("istas",-1,1),new r("ivas",-1,8),new r("ezas",-1,1),new r("logías",-1,2),new r("idades",-1,7),new r("uciones",-1,3),new r("adores",-1,1),new r("antes",-1,1),new r("aço~es",-1,1),new r("icos",-1,1),new r("ismos",-1,1),new r("osos",-1,1),new r("amentos",-1,1),new r("imentos",-1,1),new r("ivos",-1,8)],S=[new r("ada",-1,1),new r("ida",-1,1),new r("ia",-1,1),new r("aria",2,1),new r("eria",2,1),new r("iria",2,1),new r("ara",-1,1),new r("era",-1,1),new r("ira",-1,1),new r("ava",-1,1),new r("asse",-1,1),new r("esse",-1,1),new r("isse",-1,1),new r("aste",-1,1),new r("este",-1,1),new r("iste",-1,1),new r("ei",-1,1),new r("arei",16,1),new r("erei",16,1),new r("irei",16,1),new r("am",-1,1),new r("iam",20,1),new r("ariam",21,1),new r("eriam",21,1),new r("iriam",21,1),new r("aram",20,1),new r("eram",20,1),new r("iram",20,1),new r("avam",20,1),new r("em",-1,1),new r("arem",29,1),new r("erem",29,1),new r("irem",29,1),new r("assem",29,1),new r("essem",29,1),new r("issem",29,1),new r("ado",-1,1),new r("ido",-1,1),new r("ando",-1,1),new r("endo",-1,1),new r("indo",-1,1),new r("ara~o",-1,1),new r("era~o",-1,1),new r("ira~o",-1,1),new r("ar",-1,1),new r("er",-1,1),new 
r("ir",-1,1),new r("as",-1,1),new r("adas",47,1),new r("idas",47,1),new r("ias",47,1),new r("arias",50,1),new r("erias",50,1),new r("irias",50,1),new r("aras",47,1),new r("eras",47,1),new r("iras",47,1),new r("avas",47,1),new r("es",-1,1),new r("ardes",58,1),new r("erdes",58,1),new r("irdes",58,1),new r("ares",58,1),new r("eres",58,1),new r("ires",58,1),new r("asses",58,1),new r("esses",58,1),new r("isses",58,1),new r("astes",58,1),new r("estes",58,1),new r("istes",58,1),new r("is",-1,1),new r("ais",71,1),new r("eis",71,1),new r("areis",73,1),new r("ereis",73,1),new r("ireis",73,1),new r("áreis",73,1),new r("éreis",73,1),new r("íreis",73,1),new r("ásseis",73,1),new r("ésseis",73,1),new r("ísseis",73,1),new r("áveis",73,1),new r("íeis",73,1),new r("aríeis",84,1),new r("eríeis",84,1),new r("iríeis",84,1),new r("ados",-1,1),new r("idos",-1,1),new r("amos",-1,1),new r("áramos",90,1),new r("éramos",90,1),new r("íramos",90,1),new r("ávamos",90,1),new r("íamos",90,1),new r("aríamos",95,1),new r("eríamos",95,1),new r("iríamos",95,1),new r("emos",-1,1),new r("aremos",99,1),new r("eremos",99,1),new r("iremos",99,1),new r("ássemos",99,1),new r("êssemos",99,1),new r("íssemos",99,1),new r("imos",-1,1),new r("armos",-1,1),new r("ermos",-1,1),new r("irmos",-1,1),new r("ámos",-1,1),new r("arás",-1,1),new r("erás",-1,1),new r("irás",-1,1),new r("eu",-1,1),new r("iu",-1,1),new r("ou",-1,1),new r("ará",-1,1),new r("erá",-1,1),new r("irá",-1,1)],W=[new r("a",-1,1),new r("i",-1,1),new r("o",-1,1),new r("os",-1,1),new r("á",-1,1),new r("í",-1,1),new r("ó",-1,1)],L=[new r("e",-1,1),new r("ç",-1,2),new r("é",-1,1),new r("ê",-1,1)],y=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,3,19,12,2],z=new s;this.setCurrent=function(e){z.setCurrent(e)},this.getCurrent=function(){return z.getCurrent()},this.stem=function(){var r=z.cursor;return e(),z.cursor=r,a(),z.limit_backward=r,z.cursor=z.limit,_(),z.cursor=z.limit,p(),z.cursor=z.limit_backward,u(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.pt.stemmer,"stemmer-pt"),e.pt.stopWordFilter=e.generateStopWordFilter("a ao aos aquela aquelas aquele aqueles aquilo as até com como da das de dela delas dele deles depois do dos e ela elas ele eles em entre era eram essa essas esse esses esta estamos estas estava estavam este esteja estejam estejamos estes esteve estive estivemos estiver estivera estiveram estiverem estivermos estivesse estivessem estivéramos estivéssemos estou está estávamos estão eu foi fomos for fora foram forem formos fosse fossem fui fôramos fôssemos haja hajam hajamos havemos hei houve houvemos houver houvera houveram houverei houverem houveremos houveria houveriam houvermos houverá houverão houveríamos houvesse houvessem houvéramos houvéssemos há hão isso isto já lhe lhes mais mas me mesmo meu meus minha minhas muito na nas nem no nos nossa nossas nosso nossos num numa não nós o os ou para pela pelas pelo pelos por qual quando que quem se seja sejam sejamos sem serei seremos seria seriam será serão seríamos seu seus somos sou sua suas são só também te tem temos tenha tenham tenhamos tenho terei teremos teria teriam terá terão teríamos teu teus teve tinha tinham tive tivemos tiver tivera tiveram tiverem tivermos tivesse tivessem tivéramos tivéssemos tu tua tuas tém tínhamos um uma você vocês vos à às éramos".split(" ")),e.Pipeline.registerFunction(e.pt.stopWordFilter,"stopWordFilter-pt")}}); \ No newline at end of 
file diff --git a/assets/javascripts/lunr/min/lunr.ro.min.js b/assets/javascripts/lunr/min/lunr.ro.min.js new file mode 100644 index 00000000..72771401 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ro.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Romanian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,i){"function"==typeof define&&define.amd?define(i):"object"==typeof exports?module.exports=i():i()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.ro=function(){this.pipeline.reset(),this.pipeline.add(e.ro.trimmer,e.ro.stopWordFilter,e.ro.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ro.stemmer))},e.ro.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.ro.trimmer=e.trimmerSupport.generateTrimmer(e.ro.wordCharacters),e.Pipeline.registerFunction(e.ro.trimmer,"trimmer-ro"),e.ro.stemmer=function(){var i=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,n=new function(){function e(e,i){L.eq_s(1,e)&&(L.ket=L.cursor,L.in_grouping(W,97,259)&&L.slice_from(i))}function n(){for(var i,r;;){if(i=L.cursor,L.in_grouping(W,97,259)&&(r=L.cursor,L.bra=r,e("u","U"),L.cursor=r,e("i","I")),L.cursor=i,L.cursor>=L.limit)break;L.cursor++}}function t(){if(L.out_grouping(W,97,259)){for(;!L.in_grouping(W,97,259);){if(L.cursor>=L.limit)return!0;L.cursor++}return!1}return!0}function a(){if(L.in_grouping(W,97,259))for(;!L.out_grouping(W,97,259);){if(L.cursor>=L.limit)return!0;L.cursor++}return!1}function o(){var e,i,r=L.cursor;if(L.in_grouping(W,97,259)){if(e=L.cursor,!t())return void(h=L.cursor);if(L.cursor=e,!a())return void(h=L.cursor)}L.cursor=r,L.out_grouping(W,97,259)&&(i=L.cursor,t()&&(L.cursor=i,L.in_grouping(W,97,259)&&L.cursor=L.limit)return!1;L.cursor++}for(;!L.out_grouping(W,97,259);){if(L.cursor>=L.limit)return!1;L.cursor++}return!0}function c(){var e=L.cursor;h=L.limit,k=h,g=h,o(),L.cursor=e,u()&&(k=L.cursor,u()&&(g=L.cursor))}function s(){for(var e;;){if(L.bra=L.cursor,e=L.find_among(z,3))switch(L.ket=L.cursor,e){case 1:L.slice_from("i");continue;case 2:L.slice_from("u");continue;case 3:if(L.cursor>=L.limit)break;L.cursor++;continue}break}}function w(){return h<=L.cursor}function m(){return k<=L.cursor}function l(){return g<=L.cursor}function f(){var e,i;if(L.ket=L.cursor,(e=L.find_among_b(C,16))&&(L.bra=L.cursor,m()))switch(e){case 1:L.slice_del();break;case 2:L.slice_from("a");break;case 3:L.slice_from("e");break;case 4:L.slice_from("i");break;case 5:i=L.limit-L.cursor,L.eq_s_b(2,"ab")||(L.cursor=L.limit-i,L.slice_from("i"));break;case 6:L.slice_from("at");break;case 7:L.slice_from("aţi")}}function p(){var e,i=L.limit-L.cursor;if(L.ket=L.cursor,(e=L.find_among_b(P,46))&&(L.bra=L.cursor,m())){switch(e){case 1:L.slice_from("abil");break;case 2:L.slice_from("ibil");break;case 3:L.slice_from("iv");break;case 4:L.slice_from("ic");break;case 5:L.slice_from("at");break;case 6:L.slice_from("it")}return _=!0,L.cursor=L.limit-i,!0}return!1}function d(){var 
e,i;for(_=!1;;)if(i=L.limit-L.cursor,!p()){L.cursor=L.limit-i;break}if(L.ket=L.cursor,(e=L.find_among_b(F,62))&&(L.bra=L.cursor,l())){switch(e){case 1:L.slice_del();break;case 2:L.eq_s_b(1,"ţ")&&(L.bra=L.cursor,L.slice_from("t"));break;case 3:L.slice_from("ist")}_=!0}}function b(){var e,i,r;if(L.cursor>=h){if(i=L.limit_backward,L.limit_backward=h,L.ket=L.cursor,e=L.find_among_b(q,94))switch(L.bra=L.cursor,e){case 1:if(r=L.limit-L.cursor,!L.out_grouping_b(W,97,259)&&(L.cursor=L.limit-r,!L.eq_s_b(1,"u")))break;case 2:L.slice_del()}L.limit_backward=i}}function v(){var e;L.ket=L.cursor,(e=L.find_among_b(S,5))&&(L.bra=L.cursor,w()&&1==e&&L.slice_del())}var _,g,k,h,z=[new i("",-1,3),new i("I",0,1),new i("U",0,2)],C=[new i("ea",-1,3),new i("aţia",-1,7),new i("aua",-1,2),new i("iua",-1,4),new i("aţie",-1,7),new i("ele",-1,3),new i("ile",-1,5),new i("iile",6,4),new i("iei",-1,4),new i("atei",-1,6),new i("ii",-1,4),new i("ului",-1,1),new i("ul",-1,1),new i("elor",-1,3),new i("ilor",-1,4),new i("iilor",14,4)],P=[new i("icala",-1,4),new i("iciva",-1,4),new i("ativa",-1,5),new i("itiva",-1,6),new i("icale",-1,4),new i("aţiune",-1,5),new i("iţiune",-1,6),new i("atoare",-1,5),new i("itoare",-1,6),new i("ătoare",-1,5),new i("icitate",-1,4),new i("abilitate",-1,1),new i("ibilitate",-1,2),new i("ivitate",-1,3),new i("icive",-1,4),new i("ative",-1,5),new i("itive",-1,6),new i("icali",-1,4),new i("atori",-1,5),new i("icatori",18,4),new i("itori",-1,6),new i("ători",-1,5),new i("icitati",-1,4),new i("abilitati",-1,1),new i("ivitati",-1,3),new i("icivi",-1,4),new i("ativi",-1,5),new i("itivi",-1,6),new i("icităi",-1,4),new i("abilităi",-1,1),new i("ivităi",-1,3),new i("icităţi",-1,4),new i("abilităţi",-1,1),new i("ivităţi",-1,3),new i("ical",-1,4),new i("ator",-1,5),new i("icator",35,4),new i("itor",-1,6),new i("ător",-1,5),new i("iciv",-1,4),new i("ativ",-1,5),new i("itiv",-1,6),new i("icală",-1,4),new i("icivă",-1,4),new i("ativă",-1,5),new i("itivă",-1,6)],F=[new i("ica",-1,1),new i("abila",-1,1),new i("ibila",-1,1),new i("oasa",-1,1),new i("ata",-1,1),new i("ita",-1,1),new i("anta",-1,1),new i("ista",-1,3),new i("uta",-1,1),new i("iva",-1,1),new i("ic",-1,1),new i("ice",-1,1),new i("abile",-1,1),new i("ibile",-1,1),new i("isme",-1,3),new i("iune",-1,2),new i("oase",-1,1),new i("ate",-1,1),new i("itate",17,1),new i("ite",-1,1),new i("ante",-1,1),new i("iste",-1,3),new i("ute",-1,1),new i("ive",-1,1),new i("ici",-1,1),new i("abili",-1,1),new i("ibili",-1,1),new i("iuni",-1,2),new i("atori",-1,1),new i("osi",-1,1),new i("ati",-1,1),new i("itati",30,1),new i("iti",-1,1),new i("anti",-1,1),new i("isti",-1,3),new i("uti",-1,1),new i("işti",-1,3),new i("ivi",-1,1),new i("ităi",-1,1),new i("oşi",-1,1),new i("ităţi",-1,1),new i("abil",-1,1),new i("ibil",-1,1),new i("ism",-1,3),new i("ator",-1,1),new i("os",-1,1),new i("at",-1,1),new i("it",-1,1),new i("ant",-1,1),new i("ist",-1,3),new i("ut",-1,1),new i("iv",-1,1),new i("ică",-1,1),new i("abilă",-1,1),new i("ibilă",-1,1),new i("oasă",-1,1),new i("ată",-1,1),new i("ită",-1,1),new i("antă",-1,1),new i("istă",-1,3),new i("ută",-1,1),new i("ivă",-1,1)],q=[new i("ea",-1,1),new i("ia",-1,1),new i("esc",-1,1),new i("ăsc",-1,1),new i("ind",-1,1),new i("ând",-1,1),new i("are",-1,1),new i("ere",-1,1),new i("ire",-1,1),new i("âre",-1,1),new i("se",-1,2),new i("ase",10,1),new i("sese",10,2),new i("ise",10,1),new i("use",10,1),new i("âse",10,1),new i("eşte",-1,1),new i("ăşte",-1,1),new i("eze",-1,1),new i("ai",-1,1),new i("eai",19,1),new i("iai",19,1),new i("sei",-1,2),new 
i("eşti",-1,1),new i("ăşti",-1,1),new i("ui",-1,1),new i("ezi",-1,1),new i("âi",-1,1),new i("aşi",-1,1),new i("seşi",-1,2),new i("aseşi",29,1),new i("seseşi",29,2),new i("iseşi",29,1),new i("useşi",29,1),new i("âseşi",29,1),new i("işi",-1,1),new i("uşi",-1,1),new i("âşi",-1,1),new i("aţi",-1,2),new i("eaţi",38,1),new i("iaţi",38,1),new i("eţi",-1,2),new i("iţi",-1,2),new i("âţi",-1,2),new i("arăţi",-1,1),new i("serăţi",-1,2),new i("aserăţi",45,1),new i("seserăţi",45,2),new i("iserăţi",45,1),new i("userăţi",45,1),new i("âserăţi",45,1),new i("irăţi",-1,1),new i("urăţi",-1,1),new i("ârăţi",-1,1),new i("am",-1,1),new i("eam",54,1),new i("iam",54,1),new i("em",-1,2),new i("asem",57,1),new i("sesem",57,2),new i("isem",57,1),new i("usem",57,1),new i("âsem",57,1),new i("im",-1,2),new i("âm",-1,2),new i("ăm",-1,2),new i("arăm",65,1),new i("serăm",65,2),new i("aserăm",67,1),new i("seserăm",67,2),new i("iserăm",67,1),new i("userăm",67,1),new i("âserăm",67,1),new i("irăm",65,1),new i("urăm",65,1),new i("ârăm",65,1),new i("au",-1,1),new i("eau",76,1),new i("iau",76,1),new i("indu",-1,1),new i("ându",-1,1),new i("ez",-1,1),new i("ească",-1,1),new i("ară",-1,1),new i("seră",-1,2),new i("aseră",84,1),new i("seseră",84,2),new i("iseră",84,1),new i("useră",84,1),new i("âseră",84,1),new i("iră",-1,1),new i("ură",-1,1),new i("âră",-1,1),new i("ează",-1,1)],S=[new i("a",-1,1),new i("e",-1,1),new i("ie",1,1),new i("i",-1,1),new i("ă",-1,1)],W=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,2,32,0,0,4],L=new r;this.setCurrent=function(e){L.setCurrent(e)},this.getCurrent=function(){return L.getCurrent()},this.stem=function(){var e=L.cursor;return n(),L.cursor=e,c(),L.limit_backward=e,L.cursor=L.limit,f(),L.cursor=L.limit,d(),L.cursor=L.limit,_||(L.cursor=L.limit,b(),L.cursor=L.limit),v(),L.cursor=L.limit_backward,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.ro.stemmer,"stemmer-ro"),e.ro.stopWordFilter=e.generateStopWordFilter("acea aceasta această aceea acei aceia acel acela acele acelea acest acesta aceste acestea aceşti aceştia acolo acord acum ai aia aibă aici al ale alea altceva altcineva am ar are asemenea asta astea astăzi asupra au avea avem aveţi azi aş aşadar aţi bine bucur bună ca care caut ce cel ceva chiar cinci cine cineva contra cu cum cumva curând curînd când cât câte câtva câţi cînd cît cîte cîtva cîţi că căci cărei căror cărui către da dacă dar datorită dată dau de deci deja deoarece departe deşi din dinaintea dintr- dintre doi doilea două drept după dă ea ei el ele eram este eu eşti face fata fi fie fiecare fii fim fiu fiţi frumos fără graţie halbă iar ieri la le li lor lui lângă lîngă mai mea mei mele mereu meu mi mie mine mult multă mulţi mulţumesc mâine mîine mă ne nevoie nici nicăieri nimeni nimeri nimic nişte noastre noastră noi noroc nostru nouă noştri nu opt ori oricare orice oricine oricum oricând oricât oricînd oricît oriunde patra patru patrulea pe pentru peste pic poate pot prea prima primul prin puţin puţina puţină până pînă rog sa sale sau se spate spre sub sunt suntem sunteţi sută sînt sîntem sînteţi să săi său ta tale te timp tine toate toată tot totuşi toţi trei treia treilea tu tăi tău un una unde undeva unei uneia unele uneori unii unor unora unu unui unuia unul vi voastre voastră voi vostru vouă voştri vreme vreo vreun vă zece zero zi zice îi îl îmi împotriva în înainte înaintea încotro încât încît între întrucât întrucît îţi 
ăla ălea ăsta ăstea ăştia şapte şase şi ştiu ţi ţie".split(" ")),e.Pipeline.registerFunction(e.ro.stopWordFilter,"stopWordFilter-ro")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.ru.min.js b/assets/javascripts/lunr/min/lunr.ru.min.js new file mode 100644 index 00000000..186cc485 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ru.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Russian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,n){"function"==typeof define&&define.amd?define(n):"object"==typeof exports?module.exports=n():n()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.ru=function(){this.pipeline.reset(),this.pipeline.add(e.ru.trimmer,e.ru.stopWordFilter,e.ru.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ru.stemmer))},e.ru.wordCharacters="Ѐ-҄҇-ԯᴫᵸⷠ-ⷿꙀ-ꚟ︮︯",e.ru.trimmer=e.trimmerSupport.generateTrimmer(e.ru.wordCharacters),e.Pipeline.registerFunction(e.ru.trimmer,"trimmer-ru"),e.ru.stemmer=function(){var n=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,t=new function(){function e(){for(;!W.in_grouping(S,1072,1103);){if(W.cursor>=W.limit)return!1;W.cursor++}return!0}function t(){for(;!W.out_grouping(S,1072,1103);){if(W.cursor>=W.limit)return!1;W.cursor++}return!0}function w(){b=W.limit,_=b,e()&&(b=W.cursor,t()&&e()&&t()&&(_=W.cursor))}function i(){return _<=W.cursor}function u(e,n){var r,t;if(W.ket=W.cursor,r=W.find_among_b(e,n)){switch(W.bra=W.cursor,r){case 1:if(t=W.limit-W.cursor,!W.eq_s_b(1,"а")&&(W.cursor=W.limit-t,!W.eq_s_b(1,"я")))return!1;case 2:W.slice_del()}return!0}return!1}function o(){return u(h,9)}function s(e,n){var r;return W.ket=W.cursor,!!(r=W.find_among_b(e,n))&&(W.bra=W.cursor,1==r&&W.slice_del(),!0)}function c(){return s(g,26)}function m(){return!!c()&&(u(C,8),!0)}function f(){return s(k,2)}function l(){return u(P,46)}function a(){s(v,36)}function p(){var e;W.ket=W.cursor,(e=W.find_among_b(F,2))&&(W.bra=W.cursor,i()&&1==e&&W.slice_del())}function d(){var e;if(W.ket=W.cursor,e=W.find_among_b(q,4))switch(W.bra=W.cursor,e){case 1:if(W.slice_del(),W.ket=W.cursor,!W.eq_s_b(1,"н"))break;W.bra=W.cursor;case 2:if(!W.eq_s_b(1,"н"))break;case 3:W.slice_del()}}var _,b,h=[new n("в",-1,1),new n("ив",0,2),new n("ыв",0,2),new n("вши",-1,1),new n("ивши",3,2),new n("ывши",3,2),new n("вшись",-1,1),new n("ившись",6,2),new n("ывшись",6,2)],g=[new n("ее",-1,1),new n("ие",-1,1),new n("ое",-1,1),new n("ые",-1,1),new n("ими",-1,1),new n("ыми",-1,1),new n("ей",-1,1),new n("ий",-1,1),new n("ой",-1,1),new n("ый",-1,1),new n("ем",-1,1),new n("им",-1,1),new n("ом",-1,1),new n("ым",-1,1),new n("его",-1,1),new n("ого",-1,1),new n("ему",-1,1),new n("ому",-1,1),new n("их",-1,1),new n("ых",-1,1),new n("ею",-1,1),new n("ою",-1,1),new n("ую",-1,1),new n("юю",-1,1),new n("ая",-1,1),new n("яя",-1,1)],C=[new n("ем",-1,1),new n("нн",-1,1),new n("вш",-1,1),new n("ивш",2,2),new n("ывш",2,2),new n("щ",-1,1),new n("ющ",5,1),new n("ующ",6,2)],k=[new n("сь",-1,1),new 
n("ся",-1,1)],P=[new n("ла",-1,1),new n("ила",0,2),new n("ыла",0,2),new n("на",-1,1),new n("ена",3,2),new n("ете",-1,1),new n("ите",-1,2),new n("йте",-1,1),new n("ейте",7,2),new n("уйте",7,2),new n("ли",-1,1),new n("или",10,2),new n("ыли",10,2),new n("й",-1,1),new n("ей",13,2),new n("уй",13,2),new n("л",-1,1),new n("ил",16,2),new n("ыл",16,2),new n("ем",-1,1),new n("им",-1,2),new n("ым",-1,2),new n("н",-1,1),new n("ен",22,2),new n("ло",-1,1),new n("ило",24,2),new n("ыло",24,2),new n("но",-1,1),new n("ено",27,2),new n("нно",27,1),new n("ет",-1,1),new n("ует",30,2),new n("ит",-1,2),new n("ыт",-1,2),new n("ют",-1,1),new n("уют",34,2),new n("ят",-1,2),new n("ны",-1,1),new n("ены",37,2),new n("ть",-1,1),new n("ить",39,2),new n("ыть",39,2),new n("ешь",-1,1),new n("ишь",-1,2),new n("ю",-1,2),new n("ую",44,2)],v=[new n("а",-1,1),new n("ев",-1,1),new n("ов",-1,1),new n("е",-1,1),new n("ие",3,1),new n("ье",3,1),new n("и",-1,1),new n("еи",6,1),new n("ии",6,1),new n("ами",6,1),new n("ями",6,1),new n("иями",10,1),new n("й",-1,1),new n("ей",12,1),new n("ией",13,1),new n("ий",12,1),new n("ой",12,1),new n("ам",-1,1),new n("ем",-1,1),new n("ием",18,1),new n("ом",-1,1),new n("ям",-1,1),new n("иям",21,1),new n("о",-1,1),new n("у",-1,1),new n("ах",-1,1),new n("ях",-1,1),new n("иях",26,1),new n("ы",-1,1),new n("ь",-1,1),new n("ю",-1,1),new n("ию",30,1),new n("ью",30,1),new n("я",-1,1),new n("ия",33,1),new n("ья",33,1)],F=[new n("ост",-1,1),new n("ость",-1,1)],q=[new n("ейше",-1,1),new n("н",-1,2),new n("ейш",-1,1),new n("ь",-1,3)],S=[33,65,8,232],W=new r;this.setCurrent=function(e){W.setCurrent(e)},this.getCurrent=function(){return W.getCurrent()},this.stem=function(){return w(),W.cursor=W.limit,!(W.cursor=i&&(e-=i,t[e>>3]&1<<(7&e)))return this.cursor++,!0}return!1},in_grouping_b:function(t,i,s){if(this.cursor>this.limit_backward){var e=r.charCodeAt(this.cursor-1);if(e<=s&&e>=i&&(e-=i,t[e>>3]&1<<(7&e)))return this.cursor--,!0}return!1},out_grouping:function(t,i,s){if(this.cursors||e>3]&1<<(7&e)))return this.cursor++,!0}return!1},out_grouping_b:function(t,i,s){if(this.cursor>this.limit_backward){var e=r.charCodeAt(this.cursor-1);if(e>s||e>3]&1<<(7&e)))return this.cursor--,!0}return!1},eq_s:function(t,i){if(this.limit-this.cursor>1),f=0,l=o0||e==s||c)break;c=!0}}for(;;){var _=t[s];if(o>=_.s_size){if(this.cursor=n+_.s_size,!_.method)return _.result;var b=_.method();if(this.cursor=n+_.s_size,b)return _.result}if((s=_.substring_i)<0)return 0}},find_among_b:function(t,i){for(var s=0,e=i,n=this.cursor,u=this.limit_backward,o=0,h=0,c=!1;;){for(var a=s+(e-s>>1),f=0,l=o=0;m--){if(n-l==u){f=-1;break}if(f=r.charCodeAt(n-1-l)-_.s[m])break;l++}if(f<0?(e=a,h=l):(s=a,o=l),e-s<=1){if(s>0||e==s||c)break;c=!0}}for(;;){var _=t[s];if(o>=_.s_size){if(this.cursor=n-_.s_size,!_.method)return _.result;var b=_.method();if(this.cursor=n-_.s_size,b)return _.result}if((s=_.substring_i)<0)return 0}},replace_s:function(t,i,s){var e=s.length-(i-t),n=r.substring(0,t),u=r.substring(i);return r=n+s+u,this.limit+=e,this.cursor>=i?this.cursor+=e:this.cursor>t&&(this.cursor=t),e},slice_check:function(){if(this.bra<0||this.bra>this.ket||this.ket>this.limit||this.limit>r.length)throw"faulty slice operation"},slice_from:function(r){this.slice_check(),this.replace_s(this.bra,this.ket,r)},slice_del:function(){this.slice_from("")},insert:function(r,t,i){var s=this.replace_s(r,t,i);r<=this.bra&&(this.bra+=s),r<=this.ket&&(this.ket+=s)},slice_to:function(){return this.slice_check(),r.substring(this.bra,this.ket)},eq_v_b:function(r){return 
this.eq_s_b(r.length,r)}}}},r.trimmerSupport={generateTrimmer:function(r){var t=new RegExp("^[^"+r+"]+"),i=new RegExp("[^"+r+"]+$");return function(r){return"function"==typeof r.update?r.update(function(r){return r.replace(t,"").replace(i,"")}):r.replace(t,"").replace(i,"")}}}}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.sv.min.js b/assets/javascripts/lunr/min/lunr.sv.min.js new file mode 100644 index 00000000..3e5eb640 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.sv.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Swedish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.sv=function(){this.pipeline.reset(),this.pipeline.add(e.sv.trimmer,e.sv.stopWordFilter,e.sv.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.sv.stemmer))},e.sv.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.sv.trimmer=e.trimmerSupport.generateTrimmer(e.sv.wordCharacters),e.Pipeline.registerFunction(e.sv.trimmer,"trimmer-sv"),e.sv.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,t=new function(){function e(){var e,r=w.cursor+3;if(o=w.limit,0<=r||r<=w.limit){for(a=r;;){if(e=w.cursor,w.in_grouping(l,97,246)){w.cursor=e;break}if(w.cursor=e,w.cursor>=w.limit)return;w.cursor++}for(;!w.out_grouping(l,97,246);){if(w.cursor>=w.limit)return;w.cursor++}o=w.cursor,o=o&&(w.limit_backward=o,w.cursor=w.limit,w.ket=w.cursor,e=w.find_among_b(u,37),w.limit_backward=r,e))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:w.in_grouping_b(d,98,121)&&w.slice_del()}}function i(){var e=w.limit_backward;w.cursor>=o&&(w.limit_backward=o,w.cursor=w.limit,w.find_among_b(c,7)&&(w.cursor=w.limit,w.ket=w.cursor,w.cursor>w.limit_backward&&(w.bra=--w.cursor,w.slice_del())),w.limit_backward=e)}function s(){var e,r;if(w.cursor>=o){if(r=w.limit_backward,w.limit_backward=o,w.cursor=w.limit,w.ket=w.cursor,e=w.find_among_b(m,5))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:w.slice_from("lös");break;case 3:w.slice_from("full")}w.limit_backward=r}}var a,o,u=[new r("a",-1,1),new r("arna",0,1),new r("erna",0,1),new r("heterna",2,1),new r("orna",0,1),new r("ad",-1,1),new r("e",-1,1),new r("ade",6,1),new r("ande",6,1),new r("arne",6,1),new r("are",6,1),new r("aste",6,1),new r("en",-1,1),new r("anden",12,1),new r("aren",12,1),new r("heten",12,1),new r("ern",-1,1),new r("ar",-1,1),new r("er",-1,1),new r("heter",18,1),new r("or",-1,1),new r("s",-1,2),new r("as",21,1),new r("arnas",22,1),new r("ernas",22,1),new r("ornas",22,1),new r("es",21,1),new r("ades",26,1),new r("andes",26,1),new r("ens",21,1),new r("arens",29,1),new r("hetens",29,1),new r("erns",21,1),new r("at",-1,1),new r("andet",-1,1),new r("het",-1,1),new r("ast",-1,1)],c=[new r("dd",-1,-1),new r("gd",-1,-1),new r("nn",-1,-1),new 
r("dt",-1,-1),new r("gt",-1,-1),new r("kt",-1,-1),new r("tt",-1,-1)],m=[new r("ig",-1,1),new r("lig",0,1),new r("els",-1,1),new r("fullt",-1,3),new r("löst",-1,2)],l=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,24,0,32],d=[119,127,149],w=new n;this.setCurrent=function(e){w.setCurrent(e)},this.getCurrent=function(){return w.getCurrent()},this.stem=function(){var r=w.cursor;return e(),w.limit_backward=r,w.cursor=w.limit,t(),w.cursor=w.limit,i(),w.cursor=w.limit,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return t.setCurrent(e),t.stem(),t.getCurrent()}):(t.setCurrent(e),t.stem(),t.getCurrent())}}(),e.Pipeline.registerFunction(e.sv.stemmer,"stemmer-sv"),e.sv.stopWordFilter=e.generateStopWordFilter("alla allt att av blev bli blir blivit de dem den denna deras dess dessa det detta dig din dina ditt du där då efter ej eller en er era ert ett från för ha hade han hans har henne hennes hon honom hur här i icke ingen inom inte jag ju kan kunde man med mellan men mig min mina mitt mot mycket ni nu när någon något några och om oss på samma sedan sig sin sina sitta själv skulle som så sådan sådana sådant till under upp ut utan vad var vara varför varit varje vars vart vem vi vid vilka vilkas vilken vilket vår våra vårt än är åt över".split(" ")),e.Pipeline.registerFunction(e.sv.stopWordFilter,"stopWordFilter-sv")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.tr.min.js b/assets/javascripts/lunr/min/lunr.tr.min.js new file mode 100644 index 00000000..563f6ec1 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.tr.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Turkish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(r,i){"function"==typeof define&&define.amd?define(i):"object"==typeof exports?module.exports=i():i()(r.lunr)}(this,function(){return function(r){if(void 0===r)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===r.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");r.tr=function(){this.pipeline.reset(),this.pipeline.add(r.tr.trimmer,r.tr.stopWordFilter,r.tr.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(r.tr.stemmer))},r.tr.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",r.tr.trimmer=r.trimmerSupport.generateTrimmer(r.tr.wordCharacters),r.Pipeline.registerFunction(r.tr.trimmer,"trimmer-tr"),r.tr.stemmer=function(){var i=r.stemmerSupport.Among,e=r.stemmerSupport.SnowballProgram,n=new function(){function r(r,i,e){for(;;){var n=Dr.limit-Dr.cursor;if(Dr.in_grouping_b(r,i,e)){Dr.cursor=Dr.limit-n;break}if(Dr.cursor=Dr.limit-n,Dr.cursor<=Dr.limit_backward)return!1;Dr.cursor--}return!0}function n(){var i,e;i=Dr.limit-Dr.cursor,r(Wr,97,305);for(var n=0;nDr.limit_backward&&(Dr.cursor--,e=Dr.limit-Dr.cursor,i()))?(Dr.cursor=Dr.limit-e,!0):(Dr.cursor=Dr.limit-n,r()?(Dr.cursor=Dr.limit-n,!1):(Dr.cursor=Dr.limit-n,!(Dr.cursor<=Dr.limit_backward)&&(Dr.cursor--,!!i()&&(Dr.cursor=Dr.limit-n,!0))))}function u(r){return t(r,function(){return Dr.in_grouping_b(Wr,97,305)})}function o(){return u(function(){return Dr.eq_s_b(1,"n")})}function s(){return u(function(){return Dr.eq_s_b(1,"s")})}function c(){return u(function(){return Dr.eq_s_b(1,"y")})}function l(){return t(function(){return Dr.in_grouping_b(Lr,105,305)},function(){return Dr.out_grouping_b(Wr,97,305)})}function a(){return Dr.find_among_b(ur,10)&&l()}function m(){return n()&&Dr.in_grouping_b(Lr,105,305)&&s()}function d(){return Dr.find_among_b(or,2)}function f(){return n()&&Dr.in_grouping_b(Lr,105,305)&&c()}function b(){return n()&&Dr.find_among_b(sr,4)}function w(){return n()&&Dr.find_among_b(cr,4)&&o()}function _(){return n()&&Dr.find_among_b(lr,2)&&c()}function k(){return n()&&Dr.find_among_b(ar,2)}function p(){return n()&&Dr.find_among_b(mr,4)}function g(){return n()&&Dr.find_among_b(dr,2)}function y(){return n()&&Dr.find_among_b(fr,4)}function z(){return n()&&Dr.find_among_b(br,2)}function v(){return n()&&Dr.find_among_b(wr,2)&&c()}function h(){return Dr.eq_s_b(2,"ki")}function q(){return n()&&Dr.find_among_b(_r,2)&&o()}function C(){return n()&&Dr.find_among_b(kr,4)&&c()}function P(){return n()&&Dr.find_among_b(pr,4)}function F(){return n()&&Dr.find_among_b(gr,4)&&c()}function S(){return Dr.find_among_b(yr,4)}function W(){return n()&&Dr.find_among_b(zr,2)}function L(){return n()&&Dr.find_among_b(vr,4)}function x(){return n()&&Dr.find_among_b(hr,8)}function A(){return Dr.find_among_b(qr,2)}function E(){return n()&&Dr.find_among_b(Cr,32)&&c()}function j(){return Dr.find_among_b(Pr,8)&&c()}function T(){return n()&&Dr.find_among_b(Fr,4)&&c()}function Z(){return Dr.eq_s_b(3,"ken")&&c()}function B(){var r=Dr.limit-Dr.cursor;return!(T()||(Dr.cursor=Dr.limit-r,E()||(Dr.cursor=Dr.limit-r,j()||(Dr.cursor=Dr.limit-r,Z()))))}function D(){if(A()){var r=Dr.limit-Dr.cursor;if(S()||(Dr.cursor=Dr.limit-r,W()||(Dr.cursor=Dr.limit-r,C()||(Dr.cursor=Dr.limit-r,P()||(Dr.cursor=Dr.limit-r,F()||(Dr.cursor=Dr.limit-r))))),T())return!1}return!0}function G(){if(W()){Dr.bra=Dr.cursor,Dr.slice_del();var r=Dr.limit-Dr.cursor;return Dr.ket=Dr.cursor,x()||(Dr.cursor=Dr.limit-r,E()||(Dr.cursor=Dr.limit-r,j()||(Dr.cursor=Dr.limit-r,T()||(Dr.cursor=Dr.limit-r)))),nr=!1,!1}return!0}function H(){if(!L())return!0;var r=Dr.limit-Dr.cursor;return!E()&&(Dr.cursor=Dr.limit-r,!j())}function I(){var 
r,i=Dr.limit-Dr.cursor;return!(S()||(Dr.cursor=Dr.limit-i,F()||(Dr.cursor=Dr.limit-i,P()||(Dr.cursor=Dr.limit-i,C()))))||(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,T()||(Dr.cursor=Dr.limit-r),!1)}function J(){var r,i=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,nr=!0,B()&&(Dr.cursor=Dr.limit-i,D()&&(Dr.cursor=Dr.limit-i,G()&&(Dr.cursor=Dr.limit-i,H()&&(Dr.cursor=Dr.limit-i,I()))))){if(Dr.cursor=Dr.limit-i,!x())return;Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,r=Dr.limit-Dr.cursor,S()||(Dr.cursor=Dr.limit-r,W()||(Dr.cursor=Dr.limit-r,C()||(Dr.cursor=Dr.limit-r,P()||(Dr.cursor=Dr.limit-r,F()||(Dr.cursor=Dr.limit-r))))),T()||(Dr.cursor=Dr.limit-r)}Dr.bra=Dr.cursor,Dr.slice_del()}function K(){var r,i,e,n;if(Dr.ket=Dr.cursor,h()){if(r=Dr.limit-Dr.cursor,p())return Dr.bra=Dr.cursor,Dr.slice_del(),i=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,W()?(Dr.bra=Dr.cursor,Dr.slice_del(),K()):(Dr.cursor=Dr.limit-i,a()&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()))),!0;if(Dr.cursor=Dr.limit-r,w()){if(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,e=Dr.limit-Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else{if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,!a()&&(Dr.cursor=Dr.limit-e,!m()&&(Dr.cursor=Dr.limit-e,!K())))return!0;Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())}return!0}if(Dr.cursor=Dr.limit-r,g()){if(n=Dr.limit-Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else if(Dr.cursor=Dr.limit-n,m())Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K());else if(Dr.cursor=Dr.limit-n,!K())return!1;return!0}}return!1}function M(r){if(Dr.ket=Dr.cursor,!g()&&(Dr.cursor=Dr.limit-r,!k()))return!1;var i=Dr.limit-Dr.cursor;if(d())Dr.bra=Dr.cursor,Dr.slice_del();else if(Dr.cursor=Dr.limit-i,m())Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K());else if(Dr.cursor=Dr.limit-i,!K())return!1;return!0}function N(r){if(Dr.ket=Dr.cursor,!z()&&(Dr.cursor=Dr.limit-r,!b()))return!1;var i=Dr.limit-Dr.cursor;return!(!m()&&(Dr.cursor=Dr.limit-i,!d()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()),!0)}function O(){var r,i=Dr.limit-Dr.cursor;return Dr.ket=Dr.cursor,!(!w()&&(Dr.cursor=Dr.limit-i,!v()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,!(!W()||(Dr.bra=Dr.cursor,Dr.slice_del(),!K()))||(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!(a()||(Dr.cursor=Dr.limit-r,m()||(Dr.cursor=Dr.limit-r,K())))||(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()),!0)))}function Q(){var r,i,e=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,!p()&&(Dr.cursor=Dr.limit-e,!f()&&(Dr.cursor=Dr.limit-e,!_())))return!1;if(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,r=Dr.limit-Dr.cursor,a())Dr.bra=Dr.cursor,Dr.slice_del(),i=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,W()||(Dr.cursor=Dr.limit-i);else if(Dr.cursor=Dr.limit-r,!W())return!0;return Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,K(),!0}function R(){var r,i,e=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,W())return Dr.bra=Dr.cursor,Dr.slice_del(),void 
K();if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,q())if(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else{if(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!a()&&(Dr.cursor=Dr.limit-r,!m())){if(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!W())return;if(Dr.bra=Dr.cursor,Dr.slice_del(),!K())return}Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())}else if(Dr.cursor=Dr.limit-e,!M(e)&&(Dr.cursor=Dr.limit-e,!N(e))){if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,y())return Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,i=Dr.limit-Dr.cursor,void(a()?(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())):(Dr.cursor=Dr.limit-i,W()?(Dr.bra=Dr.cursor,Dr.slice_del(),K()):(Dr.cursor=Dr.limit-i,K())));if(Dr.cursor=Dr.limit-e,!O()){if(Dr.cursor=Dr.limit-e,d())return Dr.bra=Dr.cursor,void Dr.slice_del();Dr.cursor=Dr.limit-e,K()||(Dr.cursor=Dr.limit-e,Q()||(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,(a()||(Dr.cursor=Dr.limit-e,m()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()))))}}}function U(){var r;if(Dr.ket=Dr.cursor,r=Dr.find_among_b(Sr,4))switch(Dr.bra=Dr.cursor,r){case 1:Dr.slice_from("p");break;case 2:Dr.slice_from("ç");break;case 3:Dr.slice_from("t");break;case 4:Dr.slice_from("k")}}function V(){for(;;){var r=Dr.limit-Dr.cursor;if(Dr.in_grouping_b(Wr,97,305)){Dr.cursor=Dr.limit-r;break}if(Dr.cursor=Dr.limit-r,Dr.cursor<=Dr.limit_backward)return!1;Dr.cursor--}return!0}function X(r,i,e){if(Dr.cursor=Dr.limit-r,V()){var n=Dr.limit-Dr.cursor;if(!Dr.eq_s_b(1,i)&&(Dr.cursor=Dr.limit-n,!Dr.eq_s_b(1,e)))return!0;Dr.cursor=Dr.limit-r;var t=Dr.cursor;return Dr.insert(Dr.cursor,Dr.cursor,e),Dr.cursor=t,!1}return!0}function Y(){var r=Dr.limit-Dr.cursor;(Dr.eq_s_b(1,"d")||(Dr.cursor=Dr.limit-r,Dr.eq_s_b(1,"g")))&&X(r,"a","ı")&&X(r,"e","i")&&X(r,"o","u")&&X(r,"ö","ü")}function $(){for(var r,i=Dr.cursor,e=2;;){for(r=Dr.cursor;!Dr.in_grouping(Wr,97,305);){if(Dr.cursor>=Dr.limit)return Dr.cursor=r,!(e>0)&&(Dr.cursor=i,!0);Dr.cursor++}e--}}function rr(r,i,e){for(;!Dr.eq_s(i,e);){if(Dr.cursor>=Dr.limit)return!0;Dr.cursor++}return(tr=i)!=Dr.limit||(Dr.cursor=r,!1)}function ir(){var r=Dr.cursor;return!rr(r,2,"ad")||(Dr.cursor=r,!rr(r,5,"soyad"))}function er(){var r=Dr.cursor;return!ir()&&(Dr.limit_backward=r,Dr.cursor=Dr.limit,Y(),Dr.cursor=Dr.limit,U(),!0)}var nr,tr,ur=[new i("m",-1,-1),new i("n",-1,-1),new i("miz",-1,-1),new i("niz",-1,-1),new i("muz",-1,-1),new i("nuz",-1,-1),new i("müz",-1,-1),new i("nüz",-1,-1),new i("mız",-1,-1),new i("nız",-1,-1)],or=[new i("leri",-1,-1),new i("ları",-1,-1)],sr=[new i("ni",-1,-1),new i("nu",-1,-1),new i("nü",-1,-1),new i("nı",-1,-1)],cr=[new i("in",-1,-1),new i("un",-1,-1),new i("ün",-1,-1),new i("ın",-1,-1)],lr=[new i("a",-1,-1),new i("e",-1,-1)],ar=[new i("na",-1,-1),new i("ne",-1,-1)],mr=[new i("da",-1,-1),new i("ta",-1,-1),new i("de",-1,-1),new i("te",-1,-1)],dr=[new i("nda",-1,-1),new i("nde",-1,-1)],fr=[new i("dan",-1,-1),new i("tan",-1,-1),new i("den",-1,-1),new i("ten",-1,-1)],br=[new i("ndan",-1,-1),new i("nden",-1,-1)],wr=[new i("la",-1,-1),new i("le",-1,-1)],_r=[new i("ca",-1,-1),new i("ce",-1,-1)],kr=[new i("im",-1,-1),new i("um",-1,-1),new i("üm",-1,-1),new i("ım",-1,-1)],pr=[new i("sin",-1,-1),new i("sun",-1,-1),new i("sün",-1,-1),new i("sın",-1,-1)],gr=[new i("iz",-1,-1),new i("uz",-1,-1),new i("üz",-1,-1),new i("ız",-1,-1)],yr=[new i("siniz",-1,-1),new i("sunuz",-1,-1),new i("sünüz",-1,-1),new 
i("sınız",-1,-1)],zr=[new i("lar",-1,-1),new i("ler",-1,-1)],vr=[new i("niz",-1,-1),new i("nuz",-1,-1),new i("nüz",-1,-1),new i("nız",-1,-1)],hr=[new i("dir",-1,-1),new i("tir",-1,-1),new i("dur",-1,-1),new i("tur",-1,-1),new i("dür",-1,-1),new i("tür",-1,-1),new i("dır",-1,-1),new i("tır",-1,-1)],qr=[new i("casına",-1,-1),new i("cesine",-1,-1)],Cr=[new i("di",-1,-1),new i("ti",-1,-1),new i("dik",-1,-1),new i("tik",-1,-1),new i("duk",-1,-1),new i("tuk",-1,-1),new i("dük",-1,-1),new i("tük",-1,-1),new i("dık",-1,-1),new i("tık",-1,-1),new i("dim",-1,-1),new i("tim",-1,-1),new i("dum",-1,-1),new i("tum",-1,-1),new i("düm",-1,-1),new i("tüm",-1,-1),new i("dım",-1,-1),new i("tım",-1,-1),new i("din",-1,-1),new i("tin",-1,-1),new i("dun",-1,-1),new i("tun",-1,-1),new i("dün",-1,-1),new i("tün",-1,-1),new i("dın",-1,-1),new i("tın",-1,-1),new i("du",-1,-1),new i("tu",-1,-1),new i("dü",-1,-1),new i("tü",-1,-1),new i("dı",-1,-1),new i("tı",-1,-1)],Pr=[new i("sa",-1,-1),new i("se",-1,-1),new i("sak",-1,-1),new i("sek",-1,-1),new i("sam",-1,-1),new i("sem",-1,-1),new i("san",-1,-1),new i("sen",-1,-1)],Fr=[new i("miş",-1,-1),new i("muş",-1,-1),new i("müş",-1,-1),new i("mış",-1,-1)],Sr=[new i("b",-1,1),new i("c",-1,2),new i("d",-1,3),new i("ğ",-1,4)],Wr=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,32,8,0,0,0,0,0,0,1],Lr=[1,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,0,0,0,0,0,0,1],xr=[1,64,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],Ar=[17,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,130],Er=[1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],jr=[17],Tr=[65],Zr=[65],Br=[["a",xr,97,305],["e",Ar,101,252],["ı",Er,97,305],["i",jr,101,105],["o",Tr,111,117],["ö",Zr,246,252],["u",Tr,111,117]],Dr=new e;this.setCurrent=function(r){Dr.setCurrent(r)},this.getCurrent=function(){return Dr.getCurrent()},this.stem=function(){return!!($()&&(Dr.limit_backward=Dr.cursor,Dr.cursor=Dr.limit,J(),Dr.cursor=Dr.limit,nr&&(R(),Dr.cursor=Dr.limit_backward,er())))}};return function(r){return"function"==typeof r.update?r.update(function(r){return n.setCurrent(r),n.stem(),n.getCurrent()}):(n.setCurrent(r),n.stem(),n.getCurrent())}}(),r.Pipeline.registerFunction(r.tr.stemmer,"stemmer-tr"),r.tr.stopWordFilter=r.generateStopWordFilter("acaba altmış altı ama ancak arada aslında ayrıca bana bazı belki ben benden beni benim beri beş bile bin bir biri birkaç birkez birçok birşey birşeyi biz bizden bize bizi bizim bu buna bunda bundan bunlar bunları bunların bunu bunun burada böyle böylece da daha dahi de defa değil diye diğer doksan dokuz dolayı dolayısıyla dört edecek eden ederek edilecek ediliyor edilmesi ediyor elli en etmesi etti ettiği ettiğini eğer gibi göre halen hangi hatta hem henüz hep hepsi her herhangi herkesin hiç hiçbir iki ile ilgili ise itibaren itibariyle için işte kadar karşın katrilyon kendi kendilerine kendini kendisi kendisine kendisini kez ki kim kimden kime kimi kimse kırk milyar milyon mu mü mı nasıl ne neden nedenle nerde nerede nereye niye niçin o olan olarak oldu olduklarını olduğu olduğunu olmadı olmadığı olmak olması olmayan olmaz olsa olsun olup olur olursa oluyor on ona ondan onlar onlardan onları onların onu onun otuz oysa pek rağmen sadece sanki sekiz seksen sen senden seni senin siz sizden sizi sizin tarafından trilyon tüm var vardı ve veya ya yani yapacak yapmak yaptı yaptıkları yaptığı yaptığını yapılan yapılması yapıyor yedi yerine yetmiş yine yirmi yoksa yüz zaten çok çünkü öyle üzere üç şey şeyden şeyi şeyler şu şuna şunda şundan şunları şunu şöyle".split(" 
")),r.Pipeline.registerFunction(r.tr.stopWordFilter,"stopWordFilter-tr")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.vi.min.js b/assets/javascripts/lunr/min/lunr.vi.min.js new file mode 100644 index 00000000..22aed28c --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.vi.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.vi=function(){this.pipeline.reset(),this.pipeline.add(e.vi.stopWordFilter,e.vi.trimmer)},e.vi.wordCharacters="[A-Za-ẓ̀͐́͑̉̃̓ÂâÊêÔôĂ-ăĐ-đƠ-ơƯ-ư]",e.vi.trimmer=e.trimmerSupport.generateTrimmer(e.vi.wordCharacters),e.Pipeline.registerFunction(e.vi.trimmer,"trimmer-vi"),e.vi.stopWordFilter=e.generateStopWordFilter("là cái nhưng mà".split(" "))}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/tinyseg.min.js b/assets/javascripts/lunr/tinyseg.min.js new file mode 100644 index 00000000..302befbb --- /dev/null +++ b/assets/javascripts/lunr/tinyseg.min.js @@ -0,0 +1 @@ +!function(_,t){"function"==typeof define&&define.amd?define(t):"object"==typeof exports?module.exports=t():t()(_.lunr)}(this,(function(){return function(_){function t(){var _={"[一二三四五六七八九十百千万億兆]":"M","[一-龠々〆ヵヶ]":"H","[ぁ-ん]":"I","[ァ-ヴーア-ン゙ー]":"K","[a-zA-Za-zA-Z]":"A","[0-90-9]":"N"};for(var t in this.chartype_=[],_){var H=new RegExp(t);this.chartype_.push([H,_[t]])}return this.BIAS__=-332,this.BC1__={HH:6,II:2461,KH:406,OH:-1378},this.BC2__={AA:-3267,AI:2744,AN:-878,HH:-4070,HM:-1711,HN:4012,HO:3761,IA:1327,IH:-1184,II:-1332,IK:1721,IO:5492,KI:3831,KK:-8741,MH:-3132,MK:3334,OO:-2920},this.BC3__={HH:996,HI:626,HK:-721,HN:-1307,HO:-836,IH:-301,KK:2762,MK:1079,MM:4034,OA:-1652,OH:266},this.BP1__={BB:295,OB:304,OO:-125,UB:352},this.BP2__={BO:60,OO:-1762},this.BQ1__={BHH:1150,BHM:1521,BII:-1158,BIM:886,BMH:1208,BNH:449,BOH:-91,BOO:-2597,OHI:451,OIH:-296,OKA:1851,OKH:-1020,OKK:904,OOO:2965},this.BQ2__={BHH:118,BHI:-1159,BHM:466,BIH:-919,BKK:-1720,BKO:864,OHH:-1139,OHM:-181,OIH:153,UHI:-1146},this.BQ3__={BHH:-792,BHI:2664,BII:-299,BKI:419,BMH:937,BMM:8335,BNN:998,BOH:775,OHH:2174,OHM:439,OII:280,OKH:1798,OKI:-793,OKO:-2242,OMH:-2402,OOO:11699},this.BQ4__={BHH:-3895,BIH:3761,BII:-4654,BIK:1348,BKK:-1806,BMI:-3385,BOO:-12396,OAH:926,OHH:266,OHK:-2036,ONN:-973},this.BW1__={",と":660,",同":727,"B1あ":1404,"B1同":542,"、と":660,"、同":727,"」と":1682,"あっ":1505,"いう":1743,"いっ":-2055,"いる":672,"うし":-4817,"うん":665,"から":3472,"がら":600,"こう":-790,"こと":2083,"こん":-1262,"さら":-4143,"さん":4573,"した":2641,"して":1104,"すで":-3399,"そこ":1977,"それ":-871,"たち":1122,"ため":601,"った":3463,"つい":-802,"てい":805,"てき":1249,"でき":1127,"です":3445,"では":844,"とい":-4915,"とみ":1922,"どこ":3887,"ない":5713,"なっ":3015,"など":7379,"なん":-1113,"にし":2468,"には":1498,"にも":1671,"に対":-912,"の一":-501,"の中":741,"ませ":2448,"まで":1711,"まま":2600,"まる":-2155,"やむ":-1947,"よっ":-2565,"れた":2369,"れで":-913,"をし":1860,"を見":731,"亡く":-1886,"京都":2558,"取り":-2784,"大き":-2604,"大阪":1497,"平方":-2314,"引き":-1336,"日本":-195,"本当":-2423,"毎日":-2113,"目指":-724,"B1あ":1404,"B1同":542,"」と":1682},this.BW2__={"..":-11822,11:-669,"――":-5730,"−−":-13175,"いう":-1609,"うか":2490,"かし":-1350,"かも":-602,"から":-7194,"かれ":4612,"がい":853,"がら":-3198,"きた":1941,"くな":-1597,"こと":-8392,"この":-4193,"させ":4533,"され":13168,"さん":-3977,"しい":-1819,"
しか":-545,"した":5078,"して":972,"しな":939,"その":-3744,"たい":-1253,"たた":-662,"ただ":-3857,"たち":-786,"たと":1224,"たは":-939,"った":4589,"って":1647,"っと":-2094,"てい":6144,"てき":3640,"てく":2551,"ては":-3110,"ても":-3065,"でい":2666,"でき":-1528,"でし":-3828,"です":-4761,"でも":-4203,"とい":1890,"とこ":-1746,"とと":-2279,"との":720,"とみ":5168,"とも":-3941,"ない":-2488,"なが":-1313,"など":-6509,"なの":2614,"なん":3099,"にお":-1615,"にし":2748,"にな":2454,"によ":-7236,"に対":-14943,"に従":-4688,"に関":-11388,"のか":2093,"ので":-7059,"のに":-6041,"のの":-6125,"はい":1073,"はが":-1033,"はず":-2532,"ばれ":1813,"まし":-1316,"まで":-6621,"まれ":5409,"めて":-3153,"もい":2230,"もの":-10713,"らか":-944,"らし":-1611,"らに":-1897,"りし":651,"りま":1620,"れた":4270,"れて":849,"れば":4114,"ろう":6067,"われ":7901,"を通":-11877,"んだ":728,"んな":-4115,"一人":602,"一方":-1375,"一日":970,"一部":-1051,"上が":-4479,"会社":-1116,"出て":2163,"分の":-7758,"同党":970,"同日":-913,"大阪":-2471,"委員":-1250,"少な":-1050,"年度":-8669,"年間":-1626,"府県":-2363,"手権":-1982,"新聞":-4066,"日新":-722,"日本":-7068,"日米":3372,"曜日":-601,"朝鮮":-2355,"本人":-2697,"東京":-1543,"然と":-1384,"社会":-1276,"立て":-990,"第に":-1612,"米国":-4268,"11":-669},this.BW3__={"あた":-2194,"あり":719,"ある":3846,"い.":-1185,"い。":-1185,"いい":5308,"いえ":2079,"いく":3029,"いた":2056,"いっ":1883,"いる":5600,"いわ":1527,"うち":1117,"うと":4798,"えと":1454,"か.":2857,"か。":2857,"かけ":-743,"かっ":-4098,"かに":-669,"から":6520,"かり":-2670,"が,":1816,"が、":1816,"がき":-4855,"がけ":-1127,"がっ":-913,"がら":-4977,"がり":-2064,"きた":1645,"けど":1374,"こと":7397,"この":1542,"ころ":-2757,"さい":-714,"さを":976,"し,":1557,"し、":1557,"しい":-3714,"した":3562,"して":1449,"しな":2608,"しま":1200,"す.":-1310,"す。":-1310,"する":6521,"ず,":3426,"ず、":3426,"ずに":841,"そう":428,"た.":8875,"た。":8875,"たい":-594,"たの":812,"たり":-1183,"たる":-853,"だ.":4098,"だ。":4098,"だっ":1004,"った":-4748,"って":300,"てい":6240,"てお":855,"ても":302,"です":1437,"でに":-1482,"では":2295,"とう":-1387,"とし":2266,"との":541,"とも":-3543,"どう":4664,"ない":1796,"なく":-903,"など":2135,"に,":-1021,"に、":-1021,"にし":1771,"にな":1906,"には":2644,"の,":-724,"の、":-724,"の子":-1e3,"は,":1337,"は、":1337,"べき":2181,"まし":1113,"ます":6943,"まっ":-1549,"まで":6154,"まれ":-793,"らし":1479,"られ":6820,"るる":3818,"れ,":854,"れ、":854,"れた":1850,"れて":1375,"れば":-3246,"れる":1091,"われ":-605,"んだ":606,"んで":798,"カ月":990,"会議":860,"入り":1232,"大会":2217,"始め":1681,"市":965,"新聞":-5055,"日,":974,"日、":974,"社会":2024,"カ月":990},this.TC1__={AAA:1093,HHH:1029,HHM:580,HII:998,HOH:-390,HOM:-331,IHI:1169,IOH:-142,IOI:-1015,IOM:467,MMH:187,OOI:-1832},this.TC2__={HHO:2088,HII:-1023,HMM:-1154,IHI:-1965,KKH:703,OII:-2649},this.TC3__={AAA:-294,HHH:346,HHI:-341,HII:-1088,HIK:731,HOH:-1486,IHH:128,IHI:-3041,IHO:-1935,IIH:-825,IIM:-1035,IOI:-542,KHH:-1216,KKA:491,KKH:-1217,KOK:-1009,MHH:-2694,MHM:-457,MHO:123,MMH:-471,NNH:-1689,NNO:662,OHO:-3393},this.TC4__={HHH:-203,HHI:1344,HHK:365,HHM:-122,HHN:182,HHO:669,HIH:804,HII:679,HOH:446,IHH:695,IHO:-2324,IIH:321,III:1497,IIO:656,IOO:54,KAK:4845,KKA:3386,KKK:3065,MHH:-405,MHI:201,MMH:-241,MMM:661,MOM:841},this.TQ1__={BHHH:-227,BHHI:316,BHIH:-132,BIHH:60,BIII:1595,BNHH:-744,BOHH:225,BOOO:-908,OAKK:482,OHHH:281,OHIH:249,OIHI:200,OIIH:-68},this.TQ2__={BIHH:-1401,BIII:-1033,BKAK:-543,BOOO:-5591},this.TQ3__={BHHH:478,BHHM:-1073,BHIH:222,BHII:-504,BIIH:-116,BIII:-105,BMHI:-863,BMHM:-464,BOMH:620,OHHH:346,OHHI:1729,OHII:997,OHMH:481,OIHH:623,OIIH:1344,OKAK:2792,OKHH:587,OKKA:679,OOHH:110,OOII:-685},this.TQ4__={BHHH:-721,BHHM:-3604,BHII:-966,BIIH:-607,BIII:-2181,OAAA:-2763,OAKK:180,OHHH:-294,OHHI:2446,OHHO:480,OHIH:-1573,OIHH:1935,OIHI:-493,OIIH:626,OIII:-4007,OKAK:-8156},this.TW1__={"につい":-4681,"東京都":2026},this.TW2__={"ある程":-2049,"いった":-1256,"ころが":-2434,"しょう":3873,"その後":-4430,"だって":-1049,"ていた":1833,"として":-4657,"ともに":-4517,"もので
":1882,"一気に":-792,"初めて":-1512,"同時に":-8097,"大きな":-1255,"対して":-2721,"社会党":-3216},this.TW3__={"いただ":-1734,"してい":1314,"として":-4314,"につい":-5483,"にとっ":-5989,"に当た":-6247,"ので,":-727,"ので、":-727,"のもの":-600,"れから":-3752,"十二月":-2287},this.TW4__={"いう.":8576,"いう。":8576,"からな":-2348,"してい":2958,"たが,":1516,"たが、":1516,"ている":1538,"という":1349,"ました":5543,"ません":1097,"ようと":-4258,"よると":5865},this.UC1__={A:484,K:93,M:645,O:-505},this.UC2__={A:819,H:1059,I:409,M:3987,N:5775,O:646},this.UC3__={A:-1370,I:2311},this.UC4__={A:-2643,H:1809,I:-1032,K:-3450,M:3565,N:3876,O:6646},this.UC5__={H:313,I:-1238,K:-799,M:539,O:-831},this.UC6__={H:-506,I:-253,K:87,M:247,O:-387},this.UP1__={O:-214},this.UP2__={B:69,O:935},this.UP3__={B:189},this.UQ1__={BH:21,BI:-12,BK:-99,BN:142,BO:-56,OH:-95,OI:477,OK:410,OO:-2422},this.UQ2__={BH:216,BI:113,OK:1759},this.UQ3__={BA:-479,BH:42,BI:1913,BK:-7198,BM:3160,BN:6427,BO:14761,OI:-827,ON:-3212},this.UW1__={",":156,"、":156,"「":-463,"あ":-941,"う":-127,"が":-553,"き":121,"こ":505,"で":-201,"と":-547,"ど":-123,"に":-789,"の":-185,"は":-847,"も":-466,"や":-470,"よ":182,"ら":-292,"り":208,"れ":169,"を":-446,"ん":-137,"・":-135,"主":-402,"京":-268,"区":-912,"午":871,"国":-460,"大":561,"委":729,"市":-411,"日":-141,"理":361,"生":-408,"県":-386,"都":-718,"「":-463,"・":-135},this.UW2__={",":-829,"、":-829,"〇":892,"「":-645,"」":3145,"あ":-538,"い":505,"う":134,"お":-502,"か":1454,"が":-856,"く":-412,"こ":1141,"さ":878,"ざ":540,"し":1529,"す":-675,"せ":300,"そ":-1011,"た":188,"だ":1837,"つ":-949,"て":-291,"で":-268,"と":-981,"ど":1273,"な":1063,"に":-1764,"の":130,"は":-409,"ひ":-1273,"べ":1261,"ま":600,"も":-1263,"や":-402,"よ":1639,"り":-579,"る":-694,"れ":571,"を":-2516,"ん":2095,"ア":-587,"カ":306,"キ":568,"ッ":831,"三":-758,"不":-2150,"世":-302,"中":-968,"主":-861,"事":492,"人":-123,"会":978,"保":362,"入":548,"初":-3025,"副":-1566,"北":-3414,"区":-422,"大":-1769,"天":-865,"太":-483,"子":-1519,"学":760,"実":1023,"小":-2009,"市":-813,"年":-1060,"強":1067,"手":-1519,"揺":-1033,"政":1522,"文":-1355,"新":-1682,"日":-1815,"明":-1462,"最":-630,"朝":-1843,"本":-1650,"東":-931,"果":-665,"次":-2378,"民":-180,"気":-1740,"理":752,"発":529,"目":-1584,"相":-242,"県":-1165,"立":-763,"第":810,"米":509,"自":-1353,"行":838,"西":-744,"見":-3874,"調":1010,"議":1198,"込":3041,"開":1758,"間":-1257,"「":-645,"」":3145,"ッ":831,"ア":-587,"カ":306,"キ":568},this.UW3__={",":4889,1:-800,"−":-1723,"、":4889,"々":-2311,"〇":5827,"」":2670,"〓":-3573,"あ":-2696,"い":1006,"う":2342,"え":1983,"お":-4864,"か":-1163,"が":3271,"く":1004,"け":388,"げ":401,"こ":-3552,"ご":-3116,"さ":-1058,"し":-395,"す":584,"せ":3685,"そ":-5228,"た":842,"ち":-521,"っ":-1444,"つ":-1081,"て":6167,"で":2318,"と":1691,"ど":-899,"な":-2788,"に":2745,"の":4056,"は":4555,"ひ":-2171,"ふ":-1798,"へ":1199,"ほ":-5516,"ま":-4384,"み":-120,"め":1205,"も":2323,"や":-788,"よ":-202,"ら":727,"り":649,"る":5905,"れ":2773,"わ":-1207,"を":6620,"ん":-518,"ア":551,"グ":1319,"ス":874,"ッ":-1350,"ト":521,"ム":1109,"ル":1591,"ロ":2201,"ン":278,"・":-3794,"一":-1619,"下":-1759,"世":-2087,"両":3815,"中":653,"主":-758,"予":-1193,"二":974,"人":2742,"今":792,"他":1889,"以":-1368,"低":811,"何":4265,"作":-361,"保":-2439,"元":4858,"党":3593,"全":1574,"公":-3030,"六":755,"共":-1880,"円":5807,"再":3095,"分":457,"初":2475,"別":1129,"前":2286,"副":4437,"力":365,"動":-949,"務":-1872,"化":1327,"北":-1038,"区":4646,"千":-2309,"午":-783,"協":-1006,"口":483,"右":1233,"各":3588,"合":-241,"同":3906,"和":-837,"員":4513,"国":642,"型":1389,"場":1219,"外":-241,"妻":2016,"学":-1356,"安":-423,"実":-1008,"家":1078,"小":-513,"少":-3102,"州":1155,"市":3197,"平":-1804,"年":2416,"広":-1030,"府":1605,"度":1452,"建":-2352,"当":-3885,"得":1905,"思":-1291,"性":1822,"戸":-488,"指":-3973,"政":-2013,"教":-1479,"数":3222,"文":-1489,"新":1764,"日":2099,"旧":5792,"昨":-661,"時":-1248,"曜":-951
,"最":-937,"月":4125,"期":360,"李":3094,"村":364,"東":-805,"核":5156,"森":2438,"業":484,"氏":2613,"民":-1694,"決":-1073,"法":1868,"海":-495,"無":979,"物":461,"特":-3850,"生":-273,"用":914,"町":1215,"的":7313,"直":-1835,"省":792,"県":6293,"知":-1528,"私":4231,"税":401,"立":-960,"第":1201,"米":7767,"系":3066,"約":3663,"級":1384,"統":-4229,"総":1163,"線":1255,"者":6457,"能":725,"自":-2869,"英":785,"見":1044,"調":-562,"財":-733,"費":1777,"車":1835,"軍":1375,"込":-1504,"通":-1136,"選":-681,"郎":1026,"郡":4404,"部":1200,"金":2163,"長":421,"開":-1432,"間":1302,"関":-1282,"雨":2009,"電":-1045,"非":2066,"駅":1620,"1":-800,"」":2670,"・":-3794,"ッ":-1350,"ア":551,"グ":1319,"ス":874,"ト":521,"ム":1109,"ル":1591,"ロ":2201,"ン":278},this.UW4__={",":3930,".":3508,"―":-4841,"、":3930,"。":3508,"〇":4999,"「":1895,"」":3798,"〓":-5156,"あ":4752,"い":-3435,"う":-640,"え":-2514,"お":2405,"か":530,"が":6006,"き":-4482,"ぎ":-3821,"く":-3788,"け":-4376,"げ":-4734,"こ":2255,"ご":1979,"さ":2864,"し":-843,"じ":-2506,"す":-731,"ず":1251,"せ":181,"そ":4091,"た":5034,"だ":5408,"ち":-3654,"っ":-5882,"つ":-1659,"て":3994,"で":7410,"と":4547,"な":5433,"に":6499,"ぬ":1853,"ね":1413,"の":7396,"は":8578,"ば":1940,"ひ":4249,"び":-4134,"ふ":1345,"へ":6665,"べ":-744,"ほ":1464,"ま":1051,"み":-2082,"む":-882,"め":-5046,"も":4169,"ゃ":-2666,"や":2795,"ょ":-1544,"よ":3351,"ら":-2922,"り":-9726,"る":-14896,"れ":-2613,"ろ":-4570,"わ":-1783,"を":13150,"ん":-2352,"カ":2145,"コ":1789,"セ":1287,"ッ":-724,"ト":-403,"メ":-1635,"ラ":-881,"リ":-541,"ル":-856,"ン":-3637,"・":-4371,"ー":-11870,"一":-2069,"中":2210,"予":782,"事":-190,"井":-1768,"人":1036,"以":544,"会":950,"体":-1286,"作":530,"側":4292,"先":601,"党":-2006,"共":-1212,"内":584,"円":788,"初":1347,"前":1623,"副":3879,"力":-302,"動":-740,"務":-2715,"化":776,"区":4517,"協":1013,"参":1555,"合":-1834,"和":-681,"員":-910,"器":-851,"回":1500,"国":-619,"園":-1200,"地":866,"場":-1410,"塁":-2094,"士":-1413,"多":1067,"大":571,"子":-4802,"学":-1397,"定":-1057,"寺":-809,"小":1910,"屋":-1328,"山":-1500,"島":-2056,"川":-2667,"市":2771,"年":374,"庁":-4556,"後":456,"性":553,"感":916,"所":-1566,"支":856,"改":787,"政":2182,"教":704,"文":522,"方":-856,"日":1798,"時":1829,"最":845,"月":-9066,"木":-485,"来":-442,"校":-360,"業":-1043,"氏":5388,"民":-2716,"気":-910,"沢":-939,"済":-543,"物":-735,"率":672,"球":-1267,"生":-1286,"産":-1101,"田":-2900,"町":1826,"的":2586,"目":922,"省":-3485,"県":2997,"空":-867,"立":-2112,"第":788,"米":2937,"系":786,"約":2171,"経":1146,"統":-1169,"総":940,"線":-994,"署":749,"者":2145,"能":-730,"般":-852,"行":-792,"規":792,"警":-1184,"議":-244,"谷":-1e3,"賞":730,"車":-1481,"軍":1158,"輪":-1433,"込":-3370,"近":929,"道":-1291,"選":2596,"郎":-4866,"都":1192,"野":-1100,"銀":-2213,"長":357,"間":-2344,"院":-2297,"際":-2604,"電":-878,"領":-1659,"題":-792,"館":-1984,"首":1749,"高":2120,"「":1895,"」":3798,"・":-4371,"ッ":-724,"ー":-11870,"カ":2145,"コ":1789,"セ":1287,"ト":-403,"メ":-1635,"ラ":-881,"リ":-541,"ル":-856,"ン":-3637},this.UW5__={",":465,".":-299,1:-514,E2:-32768,"]":-2762,"、":465,"。":-299,"「":363,"あ":1655,"い":331,"う":-503,"え":1199,"お":527,"か":647,"が":-421,"き":1624,"ぎ":1971,"く":312,"げ":-983,"さ":-1537,"し":-1371,"す":-852,"だ":-1186,"ち":1093,"っ":52,"つ":921,"て":-18,"で":-850,"と":-127,"ど":1682,"な":-787,"に":-1224,"の":-635,"は":-578,"べ":1001,"み":502,"め":865,"ゃ":3350,"ょ":854,"り":-208,"る":429,"れ":504,"わ":419,"を":-1264,"ん":327,"イ":241,"ル":451,"ン":-343,"中":-871,"京":722,"会":-1153,"党":-654,"務":3519,"区":-901,"告":848,"員":2104,"大":-1296,"学":-548,"定":1785,"嵐":-1304,"市":-2991,"席":921,"年":1763,"思":872,"所":-814,"挙":1618,"新":-1682,"日":218,"月":-4353,"査":932,"格":1356,"機":-1508,"氏":-1347,"田":240,"町":-3912,"的":-3149,"相":1319,"省":-1052,"県":-4003,"研":-997,"社":-278,"空":-813,"統":1955,"者":-2233,"表":663,"語":-1073,"議":1219,"選":-1018,"郎":-368,"長":786,"間":1191,"題":2368,"館":-689,"1":-514,"E2":-
32768,"「":363,"イ":241,"ル":451,"ン":-343},this.UW6__={",":227,".":808,1:-270,E1:306,"、":227,"。":808,"あ":-307,"う":189,"か":241,"が":-73,"く":-121,"こ":-200,"じ":1782,"す":383,"た":-428,"っ":573,"て":-1014,"で":101,"と":-105,"な":-253,"に":-149,"の":-417,"は":-236,"も":-206,"り":187,"る":-135,"を":195,"ル":-673,"ン":-496,"一":-277,"中":201,"件":-800,"会":624,"前":302,"区":1792,"員":-1212,"委":798,"学":-960,"市":887,"広":-695,"後":535,"業":-697,"相":753,"社":-507,"福":974,"空":-822,"者":1811,"連":463,"郎":1082,"1":-270,"E1":306,"ル":-673,"ン":-496},this}t.prototype.ctype_=function(_){for(var t in this.chartype_)if(_.match(this.chartype_[t][0]))return this.chartype_[t][1];return"O"},t.prototype.ts_=function(_){return _||0},t.prototype.segment=function(_){if(null==_||null==_||""==_)return[];var t=[],H=["B3","B2","B1"],s=["O","O","O"],h=_.split("");for(K=0;K0&&(t.push(i),i="",N="B"),I=O,O=B,B=N,i+=H[K]}return t.push(i),t},_.TinySegmenter=t}})); \ No newline at end of file diff --git a/assets/javascripts/vendor.f81b9e8b.min.js b/assets/javascripts/vendor.f81b9e8b.min.js new file mode 100644 index 00000000..bed58dc1 --- /dev/null +++ b/assets/javascripts/vendor.f81b9e8b.min.js @@ -0,0 +1,31 @@ +(window.webpackJsonp=window.webpackJsonp||[]).push([[1],[function(t,e,n){"use strict";n.d(e,"f",(function(){return i})),n.d(e,"a",(function(){return o})),n.d(e,"e",(function(){return s})),n.d(e,"g",(function(){return u})),n.d(e,"k",(function(){return c})),n.d(e,"h",(function(){return a})),n.d(e,"i",(function(){return f})),n.d(e,"j",(function(){return h})),n.d(e,"d",(function(){return l})),n.d(e,"b",(function(){return p})),n.d(e,"c",(function(){return d})); +/*! ***************************************************************************** +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the Apache License, Version 2.0 (the "License"); you may not use +this file except in compliance with the License. You may obtain a copy of the +License at http://www.apache.org/licenses/LICENSE-2.0 + +THIS CODE IS PROVIDED ON AN *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED +WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE, +MERCHANTABLITY OR NON-INFRINGEMENT. + +See the Apache Version 2.0 License for specific language governing permissions +and limitations under the License. 
+***************************************************************************** */ +var r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)};function i(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)}var o=function(){return(o=Object.assign||function(t){for(var e,n=1,r=arguments.length;n0&&i[i.length-1])||6!==o[0]&&2!==o[0])){s=0;continue}if(3===o[0]&&(!i||o[1]>i[0]&&o[1]=t.length&&(t=void 0),{value:t&&t[r++],done:!t}}};throw new TypeError(e?"Object is not iterable.":"Symbol.iterator is not defined.")}function a(t,e){var n="function"==typeof Symbol&&t[Symbol.iterator];if(!n)return t;var r,i,o=n.call(t),s=[];try{for(;(void 0===e||e-- >0)&&!(r=o.next()).done;)s.push(r.value)}catch(t){i={error:t}}finally{try{r&&!r.done&&(n=o.return)&&n.call(o)}finally{if(i)throw i.error}}return s}function f(){for(var t=[],e=0;e1||u(t,e)}))})}function u(t,e){try{(n=i[t](e)).value instanceof l?Promise.resolve(n.value.v).then(c,a):f(o[0][2],n)}catch(t){f(o[0][3],t)}var n}function c(t){u("next",t)}function a(t){u("throw",t)}function f(t,e){t(e),o.shift(),o.length&&u(o[0][0],o[0][1])}}function d(t){if(!Symbol.asyncIterator)throw new TypeError("Symbol.asyncIterator is not defined.");var e,n=t[Symbol.asyncIterator];return n?n.call(t):(t=c(t),e={},r("next"),r("throw"),r("return"),e[Symbol.asyncIterator]=function(){return this},e);function r(n){e[n]=t[n]&&function(e){return new Promise((function(r,i){(function(t,e,n,r){Promise.resolve(r).then((function(e){t({value:e,done:n})}),e)})(r,i,(e=t[n](e)).done,e.value)}))}}}},,,function(t,e,n){"use strict";n.d(e,"a",(function(){return f}));var r=n(0),i=n(15),o=n(40),s=n(8),u=n(32),c=n(12),a=n(22),f=function(t){function e(n,r,i){var s=t.call(this)||this;switch(s.syncErrorValue=null,s.syncErrorThrown=!1,s.syncErrorThrowable=!1,s.isStopped=!1,arguments.length){case 0:s.destination=o.a;break;case 1:if(!n){s.destination=o.a;break}if("object"==typeof n){n instanceof e?(s.syncErrorThrowable=n.syncErrorThrowable,s.destination=n,n.add(s)):(s.syncErrorThrowable=!0,s.destination=new h(s,n));break}default:s.syncErrorThrowable=!0,s.destination=new h(s,n,r,i)}return s}return Object(r.f)(e,t),e.prototype[u.a]=function(){return this},e.create=function(t,n,r){var i=new e(t,n,r);return i.syncErrorThrowable=!1,i},e.prototype.next=function(t){this.isStopped||this._next(t)},e.prototype.error=function(t){this.isStopped||(this.isStopped=!0,this._error(t))},e.prototype.complete=function(){this.isStopped||(this.isStopped=!0,this._complete())},e.prototype.unsubscribe=function(){this.closed||(this.isStopped=!0,t.prototype.unsubscribe.call(this))},e.prototype._next=function(t){this.destination.next(t)},e.prototype._error=function(t){this.destination.error(t),this.unsubscribe()},e.prototype._complete=function(){this.destination.complete(),this.unsubscribe()},e.prototype._unsubscribeAndRecycle=function(){var t=this._parentOrParents;return this._parentOrParents=null,this.unsubscribe(),this.closed=!1,this.isStopped=!1,this._parentOrParents=t,this},e}(s.a),h=function(t){function e(e,n,r,s){var u,c=t.call(this)||this;c._parentSubscriber=e;var a=c;return Object(i.a)(n)?u=n:n&&(u=n.next,r=n.error,s=n.complete,n!==o.a&&(a=Object.create(n),Object(i.a)(a.unsubscribe)&&c.add(a.unsubscribe.bind(a)),a.unsubscribe=c.unsubscribe.bind(c))),c._context=a,c._next=u,c._error=r,c._complete=s,c}return 
Object(r.f)(e,t),e.prototype.next=function(t){if(!this.isStopped&&this._next){var e=this._parentSubscriber;c.a.useDeprecatedSynchronousErrorHandling&&e.syncErrorThrowable?this.__tryOrSetError(e,this._next,t)&&this.unsubscribe():this.__tryOrUnsub(this._next,t)}},e.prototype.error=function(t){if(!this.isStopped){var e=this._parentSubscriber,n=c.a.useDeprecatedSynchronousErrorHandling;if(this._error)n&&e.syncErrorThrowable?(this.__tryOrSetError(e,this._error,t),this.unsubscribe()):(this.__tryOrUnsub(this._error,t),this.unsubscribe());else if(e.syncErrorThrowable)n?(e.syncErrorValue=t,e.syncErrorThrown=!0):Object(a.a)(t),this.unsubscribe();else{if(this.unsubscribe(),n)throw t;Object(a.a)(t)}}},e.prototype.complete=function(){var t=this;if(!this.isStopped){var e=this._parentSubscriber;if(this._complete){var n=function(){return t._complete.call(t._context)};c.a.useDeprecatedSynchronousErrorHandling&&e.syncErrorThrowable?(this.__tryOrSetError(e,n),this.unsubscribe()):(this.__tryOrUnsub(n),this.unsubscribe())}else this.unsubscribe()}},e.prototype.__tryOrUnsub=function(t,e){try{t.call(this._context,e)}catch(t){if(this.unsubscribe(),c.a.useDeprecatedSynchronousErrorHandling)throw t;Object(a.a)(t)}},e.prototype.__tryOrSetError=function(t,e,n){if(!c.a.useDeprecatedSynchronousErrorHandling)throw new Error("bad call");try{e.call(this._context,n)}catch(e){return c.a.useDeprecatedSynchronousErrorHandling?(t.syncErrorValue=e,t.syncErrorThrown=!0,!0):(Object(a.a)(e),!0)}return!1},e.prototype._unsubscribe=function(){var t=this._parentSubscriber;this._context=null,this._parentSubscriber=null,t.unsubscribe()},e}(f)},,,function(t,e,n){"use strict";n.d(e,"a",(function(){return l}));var r=n(3);var i=n(32),o=n(40);var s=n(17),u=n(45),c=n(12),a=n(0),f=function(){var t=this;this.resolve=null,this.reject=null,this.promise=new Promise((function(e,n){t.resolve=e,t.reject=n}))};function h(t){return function(t){return Object(a.b)(this,arguments,(function(){var e,n,r,i,o,s,u,c;return Object(a.g)(this,(function(h){switch(h.label){case 0:e=[],n=[],r=!1,i=null,o=!1,s=t.subscribe({next:function(t){e.length>0?e.shift().resolve({value:t,done:!1}):n.push(t)},error:function(t){for(r=!0,i=t;e.length>0;)e.shift().reject(t)},complete:function(){for(o=!0;e.length>0;)e.shift().resolve({value:void 0,done:!0})}}),h.label=1;case 1:h.trys.push([1,16,17,18]),h.label=2;case 2:return n.length>0?[4,Object(a.d)(n.shift())]:[3,5];case 3:return[4,h.sent()];case 4:return h.sent(),[3,14];case 5:return o?[4,Object(a.d)(void 0)]:[3,7];case 6:return[2,h.sent()];case 7:if(!r)return[3,8];throw i;case 8:return u=new f,e.push(u),[4,Object(a.d)(u.promise)];case 9:return(c=h.sent()).done?[4,Object(a.d)(void 0)]:[3,11];case 10:return[2,h.sent()];case 11:return[4,Object(a.d)(c.value)];case 12:return[4,h.sent()];case 13:h.sent(),h.label=14;case 14:return[3,2];case 15:return[3,18];case 16:throw h.sent();case 17:return s.unsubscribe(),[7];case 18:return[2]}}))}))}(t)}var l=function(){function t(t){this._isScalar=!1,t&&(this._subscribe=t)}return t.prototype.lift=function(e){var n=new t;return n.source=this,n.operator=e,n},t.prototype.subscribe=function(t,e,n){var s=this.operator,u=function(t,e,n){if(t){if(t instanceof r.a)return t;if(t[i.a])return t[i.a]()}return t||e||n?new r.a(t,e,n):new 
r.a(o.a)}(t,e,n);if(s?u.add(s.call(u,this.source)):u.add(this.source||c.a.useDeprecatedSynchronousErrorHandling&&!u.syncErrorThrowable?this._subscribe(u):this._trySubscribe(u)),c.a.useDeprecatedSynchronousErrorHandling&&u.syncErrorThrowable&&(u.syncErrorThrowable=!1,u.syncErrorThrown))throw u.syncErrorValue;return u},t.prototype._trySubscribe=function(t){try{return this._subscribe(t)}catch(e){c.a.useDeprecatedSynchronousErrorHandling&&(t.syncErrorThrown=!0,t.syncErrorValue=e),!function(t){for(;t;){var e=t,n=e.closed,i=e.destination,o=e.isStopped;if(n||o)return!1;t=i&&i instanceof r.a?i:null}return!0}(t)?console.warn(e):t.error(e)}},t.prototype.forEach=function(t,e){var n=this;return new(e=p(e))((function(e,r){var i;i=n.subscribe((function(e){try{t(e)}catch(t){r(t),i&&i.unsubscribe()}}),r,e)}))},t.prototype._subscribe=function(t){var e=this.source;return e&&e.subscribe(t)},t.prototype[s.a]=function(){return this},t.prototype.pipe=function(){for(var t=[],e=0;e0?this._next(e.shift()):0===this.active&&this.hasCompleted&&this.destination.complete()},e}(o.a),h=n(49);function l(t){return void 0===t&&(t=Number.POSITIVE_INFINITY),function t(e,n,r){return void 0===r&&(r=Number.POSITIVE_INFINITY),"function"==typeof n?function(i){return i.pipe(t((function(t,r){return Object(c.a)(e(t,r)).pipe(Object(u.a)((function(e,i){return n(t,e,r,i)})))}),r))}:("number"==typeof n&&(r=n),function(t){return t.lift(new a(e,r))})}(h.a,t)}},function(t,e,n){"use strict";n.d(e,"b",(function(){return s})),n.d(e,"a",(function(){return c}));var r=n(0),i=n(3),o=n(41);function s(t,e){return void 0===e&&(e=0),function(n){return n.lift(new u(t,e))}}var u=function(){function t(t,e){void 0===e&&(e=0),this.scheduler=t,this.delay=e}return t.prototype.call=function(t,e){return e.subscribe(new c(t,this.scheduler,this.delay))},t}(),c=function(t){function e(e,n,r){void 0===r&&(r=0);var i=t.call(this,e)||this;return i.scheduler=n,i.delay=r,i}return Object(r.f)(e,t),e.dispatch=function(t){var e=t.notification,n=t.destination;e.observe(n),this.unsubscribe()},e.prototype.scheduleMessage=function(t){this.destination.add(this.scheduler.schedule(e.dispatch,this.delay,new a(t,this.destination)))},e.prototype._next=function(t){this.scheduleMessage(o.a.createNext(t))},e.prototype._error=function(t){this.scheduleMessage(o.a.createError(t)),this.unsubscribe()},e.prototype._complete=function(){this.scheduleMessage(o.a.createComplete()),this.unsubscribe()},e}(i.a),a=function(t,e){this.notification=t,this.destination=e}},,function(t,e,n){ +/*! 
+ * clipboard.js v2.0.6 + * https://clipboardjs.com/ + * + * Licensed MIT © Zeno Rocha + */ +var r;r=function(){return function(t){var e={};function n(r){if(e[r])return e[r].exports;var i=e[r]={i:r,l:!1,exports:{}};return t[r].call(i.exports,i,i.exports,n),i.l=!0,i.exports}return n.m=t,n.c=e,n.d=function(t,e,r){n.o(t,e)||Object.defineProperty(t,e,{enumerable:!0,get:r})},n.r=function(t){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(t,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(t,"__esModule",{value:!0})},n.t=function(t,e){if(1&e&&(t=n(t)),8&e)return t;if(4&e&&"object"==typeof t&&t&&t.__esModule)return t;var r=Object.create(null);if(n.r(r),Object.defineProperty(r,"default",{enumerable:!0,value:t}),2&e&&"string"!=typeof t)for(var i in t)n.d(r,i,function(e){return t[e]}.bind(null,i));return r},n.n=function(t){var e=t&&t.__esModule?function(){return t.default}:function(){return t};return n.d(e,"a",e),e},n.o=function(t,e){return Object.prototype.hasOwnProperty.call(t,e)},n.p="",n(n.s=6)}([function(t,e){t.exports=function(t){var e;if("SELECT"===t.nodeName)t.focus(),e=t.value;else if("INPUT"===t.nodeName||"TEXTAREA"===t.nodeName){var n=t.hasAttribute("readonly");n||t.setAttribute("readonly",""),t.select(),t.setSelectionRange(0,t.value.length),n||t.removeAttribute("readonly"),e=t.value}else{t.hasAttribute("contenteditable")&&t.focus();var r=window.getSelection(),i=document.createRange();i.selectNodeContents(t),r.removeAllRanges(),r.addRange(i),e=r.toString()}return e}},function(t,e){function n(){}n.prototype={on:function(t,e,n){var r=this.e||(this.e={});return(r[t]||(r[t]=[])).push({fn:e,ctx:n}),this},once:function(t,e,n){var r=this;function i(){r.off(t,i),e.apply(n,arguments)}return i._=e,this.on(t,i,n)},emit:function(t){for(var e=[].slice.call(arguments,1),n=((this.e||(this.e={}))[t]||[]).slice(),r=0,i=n.length;r0&&void 0!==arguments[0]?arguments[0]:{};this.action=t.action,this.container=t.container,this.emitter=t.emitter,this.target=t.target,this.text=t.text,this.trigger=t.trigger,this.selectedText=""}},{key:"initSelection",value:function(){this.text?this.selectFake():this.target&&this.selectTarget()}},{key:"selectFake",value:function(){var t=this,e="rtl"==document.documentElement.getAttribute("dir");this.removeFake(),this.fakeHandlerCallback=function(){return t.removeFake()},this.fakeHandler=this.container.addEventListener("click",this.fakeHandlerCallback)||!0,this.fakeElem=document.createElement("textarea"),this.fakeElem.style.fontSize="12pt",this.fakeElem.style.border="0",this.fakeElem.style.padding="0",this.fakeElem.style.margin="0",this.fakeElem.style.position="absolute",this.fakeElem.style[e?"right":"left"]="-9999px";var n=window.pageYOffset||document.documentElement.scrollTop;this.fakeElem.style.top=n+"px",this.fakeElem.setAttribute("readonly",""),this.fakeElem.value=this.text,this.container.appendChild(this.fakeElem),this.selectedText=i()(this.fakeElem),this.copyText()}},{key:"removeFake",value:function(){this.fakeHandler&&(this.container.removeEventListener("click",this.fakeHandlerCallback),this.fakeHandler=null,this.fakeHandlerCallback=null),this.fakeElem&&(this.container.removeChild(this.fakeElem),this.fakeElem=null)}},{key:"selectTarget",value:function(){this.selectedText=i()(this.target),this.copyText()}},{key:"copyText",value:function(){var t=void 
0;try{t=document.execCommand(this.action)}catch(e){t=!1}this.handleResult(t)}},{key:"handleResult",value:function(t){this.emitter.emit(t?"success":"error",{action:this.action,text:this.selectedText,trigger:this.trigger,clearSelection:this.clearSelection.bind(this)})}},{key:"clearSelection",value:function(){this.trigger&&this.trigger.focus(),document.activeElement.blur(),window.getSelection().removeAllRanges()}},{key:"destroy",value:function(){this.removeFake()}},{key:"action",set:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:"copy";if(this._action=t,"copy"!==this._action&&"cut"!==this._action)throw new Error('Invalid "action" value, use either "copy" or "cut"')},get:function(){return this._action}},{key:"target",set:function(t){if(void 0!==t){if(!t||"object"!==(void 0===t?"undefined":o(t))||1!==t.nodeType)throw new Error('Invalid "target" value, use a valid Element');if("copy"===this.action&&t.hasAttribute("disabled"))throw new Error('Invalid "target" attribute. Please use "readonly" instead of "disabled" attribute');if("cut"===this.action&&(t.hasAttribute("readonly")||t.hasAttribute("disabled")))throw new Error('Invalid "target" attribute. You can\'t cut text from elements with "readonly" or "disabled" attributes');this._target=t}},get:function(){return this._target}}]),t}(),c=n(1),a=n.n(c),f=n(2),h=n.n(f),l="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(t){return typeof t}:function(t){return t&&"function"==typeof Symbol&&t.constructor===Symbol&&t!==Symbol.prototype?"symbol":typeof t},p=function(){function t(t,e){for(var n=0;n0&&void 0!==arguments[0]?arguments[0]:{};this.action="function"==typeof t.action?t.action:this.defaultAction,this.target="function"==typeof t.target?t.target:this.defaultTarget,this.text="function"==typeof t.text?t.text:this.defaultText,this.container="object"===l(t.container)?t.container:document.body}},{key:"listenClick",value:function(t){var e=this;this.listener=h()(t,"click",(function(t){return e.onClick(t)}))}},{key:"onClick",value:function(t){var e=t.delegateTarget||t.currentTarget;this.clipboardAction&&(this.clipboardAction=null),this.clipboardAction=new u({action:this.action(e),target:this.target(e),text:this.text(e),container:this.container,trigger:e,emitter:this})}},{key:"defaultAction",value:function(t){return b("action",t)}},{key:"defaultTarget",value:function(t){var e=b("target",t);if(e)return document.querySelector(e)}},{key:"defaultText",value:function(t){return b("text",t)}},{key:"destroy",value:function(){this.listener.destroy(),this.clipboardAction&&(this.clipboardAction.destroy(),this.clipboardAction=null)}}],[{key:"isSupported",value:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:["copy","cut"],e="string"==typeof t?[t]:t,n=!!document.queryCommandSupported;return e.forEach((function(t){n=n&&!!document.queryCommandSupported(t)})),n}}]),e}(a.a);function b(t,e){var n="data-clipboard-"+t;if(e.hasAttribute(n))return e.getAttribute(n)}e.default=d}]).default},t.exports=r()},function(t,e,n){"use strict";n.d(e,"a",(function(){return f}));var r=n(0),i=n(25),o=n(24),s=n(11),u=n(10),c=n(35),a={};function f(){for(var t=[],e=0;e0},t.prototype.connect_=function(){r&&!this.connected_&&(document.addEventListener("transitionend",this.onTransitionEnd_),window.addEventListener("resize",this.refresh),u?(this.mutationsObserver_=new 
MutationObserver(this.refresh),this.mutationsObserver_.observe(document,{attributes:!0,childList:!0,characterData:!0,subtree:!0})):(document.addEventListener("DOMSubtreeModified",this.refresh),this.mutationEventsAdded_=!0),this.connected_=!0)},t.prototype.disconnect_=function(){r&&this.connected_&&(document.removeEventListener("transitionend",this.onTransitionEnd_),window.removeEventListener("resize",this.refresh),this.mutationsObserver_&&this.mutationsObserver_.disconnect(),this.mutationEventsAdded_&&document.removeEventListener("DOMSubtreeModified",this.refresh),this.mutationsObserver_=null,this.mutationEventsAdded_=!1,this.connected_=!1)},t.prototype.onTransitionEnd_=function(t){var e=t.propertyName,n=void 0===e?"":e;s.some((function(t){return!!~n.indexOf(t)}))&&this.refresh()},t.getInstance=function(){return this.instance_||(this.instance_=new t),this.instance_},t.instance_=null,t}(),a=function(t,e){for(var n=0,r=Object.keys(e);n0},t}(),g="undefined"!=typeof WeakMap?new WeakMap:new n,O=function t(e){if(!(this instanceof t))throw new TypeError("Cannot call a class as a function.");if(!arguments.length)throw new TypeError("1 argument required, but only 0 present.");var n=c.getInstance(),r=new _(e,n,this);g.set(this,r)};["observe","unobserve","disconnect"].forEach((function(t){O.prototype[t]=function(){var e;return(e=g.get(this))[t].apply(e,arguments)}}));var x=void 0!==i.ResizeObserver?i.ResizeObserver:O;e.a=x}).call(this,n(60))},function(t,e,n){"use strict"; +/*! + * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var r=/["'&<>]/;t.exports=function(t){var e,n=""+t,i=r.exec(n);if(!i)return n;var o="",s=0,u=0;for(s=i.index;s0?t.prototype.schedule.call(this,e,n):(this.delay=n,this.state=e,this.scheduler.flush(this),this)},e.prototype.execute=function(e,n){return n>0||this.closed?t.prototype.execute.call(this,e,n):this._execute(e,n)},e.prototype.requestAsyncId=function(e,n,r){return void 0===r&&(r=0),null!==r&&r>0||null===r&&this.delay>0?t.prototype.requestAsyncId.call(this,e,n,r):e.flush(this)},e}(n(38).a),s=new(function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return Object(r.f)(e,t),e}(n(37).a))(o),u=n(8),c=n(56),a=n(19),f=n(48),h=function(t){function e(e,n,r){void 0===e&&(e=Number.POSITIVE_INFINITY),void 0===n&&(n=Number.POSITIVE_INFINITY);var i=t.call(this)||this;return i.scheduler=r,i._events=[],i._infiniteTimeWindow=!1,i._bufferSize=e<1?1:e,i._windowTime=n<1?1:n,n===Number.POSITIVE_INFINITY?(i._infiniteTimeWindow=!0,i.next=i.nextInfiniteTimeWindow):i.next=i.nextTimeWindow,i}return Object(r.f)(e,t),e.prototype.nextInfiniteTimeWindow=function(e){var n=this._events;n.push(e),n.length>this._bufferSize&&n.shift(),t.prototype.next.call(this,e)},e.prototype.nextTimeWindow=function(e){this._events.push(new l(this._getNow(),e)),this._trimBufferThenGetEvents(),t.prototype.next.call(this,e)},e.prototype._subscribe=function(t){var e,n=this._infiniteTimeWindow,r=n?this._events:this._trimBufferThenGetEvents(),i=this.scheduler,o=r.length;if(this.closed)throw new a.a;if(this.isStopped||this.hasError?e=u.a.EMPTY:(this.observers.push(t),e=new f.a(this,t)),i&&t.add(t=new c.a(t,i)),n)for(var s=0;se&&(o=Math.max(o,i-e)),o>0&&r.splice(0,o),r},e}(i.a),l=function(t,e){this.time=t,this.value=e}},function(t,e,n){"use strict";var r=n(21);function i(t,e){return Object.prototype.hasOwnProperty.call(e,t)}var o=Object.prototype.toString,s=function(){return"[object 
Arguments]"===o.call(arguments)?function(t){return"[object Arguments]"===o.call(t)}:function(t){return i("callee",t)}}(),u=!{toString:null}.propertyIsEnumerable("toString"),c=["constructor","valueOf","isPrototypeOf","toString","propertyIsEnumerable","hasOwnProperty","toLocaleString"],a=function(){return arguments.propertyIsEnumerable("length")}(),f=function(t,e){for(var n=0;n=0;)i(e=c[n],t)&&!f(r,e)&&(r[r.length]=e),n-=1;return r})):Object(r.a)((function(t){return Object(t)!==t?[]:Object.keys(t)}));e.a=h},function(t,e,n){"use strict";n.d(e,"a",(function(){return u}));var r=n(0),i=n(3),o=n(16),s=n(15);function u(t,e,n){return function(r){return r.lift(new c(t,e,n))}}var c=function(){function t(t,e,n){this.nextOrObserver=t,this.error=e,this.complete=n}return t.prototype.call=function(t,e){return e.subscribe(new a(t,this.nextOrObserver,this.error,this.complete))},t}(),a=function(t){function e(e,n,r,i){var u=t.call(this,e)||this;return u._tapNext=o.a,u._tapError=o.a,u._tapComplete=o.a,u._tapError=r||o.a,u._tapComplete=i||o.a,Object(s.a)(n)?(u._context=u,u._tapNext=n):n&&(u._context=n,u._tapNext=n.next||o.a,u._tapError=n.error||o.a,u._tapComplete=n.complete||o.a),u}return Object(r.f)(e,t),e.prototype._next=function(t){try{this._tapNext.call(this._context,t)}catch(t){return void this.destination.error(t)}this.destination.next(t)},e.prototype._error=function(t){try{this._tapError.call(this._context,t)}catch(t){return void this.destination.error(t)}this.destination.error(t)},e.prototype._complete=function(){try{this._tapComplete.call(this._context)}catch(t){return void this.destination.error(t)}return this.destination.complete()},e}(i.a)},function(t,e,n){"use strict";n.d(e,"a",(function(){return o}));var r=n(0),i=n(3);function o(t,e){var n=!1;return arguments.length>=2&&(n=!0),function(r){return r.lift(new s(t,e,n))}}var s=function(){function t(t,e,n){void 0===n&&(n=!1),this.accumulator=t,this.seed=e,this.hasSeed=n}return t.prototype.call=function(t,e){return e.subscribe(new u(t,this.accumulator,this.seed,this.hasSeed))},t}(),u=function(t){function e(e,n,r,i){var o=t.call(this,e)||this;return o.accumulator=n,o._state=r,o._hasState=i,o.index=0,o}return Object(r.f)(e,t),e.prototype._next=function(t){var e=this.destination;if(this._hasState){var n=this.index++,r=void 0;try{r=this.accumulator(this._state,t,n)}catch(t){return void e.error(t)}this._state=r,e.next(r)}else this._state=t,this._hasState=!0,e.next(t)},e}(i.a)},function(t,e,n){"use strict";n.d(e,"a",(function(){return s}));var r=n(0),i=n(3),o=n(8);function s(t){return function(e){return e.lift(new u(t))}}var u=function(){function t(t){this.callback=t}return t.prototype.call=function(t,e){return e.subscribe(new c(t,this.callback))},t}(),c=function(t){function e(e,n){var r=t.call(this,e)||this;return r.add(new o.a(n)),r}return Object(r.f)(e,t),e}(i.a)},function(t,e,n){"use strict";n.d(e,"a",(function(){return o}));var r=n(0),i=function(t){function e(e,n){var r=t.call(this,e,n)||this;return r.scheduler=e,r.work=n,r}return Object(r.f)(e,t),e.prototype.requestAsyncId=function(e,n,r){return void 0===r&&(r=0),null!==r&&r>0?t.prototype.requestAsyncId.call(this,e,n,r):(e.actions.push(this),e.scheduled||(e.scheduled=requestAnimationFrame((function(){return e.flush(void 0)}))))},e.prototype.recycleAsyncId=function(e,n,r){if(void 0===r&&(r=0),null!==r&&r>0||null===r&&this.delay>0)return t.prototype.recycleAsyncId.call(this,e,n,r);0===e.actions.length&&(cancelAnimationFrame(n),e.scheduled=void 0)},e}(n(38).a),o=new(function(t){function e(){return 
null!==t&&t.apply(this,arguments)||this}return Object(r.f)(e,t),e.prototype.flush=function(t){this.active=!0,this.scheduled=void 0;var e,n=this.actions,r=-1,i=n.length;t=t||n.shift();do{if(e=t.execute(t.state,t.delay))break}while(++r0){var s=o.indexOf(n);-1!==s&&o.splice(s,1)}},e.prototype.notifyComplete=function(){},e.prototype._next=function(t){if(0===this.toRespond.length){var e=Object(r.j)([t],this.values);this.project?this._tryProject(e):this.destination.next(e)}},e.prototype._tryProject=function(t){var e;try{e=this.project.apply(this,t)}catch(t){return void this.destination.error(t)}this.destination.next(e)},e}(i.a)},function(t,e,n){"use strict";n.d(e,"a",(function(){return o}));var r=n(0),i=n(3);function o(t,e){return void 0===e&&(e=null),function(n){return n.lift(new s(t,e))}}var s=function(){function t(t,e){this.bufferSize=t,this.startBufferEvery=e,this.subscriberClass=e&&t!==e?c:u}return t.prototype.call=function(t,e){return e.subscribe(new this.subscriberClass(t,this.bufferSize,this.startBufferEvery))},t}(),u=function(t){function e(e,n){var r=t.call(this,e)||this;return r.bufferSize=n,r.buffer=[],r}return Object(r.f)(e,t),e.prototype._next=function(t){var e=this.buffer;e.push(t),e.length==this.bufferSize&&(this.destination.next(e),this.buffer=[])},e.prototype._complete=function(){var e=this.buffer;e.length>0&&this.destination.next(e),t.prototype._complete.call(this)},e}(i.a),c=function(t){function e(e,n,r){var i=t.call(this,e)||this;return i.bufferSize=n,i.startBufferEvery=r,i.buffers=[],i.count=0,i}return Object(r.f)(e,t),e.prototype._next=function(t){var e=this.bufferSize,n=this.startBufferEvery,r=this.buffers,i=this.count;this.count++,i%n==0&&r.push([]);for(var o=r.length;o--;){var s=r[o];s.push(t),s.length===e&&(r.splice(o,1),this.destination.next(s))}},e.prototype._complete=function(){for(var e=this.buffers,n=this.destination;e.length>0;){var r=e.shift();r.length>0&&n.next(r)}t.prototype._complete.call(this)},e}(i.a)},function(t,e,n){"use strict";n.d(e,"a",(function(){return c}));var r=n(39),i=n(55);function o(){return Object(i.a)(1)}function s(){for(var t=[],e=0;e1?r.next(Array.prototype.slice.call(arguments)):r.next(t)}),r,n)}))}},function(t,e,n){"use strict";n.d(e,"a",(function(){return o}));var r=n(0),i=n(3);function o(t){return function(e){return e.lift(new s(t))}}var s=function(){function t(t){this.value=t}return t.prototype.call=function(t,e){return e.subscribe(new u(t,this.value))},t}(),u=function(t){function e(e,n){var r=t.call(this,e)||this;return r.value=n,r}return Object(r.f)(e,t),e.prototype._next=function(t){this.destination.next(this.value)},e}(i.a)},function(t,e,n){"use strict";n.d(e,"a",(function(){return u}));var r=n(6),i=n(25),o=n(55),s=n(35);function u(){for(var t=[],e=0;e1&&"number"==typeof t[t.length-1]&&(n=t.pop())):"number"==typeof c&&(n=t.pop()),!u&&1===t.length&&t[0]instanceof r.a?t[0]:Object(o.a)(n)(Object(s.a)(t,u))}},function(t,e,n){"use strict";n.d(e,"a",(function(){return u}));var r=n(6),i=n(24),o=n(15),s=n(9);function u(t,e,n){return n?u(t,e).pipe(Object(s.a)((function(t){return Object(i.a)(t)?n.apply(void 0,t):n(t)}))):new r.a((function(n){var r,i=function(){for(var t=[],e=0;ethis.total&&this.destination.next(t)},e}(i.a)},function(t,e,n){"use strict";n.d(e,"a",(function(){return u}));var r=n(0),i=n(11),o=n(28),s=n(10);function u(t){return function(e){var n=new c(t),r=e.lift(n);return n.caught=r}}var c=function(){function t(t){this.selector=t}return t.prototype.call=function(t,e){return e.subscribe(new 
a(t,this.selector,this.caught))},t}(),a=function(t){function e(e,n,r){var i=t.call(this,e)||this;return i.selector=n,i.caught=r,i}return Object(r.f)(e,t),e.prototype.error=function(e){if(!this.isStopped){var n=void 0;try{n=this.selector(e,this.caught)}catch(e){return void t.prototype.error.call(this,e)}this._unsubscribeAndRecycle();var r=new o.a(this,void 0,void 0);this.add(r);var i=Object(s.a)(this,n,void 0,void 0,r);i!==r&&this.add(i)}},e}(i.a)},function(t,e,n){"use strict";n.d(e,"a",(function(){return s}));var r=n(0),i=n(3),o=n(53);function s(t,e){return void 0===e&&(e=o.a),function(n){return n.lift(new u(t,e))}}var u=function(){function t(t,e){this.dueTime=t,this.scheduler=e}return t.prototype.call=function(t,e){return e.subscribe(new c(t,this.dueTime,this.scheduler))},t}(),c=function(t){function e(e,n,r){var i=t.call(this,e)||this;return i.dueTime=n,i.scheduler=r,i.debouncedSubscription=null,i.lastValue=null,i.hasValue=!1,i}return Object(r.f)(e,t),e.prototype._next=function(t){this.clearDebounce(),this.lastValue=t,this.hasValue=!0,this.add(this.debouncedSubscription=this.scheduler.schedule(a,this.dueTime,this))},e.prototype._complete=function(){this.debouncedNext(),this.destination.complete()},e.prototype.debouncedNext=function(){if(this.clearDebounce(),this.hasValue){var t=this.lastValue;this.lastValue=null,this.hasValue=!1,this.destination.next(t)}},e.prototype.clearDebounce=function(){var t=this.debouncedSubscription;null!==t&&(this.remove(t),t.unsubscribe(),this.debouncedSubscription=null)},e}(i.a);function a(t){t.debouncedNext()}},function(t,e,n){"use strict";n.d(e,"a",(function(){return o}));var r=n(76),i=n(18);function o(t,e,n){return void 0===e&&(e=i.a),void 0===n&&(n=i.a),Object(r.a)((function(){return t()?e:n}))}},function(t,e,n){"use strict";var r=n(21),i=n(78),o=Object(r.a)((function(t){for(var e=Object(i.a)(t),n=e.length,r=[],o=0;o1)this.connection=null;else{var n=this.connection,r=t._connection;this.connection=null,!r||n&&r!==n||r.unsubscribe()}}else this.connection=null},e}(s.a),l=function(t){function e(e,n){var r=t.call(this)||this;return r.source=e,r.subjectFactory=n,r._refCount=0,r._isComplete=!1,r}return Object(r.f)(e,t),e.prototype._subscribe=function(t){return this.getSubject().subscribe(t)},e.prototype.getSubject=function(){var t=this._subject;return t&&!t.isStopped||(this._subject=this.subjectFactory()),this._subject},e.prototype.connect=function(){var t=this._connection;return t||(this._isComplete=!1,(t=this._connection=new u.a).add(this.source.subscribe(new d(this.getSubject(),this))),t.closed&&(this._connection=null,t=u.a.EMPTY)),t},e.prototype.refCount=function(){return c()(this)},e}(o.a),p={operator:{value:null},_refCount:{value:0,writable:!0},_subject:{value:null,writable:!0},_connection:{value:null,writable:!0},_subscribe:{value:(a=l.prototype)._subscribe},_isComplete:{value:a._isComplete,writable:!0},getSubject:{value:a.getSubject},connect:{value:a.connect},refCount:{value:a.refCount}},d=function(t){function e(e,n){var r=t.call(this,e)||this;return r.connectable=n,r}return Object(r.f)(e,t),e.prototype._error=function(e){this._unsubscribe(),t.prototype._error.call(this,e)},e.prototype._complete=function(){this.connectable._isComplete=!0,this._unsubscribe(),t.prototype._complete.call(this)},e.prototype._unsubscribe=function(){var t=this.connectable;if(t){this.connectable=null;var e=t._connection;t._refCount=0,t._subject=null,t._connection=null,e&&e.unsubscribe()}},e}(i.b),b=(function(){function t(t){this.connectable=t}t.prototype.call=function(t,e){var 
n=this.connectable;n._refCount++;var r=new b(t,n),i=e.subscribe(r);return r.closed||(r.connection=n.connect()),i}}(),function(t){function e(e,n){var r=t.call(this,e)||this;return r.connectable=n,r}return Object(r.f)(e,t),e.prototype._unsubscribe=function(){var t=this.connectable;if(t){this.connectable=null;var e=t._refCount;if(e<=0)this.connection=null;else if(t._refCount=e-1,e>1)this.connection=null;else{var n=this.connection,r=t._connection;this.connection=null,!r||n&&r!==n||r.unsubscribe()}}else this.connection=null},e}(s.a));var v=function(){function t(t,e){this.subjectFactory=t,this.selector=e}return t.prototype.call=function(t,e){var n=this.selector,r=this.subjectFactory(),i=n(r).subscribe(t);return i.add(e.subscribe(r)),i},t}();function y(){return new i.a}function m(){return function(t){return c()((e=y,function(t){var r;if(r="function"==typeof e?e:function(){return e},"function"==typeof n)return t.lift(new v(r,n));var i=Object.create(t,p);return i.source=t,i.subjectFactory=r,i})(t));var e,n}}},function(t,e,n){"use strict";var r=n(21);function i(t){return t}var o=Object(r.a)(i);e.a=o},function(t,e,n){"use strict";n.d(e,"a",(function(){return g}));var r=n(0),i=n(13),o=n(6),s=n(3),u=n(9);function c(t,e){return new b({method:"GET",url:t,headers:e})}function a(t,e,n){return new b({method:"POST",url:t,body:e,headers:n})}function f(t,e){return new b({method:"DELETE",url:t,headers:e})}function h(t,e,n){return new b({method:"PUT",url:t,body:e,headers:n})}function l(t,e,n){return new b({method:"PATCH",url:t,body:e,headers:n})}var p=Object(u.a)((function(t,e){return t.response}));function d(t,e){return p(new b({method:"GET",url:t,responseType:"json",headers:e}))}var b=function(t){function e(e){var n=t.call(this)||this,r={async:!0,createXHR:function(){return this.crossDomain?function(){if(i.a.XMLHttpRequest)return new i.a.XMLHttpRequest;if(i.a.XDomainRequest)return new i.a.XDomainRequest;throw new Error("CORS is not supported by your browser")}():function(){if(i.a.XMLHttpRequest)return new i.a.XMLHttpRequest;var t=void 0;try{for(var e=["Msxml2.XMLHTTP","Microsoft.XMLHTTP","Msxml2.XMLHTTP.4.0"],n=0;n<3;n++)try{if(t=e[n],new i.a.ActiveXObject(t))break}catch(t){}return new i.a.ActiveXObject(t)}catch(t){throw new Error("XMLHttpRequest is not supported by your browser")}}()},crossDomain:!0,withCredentials:!1,headers:{},method:"GET",responseType:"json",timeout:0};if("string"==typeof e)r.url=e;else for(var o in e)e.hasOwnProperty(o)&&(r[o]=e[o]);return n.request=r,n}var n;return Object(r.f)(e,t),e.prototype._subscribe=function(t){return new v(t,this.request)},e.create=((n=function(t){return new e(t)}).get=c,n.post=a,n.delete=f,n.put=h,n.patch=l,n.getJSON=d,n),e}(o.a),v=function(t){function e(e,n){var r=t.call(this,e)||this;r.request=n,r.done=!1;var o=n.headers=n.headers||{};return n.crossDomain||r.getHeader(o,"X-Requested-With")||(o["X-Requested-With"]="XMLHttpRequest"),r.getHeader(o,"Content-Type")||i.a.FormData&&n.body instanceof i.a.FormData||void 0===n.body||(o["Content-Type"]="application/x-www-form-urlencoded; charset=UTF-8"),n.body=r.serializeBody(n.body,r.getHeader(n.headers,"Content-Type")),r.send(),r}return Object(r.f)(e,t),e.prototype.next=function(t){this.done=!0;var e,n=this.xhr,r=this.request,i=this.destination;try{e=new y(t,n,r)}catch(t){return i.error(t)}i.next(e)},e.prototype.send=function(){var t=this.request,e=this.request,n=e.user,r=e.method,i=e.url,o=e.async,s=e.password,u=e.headers,c=e.body;try{var 
a=this.xhr=t.createXHR();this.setupEvents(a,t),n?a.open(r,i,o,n,s):a.open(r,i,o),o&&(a.timeout=t.timeout,a.responseType=t.responseType),"withCredentials"in a&&(a.withCredentials=!!t.withCredentials),this.setHeaders(a,u),c?a.send(c):a.send()}catch(t){this.error(t)}},e.prototype.serializeBody=function(t,e){if(!t||"string"==typeof t)return t;if(i.a.FormData&&t instanceof i.a.FormData)return t;if(e){var n=e.indexOf(";");-1!==n&&(e=e.substring(0,n))}switch(e){case"application/x-www-form-urlencoded":return Object.keys(t).map((function(e){return encodeURIComponent(e)+"="+encodeURIComponent(t[e])})).join("&");case"application/json":return JSON.stringify(t);default:return t}},e.prototype.setHeaders=function(t,e){for(var n in e)e.hasOwnProperty(n)&&t.setRequestHeader(n,e[n])},e.prototype.getHeader=function(t,e){for(var n in t)if(n.toLowerCase()===e.toLowerCase())return t[n]},e.prototype.setupEvents=function(t,e){var n=e.progressSubscriber;function r(t){var e,n=r,i=n.subscriber,o=n.progressSubscriber,s=n.request;o&&o.error(t);try{e=new _(this,s)}catch(t){e=t}i.error(e)}if(t.ontimeout=r,r.request=e,r.subscriber=this,r.progressSubscriber=n,t.upload&&"withCredentials"in t){var o,s;if(n)o=function(t){o.progressSubscriber.next(t)},i.a.XDomainRequest?t.onprogress=o:t.upload.onprogress=o,o.progressSubscriber=n;s=function(t){var e,n=s,r=n.progressSubscriber,i=n.subscriber,o=n.request;r&&r.error(t);try{e=new m("ajax error",this,o)}catch(t){e=t}i.error(e)},t.onerror=s,s.request=e,s.subscriber=this,s.progressSubscriber=n}function u(t){}function c(t){var e=c,n=e.subscriber,r=e.progressSubscriber,i=e.request;if(4===this.readyState){var o=1223===this.status?204:this.status,s="text"===this.responseType?this.response||this.responseText:this.response;if(0===o&&(o=s?200:0),o<400)r&&r.complete(),n.next(t),n.complete();else{r&&r.error(t);var u=void 0;try{u=new m("ajax error "+o,this,i)}catch(t){u=t}n.error(u)}}}t.onreadystatechange=u,u.subscriber=this,u.progressSubscriber=n,u.request=e,t.onload=c,c.subscriber=this,c.progressSubscriber=n,c.request=e},e.prototype.unsubscribe=function(){var e=this.done,n=this.xhr;!e&&n&&4!==n.readyState&&"function"==typeof n.abort&&n.abort(),t.prototype.unsubscribe.call(this)},e}(s.a),y=function(t,e,n){this.originalEvent=t,this.xhr=e,this.request=n,this.status=e.status,this.responseType=e.responseType||n.responseType,this.response=w(this.responseType,e)},m=function(){function t(t,e,n){return Error.call(this),this.message=t,this.name="AjaxError",this.xhr=e,this.request=n,this.status=e.status,this.responseType=e.responseType||n.responseType,this.response=w(this.responseType,e),this}return t.prototype=Object.create(Error.prototype),t}();function w(t,e){switch(t){case"json":return function(t){return"response"in t?t.responseType?t.response:JSON.parse(t.response||t.responseText||"null"):JSON.parse(t.responseText||"null")}(e);case"xml":return e.responseXML;case"text":default:return"response"in e?e.response:e.responseText}}var _=function(){function t(t,e){return m.call(this,"ajax timeout",t,e),this.name="AjaxTimeoutError",this}return t.prototype=Object.create(m.prototype),t}(),g=b.create},function(t,e,n){"use strict";n.d(e,"a",(function(){return u}));var r=n(0),i=n(3),o=function(){function t(){return Error.call(this),this.message="argument out of range",this.name="ArgumentOutOfRangeError",this}return t.prototype=Object.create(Error.prototype),t}(),s=n(18);function u(t){return function(e){return 0===t?s.a:e.lift(new c(t))}}var c=function(){function t(t){if(this.total=t,this.total<0)throw new 
o}return t.prototype.call=function(t,e){return e.subscribe(new a(t,this.total))},t}(),a=function(t){function e(e,n){var r=t.call(this,e)||this;return r.total=n,r.count=0,r}return Object(r.f)(e,t),e.prototype._next=function(t){var e=this.total,n=++this.count;n<=e&&(this.destination.next(t),n===e&&(this.destination.complete(),this.unsubscribe()))},e}(i.a)},function(t,e,n){"use strict";n.d(e,"a",(function(){return u}));var r=n(0),i=n(53);var o=n(3),s=n(41);function u(t,e){void 0===e&&(e=i.a);var n,r=(n=t)instanceof Date&&!isNaN(+n)?+t-e.now():Math.abs(t);return function(t){return t.lift(new c(r,e))}}var c=function(){function t(t,e){this.delay=t,this.scheduler=e}return t.prototype.call=function(t,e){return e.subscribe(new a(t,this.delay,this.scheduler))},t}(),a=function(t){function e(e,n,r){var i=t.call(this,e)||this;return i.delay=n,i.scheduler=r,i.queue=[],i.active=!1,i.errored=!1,i}return Object(r.f)(e,t),e.dispatch=function(t){for(var e=t.source,n=e.queue,r=t.scheduler,i=t.destination;n.length>0&&n[0].time-r.now()<=0;)n.shift().notification.observe(i);if(n.length>0){var o=Math.max(0,n[0].time-r.now());this.schedule(t,o)}else e.isStopped?(e.destination.complete(),e.active=!1):(this.unsubscribe(),e.active=!1)},e.prototype._schedule=function(t){this.active=!0,this.destination.add(t.schedule(e.dispatch,this.delay,{source:this,destination:this.destination,scheduler:t}))},e.prototype.scheduleNotification=function(t){if(!0!==this.errored){var e=this.scheduler,n=new f(e.now()+this.delay,t);this.queue.push(n),!1===this.active&&this._schedule(e)}},e.prototype._next=function(t){this.scheduleNotification(s.a.createNext(t))},e.prototype._error=function(t){this.errored=!0,this.queue=[],this.destination.error(t),this.unsubscribe()},e.prototype._complete=function(){0===this.queue.length&&this.destination.complete(),this.unsubscribe()},e}(o.a),f=function(t,e){this.time=t,this.notification=e}}]]); +//# sourceMappingURL=vendor.f81b9e8b.min.js.map \ No newline at end of file diff --git a/assets/javascripts/vendor.f81b9e8b.min.js.map b/assets/javascripts/vendor.f81b9e8b.min.js.map new file mode 100644 index 00000000..44b16038 --- /dev/null +++ b/assets/javascripts/vendor.f81b9e8b.min.js.map @@ -0,0 +1 @@ 
+{"version":3,"sources":["webpack:///./node_modules/tslib/tslib.es6.js","webpack:///./node_modules/rxjs/dist/esm5/internal/Subscriber.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/deferred.js","webpack:///./node_modules/rxjs/dist/esm5/internal/asyncIteratorFrom.js","webpack:///./node_modules/rxjs/dist/esm5/internal/Observable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/toSubscriber.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/canReportError.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/UnsubscriptionError.js","webpack:///./node_modules/rxjs/dist/esm5/internal/Subscription.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/map.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/subscribeToResult.js","webpack:///./node_modules/rxjs/dist/esm5/internal/OuterSubscriber.js","webpack:///./node_modules/rxjs/dist/esm5/internal/config.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/root.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/isFunction.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/noop.js","webpack:///./node_modules/rxjs/dist/esm5/internal/symbol/observable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/empty.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/ObjectUnsubscribedError.js","webpack:///./node_modules/ramda/es/internal/_isPlaceholder.js","webpack:///./node_modules/ramda/es/internal/_curry1.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/hostReportError.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/isArray.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/isScheduler.js","webpack:///./node_modules/rxjs/dist/esm5/internal/Subject.js","webpack:///./node_modules/rxjs/dist/esm5/internal/symbol/iterator.js","webpack:///./node_modules/rxjs/dist/esm5/internal/InnerSubscriber.js","webpack:///./node_modules/rxjs/dist/esm5/internal/symbol/rxSubscriber.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/switchMap.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduled/scheduleArray.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/fromArray.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduled/scheduled.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/isInteropObservable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduled/scheduleObservable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduled/schedulePromise.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/isIterable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduled/scheduleIterable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduled/scheduleAsyncIterable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/from.js","webpack:///./node_modules/rxjs/dist/esm5/internal/Scheduler.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduler/AsyncScheduler.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduler/AsyncAction.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduler/Action.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/of.js","webpack:///./node_modules/rxjs/dist/esm5/internal/Observer.js","webpack:///./node_modules/rxjs/dist/esm5/internal/Notification.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/throwError.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/pipe.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/distinctUntilChanged.js","w
ebpack:///./node_modules/rxjs/dist/esm5/internal/util/isObject.js","webpack:///./node_modules/rxjs/dist/esm5/internal/SubjectSubscription.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/identity.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/subscribeToArray.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/isArrayLike.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/isPromise.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduler/async.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/subscribeToAsyncIterable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/subscribeTo.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/subscribeToObservable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/subscribeToPromise.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/subscribeToIterable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/mergeMap.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/mergeAll.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/observeOn.js","webpack:///./node_modules/clipboard/dist/clipboard.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/combineLatest.js","webpack:///(webpack)/buildin/global.js","webpack:///./node_modules/resize-observer-polyfill/dist/ResizeObserver.es.js","webpack:///./node_modules/escape-html/index.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/defer.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduler/QueueAction.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduler/queue.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduler/QueueScheduler.js","webpack:///./node_modules/rxjs/dist/esm5/internal/ReplaySubject.js","webpack:///./node_modules/ramda/es/internal/_has.js","webpack:///./node_modules/ramda/es/internal/_isArguments.js","webpack:///./node_modules/ramda/es/keys.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/tap.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/scan.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/finalize.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduler/AnimationFrameAction.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduler/animationFrame.js","webpack:///./node_modules/rxjs/dist/esm5/internal/scheduler/AnimationFrameScheduler.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/shareReplay.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/distinctUntilKeyChanged.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/withLatestFrom.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/bufferCount.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/concatAll.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/concat.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/startWith.js","webpack:///./node_modules/ramda/es/reverse.js","webpack:///./node_modules/ramda/es/internal/_isString.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/fromEvent.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/mapTo.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/merge.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/fromEventPattern.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/filter.js","webpack:///./node_modules/rxjs/dist/esm5/internal/BehaviorSubject.js","webpack:///./node_modules/rxjs/
dist/esm5/internal/operators/pluck.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/throttle.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/switchMapTo.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/sample.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/never.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/skip.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/catchError.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/debounceTime.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/iif.js","webpack:///./node_modules/ramda/es/values.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/refCount.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/ConnectableObservable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/multicast.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/share.js","webpack:///./node_modules/ramda/es/internal/_identity.js","webpack:///./node_modules/ramda/es/identity.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/dom/AjaxObservable.js","webpack:///./node_modules/rxjs/dist/esm5/internal/observable/dom/ajax.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/ArgumentOutOfRangeError.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/take.js","webpack:///./node_modules/rxjs/dist/esm5/internal/operators/delay.js","webpack:///./node_modules/rxjs/dist/esm5/internal/util/isDate.js"],"names":["extendStatics","d","b","Object","setPrototypeOf","__proto__","Array","p","hasOwnProperty","__extends","__","this","constructor","prototype","create","__assign","assign","t","s","i","n","arguments","length","call","apply","__awaiter","thisArg","_arguments","P","generator","Promise","resolve","reject","fulfilled","value","step","next","e","rejected","result","done","then","__generator","body","f","y","g","_","label","sent","trys","ops","verb","Symbol","iterator","v","op","TypeError","pop","push","__values","o","m","__read","r","ar","error","__spread","concat","__spreadArrays","il","k","a","j","jl","__await","__asyncGenerator","asyncIterator","q","resume","fulfill","settle","shift","__asyncValues","Subscriber","_super","destinationOrNext","complete","_this","syncErrorValue","syncErrorThrown","syncErrorThrowable","isStopped","destination","add","SafeSubscriber","subscriber","_next","err","_error","_complete","unsubscribe","closed","_unsubscribeAndRecycle","_parentOrParents","_parentSubscriber","observerOrNext","context","bind","_context","useDeprecatedSynchronousErrorHandling","__tryOrSetError","__tryOrUnsub","wrappedComplete","fn","parent","Error","_unsubscribe","Deferred","promise","asyncIteratorFrom","source","deferreds","values","hasError","completed","subs","_a","subscribe","undefined","coroutine","Observable","_isScalar","_subscribe","lift","operator","observable","sink","nextOrObserver","rxSubscriber","toSubscriber","config","_trySubscribe","observer","closed_1","canReportError","console","warn","forEach","promiseCtor","getPromiseCtor","subscription","pipe","operations","_i","toPromise","x","UnsubscriptionError","UnsubscriptionErrorImpl","errors","message","map","toString","join","name","Subscription","_subscriptions","empty","remove","index","isFunction","flattenUnsubscriptionErrors","isArray","len","sub","isObject","teardown","EMPTY","tmp","indexOf","subscriptions","subscriptionIndex","splice","reduce","errs","project","MapOperator","MapSubscriber","count","subs
cribeToResult","outerSubscriber","outerValue","outerIndex","innerSubscriber","OuterSubscriber","notifyNext","innerValue","innerIndex","innerSub","notifyError","notifyComplete","_enable_super_gross_mode_that_will_cause_bad_things","stack","log","__window","window","__self","self","WorkerGlobalScope","_root","global","noop","ObjectUnsubscribedError","ObjectUnsubscribedErrorImpl","_isPlaceholder","_curry1","f1","hostReportError","setTimeout","isScheduler","schedule","SubjectSubscriber","Subject","observers","thrownError","subject","AnonymousSubject","copy","slice","asObservable","InnerSubscriber","Math","random","switchMap","resultSelector","ii","SwitchMapOperator","SwitchMapSubscriber","_innerSub","innerSubscription","scheduleArray","input","scheduler","fromArray","scheduled","isInteropObservable","scheduleObservable","isPromise","schedulePromise","isArrayLike","isIterable","return","scheduleIterable","scheduleAsyncIterable","from","subscribeTo","Scheduler","SchedulerAction","now","work","delay","state","Date","AsyncScheduler","delegate","actions","active","flush","action","execute","AsyncAction","pending","id","recycleAsyncId","requestAsyncId","setInterval","clearInterval","_execute","errored","errorValue","Action","of","args","NotificationKind","dispatch","Notification","kind","hasValue","observe","do","accept","toObservable","createNext","undefinedValueNotification","createError","createComplete","completeNotification","fns","pipeFromArray","prev","distinctUntilChanged","compare","keySelector","DistinctUntilChangedOperator","DistinctUntilChangedSubscriber","hasKey","key","SubjectSubscription","subscriberIndex","identity","subscribeToArray","array","async","subscribeToAsyncIterable","asyncIterable","asyncIterable_1","asyncIterable_1_1","e_1","e_1_1","_b","process","catch","obj","obs","iterable","item","MergeMapOperator","concurrent","Number","POSITIVE_INFINITY","MergeMapSubscriber","hasCompleted","buffer","_tryNext","ish","mergeAll","mergeMap","observeOn","ObserveOnOperator","ObserveOnSubscriber","arg","notification","scheduleMessage","ObserveOnMessage","factory","modules","installedModules","__webpack_require__","moduleId","exports","module","l","c","getter","defineProperty","enumerable","get","toStringTag","mode","__esModule","ns","object","property","element","selectedText","nodeName","focus","isReadOnly","hasAttribute","setAttribute","select","setSelectionRange","removeAttribute","selection","getSelection","range","document","createRange","selectNodeContents","removeAllRanges","addRange","E","on","callback","ctx","once","listener","off","emit","data","evtArr","evts","liveEvents","TinyEmitter","is","target","type","string","node","addEventListener","destroy","removeEventListener","listenNode","nodeList","listenNodeList","selector","listenSelector","HTMLElement","nodeType","String","closest","_delegate","useCapture","listenerFn","delegateTarget","elements","querySelectorAll","Element","matches","proto","matchesSelector","mozMatchesSelector","msMatchesSelector","oMatchesSelector","webkitMatchesSelector","parentNode","__webpack_exports__","src_select","select_default","_typeof","_createClass","defineProperties","props","descriptor","configurable","writable","Constructor","protoProps","staticProps","clipboard_action","ClipboardAction","options","instance","_classCallCheck","resolveOptions","initSelection","container","emitter","text","trigger","selectFake","selectTarget","isRTL","documentElement","getAttribute","removeFake","fakeHandlerCallback","fakeHandler","fakeElem","createElement","styl
e","fontSize","border","padding","margin","position","yPosition","pageYOffset","scrollTop","top","appendChild","copyText","removeChild","succeeded","execCommand","handleResult","clearSelection","activeElement","blur","set","_action","_target","tiny_emitter","tiny_emitter_default","listen","listen_default","clipboard_typeof","clipboard_createClass","clipboard_Clipboard","_Emitter","Clipboard","clipboard_classCallCheck","ReferenceError","_possibleConstructorReturn","getPrototypeOf","listenClick","subClass","superClass","_inherits","defaultAction","defaultTarget","defaultText","_this2","onClick","currentTarget","clipboardAction","getAttributeValue","querySelector","support","queryCommandSupported","suffix","attribute","NONE","combineLatest","observables","CombineLatestOperator","CombineLatestSubscriber","toRespond","unused","oldVal","_tryResultSelector","Function","MapShim","Map","getIndex","arr","some","entry","class_1","__entries__","delete","entries","has","clear","isBrowser","global$1","requestAnimationFrame$1","requestAnimationFrame","transitionKeys","mutationObserverSupported","MutationObserver","ResizeObserverController","connected_","mutationEventsAdded_","mutationsObserver_","observers_","onTransitionEnd_","refresh","leadingCall","trailingCall","lastCallTime","resolvePending","proxy","timeoutCallback","timeStamp","throttle","addObserver","connect_","removeObserver","disconnect_","updateObservers_","activeObservers","filter","gatherActive","hasActive","broadcastActive","attributes","childList","characterData","subtree","disconnect","propertyName","getInstance","instance_","defineConfigurable","keys","getWindowOf","ownerDocument","defaultView","emptyRect","createRectInit","toFloat","parseFloat","getBordersSize","styles","positions","size","getHTMLElementContentRect","clientWidth","clientHeight","getComputedStyle","paddings","positions_1","getPaddings","horizPad","left","right","vertPad","bottom","width","height","boxSizing","round","isDocumentElement","vertScrollbar","horizScrollbar","abs","isSVGGraphicsElement","SVGGraphicsElement","SVGElement","getBBox","getContentRect","bbox","getSVGContentRect","ResizeObservation","broadcastWidth","broadcastHeight","contentRect_","isActive","rect","broadcastRect","ResizeObserverEntry","rectInit","Constr","contentRect","DOMRectReadOnly","ResizeObserverSPI","controller","callbackCtx","activeObservations_","observations_","callback_","controller_","callbackCtx_","observations","unobserve","clearActive","observation","WeakMap","ResizeObserver","method","matchHtmlRegExp","escape","str","match","exec","html","lastIndex","charCodeAt","substring","defer","observableFactory","QueueAction","queue","QueueScheduler","ReplaySubject","bufferSize","windowTime","_events","_infiniteTimeWindow","_bufferSize","_windowTime","nextInfiniteTimeWindow","nextTimeWindow","ReplayEvent","_getNow","_trimBufferThenGetEvents","eventsCount","spliceCount","time","max","_has","prop","hasEnumBug","propertyIsEnumerable","nonEnumerableProps","hasArgsEnumBug","contains","list","idx","nIdx","ks","checkArgsLength","tap","DoOperator","TapSubscriber","_tapNext","_tapError","_tapComplete","scan","accumulator","seed","hasSeed","ScanOperator","ScanSubscriber","_state","_hasState","finalize","FinallyOperator","FinallySubscriber","AnimationFrameAction","cancelAnimationFrame","animationFrame","AnimationFrameScheduler","shareReplay","configOrBufferSize","refCount","_c","useRefCount","isComplete","shareReplayOperator","distinctUntilKeyChanged","withLatestFrom","WithLatestFromOperator","WithLatestFr
omSubscriber","found","_tryProject","bufferCount","startBufferEvery","BufferCountOperator","subscriberClass","BufferSkipCountSubscriber","BufferCountSubscriber","buffers","concatAll","startWith","split","reverse","fromEvent","eventName","setupSubscription","sourceObj","handler","isEventTarget","source_1","isJQueryStyleEventEmitter","source_2","addListener","removeListener","isNodeStyleEventEmitter","source_3","mapTo","MapToOperator","MapToSubscriber","merge","last","fromEventPattern","addHandler","removeHandler","retValue","predicate","FilterOperator","FilterSubscriber","BehaviorSubject","_value","getValue","pluck","properties","currentProp","defaultThrottleConfig","leading","trailing","durationSelector","ThrottleOperator","ThrottleSubscriber","_leading","_trailing","_sendValue","_hasValue","_throttled","send","duration","tryDurationSelector","throttlingDone","switchMapTo","innerObservable","sample","notifier","SampleOperator","sampleSubscriber","SampleSubscriber","emitValue","NEVER","skip","SkipOperator","total","SkipSubscriber","catchError","CatchOperator","caught","CatchSubscriber","err2","debounceTime","dueTime","DebounceTimeOperator","DebounceTimeSubscriber","debouncedSubscription","lastValue","clearDebounce","dispatchNext","debouncedNext","iif","condition","trueResult","falseResult","vals","RefCountOperator","connectableProto","connectable","_refCount","refCounter","connection","connect","RefCountSubscriber","sharedConnection","_connection","ConnectableObservable","subjectFactory","_isComplete","getSubject","_subject","connectableObservableDescriptor","ConnectableSubscriber","MulticastOperator","shareSubjectFactory","share","subjectOrSubjectFactory","_identity","ajaxGet","url","headers","ajaxPost","ajaxDelete","ajaxPut","ajaxPatch","mapResponse","response","ajaxGetJSON","responseType","AjaxObservable","urlOrRequest","request","createXHR","crossDomain","root","XMLHttpRequest","XDomainRequest","getCORSRequest","progId","progIds","ActiveXObject","getXMLHttpRequest","withCredentials","timeout","post","put","patch","getJSON","AjaxSubscriber","getHeader","FormData","serializeBody","xhr","AjaxResponse","user","password","setupEvents","open","setHeaders","contentType","splitIndex","encodeURIComponent","JSON","stringify","setRequestHeader","headerName","toLowerCase","progressSubscriber","xhrTimeout","AjaxTimeoutError","ontimeout","upload","xhrProgress_1","xhrError_1","onprogress","AjaxError","onerror","xhrReadyStateChange","xhrLoad","readyState","status_1","status","responseText","onreadystatechange","onload","abort","originalEvent","parseXhrResponse","AjaxErrorImpl","parse","parseJson","responseXML","AjaxTimeoutErrorImpl","ajax","ArgumentOutOfRangeError","ArgumentOutOfRangeErrorImpl","take","TakeOperator","TakeSubscriber","delayFor","isNaN","DelayOperator","DelaySubscriber","delay_1","_schedule","scheduleNotification","DelayMessage"],"mappings":"sFAAA;;;;;;;;;;;;;;;AAgBA,IAAIA,EAAgB,SAASC,EAAGC,GAI5B,OAHAF,EAAgBG,OAAOC,gBAClB,CAAEC,UAAW,cAAgBC,OAAS,SAAUL,EAAGC,GAAKD,EAAEI,UAAYH,IACvE,SAAUD,EAAGC,GAAK,IAAK,IAAIK,KAAKL,EAAOA,EAAEM,eAAeD,KAAIN,EAAEM,GAAKL,EAAEK,MACpDN,EAAGC,IAGrB,SAASO,EAAUR,EAAGC,GAEzB,SAASQ,IAAOC,KAAKC,YAAcX,EADnCD,EAAcC,EAAGC,GAEjBD,EAAEY,UAAkB,OAANX,EAAaC,OAAOW,OAAOZ,IAAMQ,EAAGG,UAAYX,EAAEW,UAAW,IAAIH,GAG5E,IAAIK,EAAW,WAQlB,OAPAA,EAAWZ,OAAOa,QAAU,SAAkBC,GAC1C,IAAK,IAAIC,EAAGC,EAAI,EAAGC,EAAIC,UAAUC,OAAQH,EAAIC,EAAGD,IAE5C,IAAK,IAAIZ,KADTW,EAAIG,UAAUF,GACOhB,OAAOU,UAAUL,eAAee,KAAKL,EAAGX,KAAIU,EAAEV,GAAKW,EAAEX,IAE9E,OAAOU,IAEKO,MAAMb,KAAMU,YA8BzB,SAASI,EAAUC,EAASC,EAAYC,EAAG
C,GAE9C,OAAO,IAAKD,IAAMA,EAAIE,WAAU,SAAUC,EAASC,GAC/C,SAASC,EAAUC,GAAS,IAAMC,EAAKN,EAAUO,KAAKF,IAAW,MAAOG,GAAKL,EAAOK,IACpF,SAASC,EAASJ,GAAS,IAAMC,EAAKN,EAAiB,MAAEK,IAAW,MAAOG,GAAKL,EAAOK,IACvF,SAASF,EAAKI,GAJlB,IAAeL,EAIaK,EAAOC,KAAOT,EAAQQ,EAAOL,QAJ1CA,EAIyDK,EAAOL,MAJhDA,aAAiBN,EAAIM,EAAQ,IAAIN,GAAE,SAAUG,GAAWA,EAAQG,OAITO,KAAKR,EAAWK,GAClGH,GAAMN,EAAYA,EAAUL,MAAME,EAASC,GAAc,KAAKS,WAI/D,SAASM,EAAYhB,EAASiB,GACjC,IAAsGC,EAAGC,EAAG5B,EAAG6B,EAA3GC,EAAI,CAAEC,MAAO,EAAGC,KAAM,WAAa,GAAW,EAAPhC,EAAE,GAAQ,MAAMA,EAAE,GAAI,OAAOA,EAAE,IAAOiC,KAAM,GAAIC,IAAK,IAChG,OAAOL,EAAI,CAAEV,KAAMgB,EAAK,GAAI,MAASA,EAAK,GAAI,OAAUA,EAAK,IAAwB,mBAAXC,SAA0BP,EAAEO,OAAOC,UAAY,WAAa,OAAO3C,OAAUmC,EACvJ,SAASM,EAAKhC,GAAK,OAAO,SAAUmC,GAAK,OACzC,SAAcC,GACV,GAAIZ,EAAG,MAAM,IAAIa,UAAU,mCAC3B,KAAOV,GAAG,IACN,GAAIH,EAAI,EAAGC,IAAM5B,EAAY,EAARuC,EAAG,GAASX,EAAU,OAAIW,EAAG,GAAKX,EAAS,SAAO5B,EAAI4B,EAAU,SAAM5B,EAAEM,KAAKsB,GAAI,GAAKA,EAAET,SAAWnB,EAAIA,EAAEM,KAAKsB,EAAGW,EAAG,KAAKhB,KAAM,OAAOvB,EAE3J,OADI4B,EAAI,EAAG5B,IAAGuC,EAAK,CAAS,EAARA,EAAG,GAAQvC,EAAEiB,QACzBsB,EAAG,IACP,KAAK,EAAG,KAAK,EAAGvC,EAAIuC,EAAI,MACxB,KAAK,EAAc,OAAXT,EAAEC,QAAgB,CAAEd,MAAOsB,EAAG,GAAIhB,MAAM,GAChD,KAAK,EAAGO,EAAEC,QAASH,EAAIW,EAAG,GAAIA,EAAK,CAAC,GAAI,SACxC,KAAK,EAAGA,EAAKT,EAAEI,IAAIO,MAAOX,EAAEG,KAAKQ,MAAO,SACxC,QACI,KAAMzC,EAAI8B,EAAEG,MAAMjC,EAAIA,EAAEK,OAAS,GAAKL,EAAEA,EAAEK,OAAS,KAAkB,IAAVkC,EAAG,IAAsB,IAAVA,EAAG,IAAW,CAAET,EAAI,EAAG,SACjG,GAAc,IAAVS,EAAG,MAAcvC,GAAMuC,EAAG,GAAKvC,EAAE,IAAMuC,EAAG,GAAKvC,EAAE,IAAM,CAAE8B,EAAEC,MAAQQ,EAAG,GAAI,MAC9E,GAAc,IAAVA,EAAG,IAAYT,EAAEC,MAAQ/B,EAAE,GAAI,CAAE8B,EAAEC,MAAQ/B,EAAE,GAAIA,EAAIuC,EAAI,MAC7D,GAAIvC,GAAK8B,EAAEC,MAAQ/B,EAAE,GAAI,CAAE8B,EAAEC,MAAQ/B,EAAE,GAAI8B,EAAEI,IAAIQ,KAAKH,GAAK,MACvDvC,EAAE,IAAI8B,EAAEI,IAAIO,MAChBX,EAAEG,KAAKQ,MAAO,SAEtBF,EAAKb,EAAKpB,KAAKG,EAASqB,GAC1B,MAAOV,GAAKmB,EAAK,CAAC,EAAGnB,GAAIQ,EAAI,EAAK,QAAUD,EAAI3B,EAAI,EACtD,GAAY,EAARuC,EAAG,GAAQ,MAAMA,EAAG,GAAI,MAAO,CAAEtB,MAAOsB,EAAG,GAAKA,EAAG,QAAK,EAAQhB,MAAM,GArB9BL,CAAK,CAACf,EAAGmC,MA6BtD,SAASK,EAASC,GACrB,IAAI3C,EAAsB,mBAAXmC,QAAyBA,OAAOC,SAAUQ,EAAI5C,GAAK2C,EAAE3C,GAAIC,EAAI,EAC5E,GAAI2C,EAAG,OAAOA,EAAEvC,KAAKsC,GACrB,GAAIA,GAAyB,iBAAbA,EAAEvC,OAAqB,MAAO,CAC1Cc,KAAM,WAEF,OADIyB,GAAK1C,GAAK0C,EAAEvC,SAAQuC,OAAI,GACrB,CAAE3B,MAAO2B,GAAKA,EAAE1C,KAAMqB,MAAOqB,KAG5C,MAAM,IAAIJ,UAAUvC,EAAI,0BAA4B,mCAGjD,SAAS6C,EAAOF,EAAGzC,GACtB,IAAI0C,EAAsB,mBAAXT,QAAyBQ,EAAER,OAAOC,UACjD,IAAKQ,EAAG,OAAOD,EACf,IAAmBG,EAAY3B,EAA3BlB,EAAI2C,EAAEvC,KAAKsC,GAAOI,EAAK,GAC3B,IACI,WAAc,IAAN7C,GAAgBA,KAAM,MAAQ4C,EAAI7C,EAAEiB,QAAQI,MAAMyB,EAAGN,KAAKK,EAAE9B,OAExE,MAAOgC,GAAS7B,EAAI,CAAE6B,MAAOA,GAC7B,QACI,IACQF,IAAMA,EAAExB,OAASsB,EAAI3C,EAAU,SAAI2C,EAAEvC,KAAKJ,GAElD,QAAU,GAAIkB,EAAG,MAAMA,EAAE6B,OAE7B,OAAOD,EAGJ,SAASE,IACZ,IAAK,IAAIF,EAAK,GAAI9C,EAAI,EAAGA,EAAIE,UAAUC,OAAQH,IAC3C8C,EAAKA,EAAGG,OAAOL,EAAO1C,UAAUF,KACpC,OAAO8C,EAGJ,SAASI,IACZ,IAAK,IAAInD,EAAI,EAAGC,EAAI,EAAGmD,EAAKjD,UAAUC,OAAQH,EAAImD,EAAInD,IAAKD,GAAKG,UAAUF,GAAGG,OACxE,IAAI0C,EAAI1D,MAAMY,GAAIqD,EAAI,EAA3B,IAA8BpD,EAAI,EAAGA,EAAImD,EAAInD,IACzC,IAAK,IAAIqD,EAAInD,UAAUF,GAAIsD,EAAI,EAAGC,EAAKF,EAAElD,OAAQmD,EAAIC,EAAID,IAAKF,IAC1DP,EAAEO,GAAKC,EAAEC,GACjB,OAAOT,EAGJ,SAASW,EAAQpB,GACpB,OAAO5C,gBAAgBgE,GAAWhE,KAAK4C,EAAIA,EAAG5C,MAAQ,IAAIgE,EAAQpB,GAG/D,SAASqB,EAAiBlD,EAASC,EAAYE,GAClD,IAAKwB,OAAOwB,cAAe,MAAM,IAAIpB,UAAU,wCAC/C,IAAoDtC,EAAhD2B,EAAIjB,EAAUL,MAAME,EAASC,GAAc,IAAQmD,EAAI,GAC3D,OAAO3D,EAAI,GAAIiC,EAAK,QAASA,EAAK,SAAUA,EAAK,UAAWjC,EAAEkC,OAAOwB,eAAiB,WAAc,OAAOlE,MAASQ,EACpH,SAASiC,EAAKhC,GAAS0B,EAAE1B,KAAID,EAAEC,GAAK,SAAUmC,GAAK,OAAO,IAAIzB,SAAQ,SAAU0C,EAAGtE
,GAAK4E,EAAEnB,KAAK,CAACvC,EAAGmC,EAAGiB,EAAGtE,IAAM,GAAK6E,EAAO3D,EAAGmC,QAC9H,SAASwB,EAAO3D,EAAGmC,GAAK,KACVS,EADqBlB,EAAE1B,GAAGmC,IACnBrB,iBAAiByC,EAAU7C,QAAQC,QAAQiC,EAAE9B,MAAMqB,GAAGd,KAAKuC,EAAShD,GAAUiD,EAAOH,EAAE,GAAG,GAAId,GADpE,MAAO3B,GAAK4C,EAAOH,EAAE,GAAG,GAAIzC,GAC3E,IAAc2B,EACd,SAASgB,EAAQ9C,GAAS6C,EAAO,OAAQ7C,GACzC,SAASF,EAAOE,GAAS6C,EAAO,QAAS7C,GACzC,SAAS+C,EAAOrC,EAAGW,GAASX,EAAEW,GAAIuB,EAAEI,QAASJ,EAAExD,QAAQyD,EAAOD,EAAE,GAAG,GAAIA,EAAE,GAAG,KASzE,SAASK,EAActB,GAC1B,IAAKR,OAAOwB,cAAe,MAAM,IAAIpB,UAAU,wCAC/C,IAAiCtC,EAA7B2C,EAAID,EAAER,OAAOwB,eACjB,OAAOf,EAAIA,EAAEvC,KAAKsC,IAAMA,EAAqCD,EAASC,GAA2B1C,EAAI,GAAIiC,EAAK,QAASA,EAAK,SAAUA,EAAK,UAAWjC,EAAEkC,OAAOwB,eAAiB,WAAc,OAAOlE,MAASQ,GAC9M,SAASiC,EAAKhC,GAAKD,EAAEC,GAAKyC,EAAEzC,IAAM,SAAUmC,GAAK,OAAO,IAAIzB,SAAQ,SAAUC,EAASC,IACvF,SAAgBD,EAASC,EAAQ/B,EAAGsD,GAAKzB,QAAQC,QAAQwB,GAAGd,MAAK,SAASc,GAAKxB,EAAQ,CAAEG,MAAOqB,EAAGf,KAAMvC,MAAS+B,IADJiD,CAAOlD,EAASC,GAA7BuB,EAAIM,EAAEzC,GAAGmC,IAA8Bf,KAAMe,EAAErB,c,+BClLpJ,4FAOIkD,EAAc,SAAUC,GAExB,SAASD,EAAWE,EAAmBpB,EAAOqB,GAC1C,IAAIC,EAAQH,EAAO9D,KAAKZ,OAASA,KAKjC,OAJA6E,EAAMC,eAAiB,KACvBD,EAAME,iBAAkB,EACxBF,EAAMG,oBAAqB,EAC3BH,EAAMI,WAAY,EACVvE,UAAUC,QACd,KAAK,EACDkE,EAAMK,YAAc,IACpB,MACJ,KAAK,EACD,IAAKP,EAAmB,CACpBE,EAAMK,YAAc,IACpB,MAEJ,GAAiC,iBAAtBP,EAAgC,CACnCA,aAA6BF,GAC7BI,EAAMG,mBAAqBL,EAAkBK,mBAC7CH,EAAMK,YAAcP,EACpBA,EAAkBQ,IAAIN,KAGtBA,EAAMG,oBAAqB,EAC3BH,EAAMK,YAAc,IAAIE,EAAeP,EAAOF,IAElD,MAER,QACIE,EAAMG,oBAAqB,EAC3BH,EAAMK,YAAc,IAAIE,EAAeP,EAAOF,EAAmBpB,EAAOqB,GAGhF,OAAOC,EAoDX,OArFA,YAAUJ,EAAYC,GAmCtBD,EAAWvE,UAAU,KAAsB,WAAc,OAAOF,MAChEyE,EAAWtE,OAAS,SAAUsB,EAAM8B,EAAOqB,GACvC,IAAIS,EAAa,IAAIZ,EAAWhD,EAAM8B,EAAOqB,GAE7C,OADAS,EAAWL,oBAAqB,EACzBK,GAEXZ,EAAWvE,UAAUuB,KAAO,SAAUF,GAC7BvB,KAAKiF,WACNjF,KAAKsF,MAAM/D,IAGnBkD,EAAWvE,UAAUqD,MAAQ,SAAUgC,GAC9BvF,KAAKiF,YACNjF,KAAKiF,WAAY,EACjBjF,KAAKwF,OAAOD,KAGpBd,EAAWvE,UAAU0E,SAAW,WACvB5E,KAAKiF,YACNjF,KAAKiF,WAAY,EACjBjF,KAAKyF,cAGbhB,EAAWvE,UAAUwF,YAAc,WAC3B1F,KAAK2F,SAGT3F,KAAKiF,WAAY,EACjBP,EAAOxE,UAAUwF,YAAY9E,KAAKZ,QAEtCyE,EAAWvE,UAAUoF,MAAQ,SAAU/D,GACnCvB,KAAKkF,YAAYzD,KAAKF,IAE1BkD,EAAWvE,UAAUsF,OAAS,SAAUD,GACpCvF,KAAKkF,YAAY3B,MAAMgC,GACvBvF,KAAK0F,eAETjB,EAAWvE,UAAUuF,UAAY,WAC7BzF,KAAKkF,YAAYN,WACjB5E,KAAK0F,eAETjB,EAAWvE,UAAU0F,uBAAyB,WAC1C,IAAIC,EAAmB7F,KAAK6F,iBAM5B,OALA7F,KAAK6F,iBAAmB,KACxB7F,KAAK0F,cACL1F,KAAK2F,QAAS,EACd3F,KAAKiF,WAAY,EACjBjF,KAAK6F,iBAAmBA,EACjB7F,MAEJyE,EAtFM,CAuFf,KAEEW,EAAkB,SAAUV,GAE5B,SAASU,EAAeU,EAAmBC,EAAgBxC,EAAOqB,GAC9D,IAEInD,EAFAoD,EAAQH,EAAO9D,KAAKZ,OAASA,KACjC6E,EAAMiB,kBAAoBA,EAE1B,IAAIE,EAAUnB,EAoBd,OAnBI,YAAWkB,GACXtE,EAAOsE,EAEFA,IACLtE,EAAOsE,EAAetE,KACtB8B,EAAQwC,EAAexC,MACvBqB,EAAWmB,EAAenB,SACtBmB,IAAmB,MACnBC,EAAUxG,OAAOW,OAAO4F,GACpB,YAAWC,EAAQN,cACnBb,EAAMM,IAAIa,EAAQN,YAAYO,KAAKD,IAEvCA,EAAQN,YAAcb,EAAMa,YAAYO,KAAKpB,KAGrDA,EAAMqB,SAAWF,EACjBnB,EAAMS,MAAQ7D,EACdoD,EAAMW,OAASjC,EACfsB,EAAMY,UAAYb,EACXC,EA0GX,OAnIA,YAAUO,EAAgBV,GA2B1BU,EAAelF,UAAUuB,KAAO,SAAUF,GACtC,IAAKvB,KAAKiF,WAAajF,KAAKsF,MAAO,CAC/B,IAAIQ,EAAoB9F,KAAK8F,kBACxB,IAAOK,uCAA0CL,EAAkBd,mBAG/DhF,KAAKoG,gBAAgBN,EAAmB9F,KAAKsF,MAAO/D,IACzDvB,KAAK0F,cAHL1F,KAAKqG,aAAarG,KAAKsF,MAAO/D,KAO1C6D,EAAelF,UAAUqD,MAAQ,SAAUgC,GACvC,IAAKvF,KAAKiF,UAAW,CACjB,IAAIa,EAAoB9F,KAAK8F,kBACzBK,EAAwC,IAAOA,sCACnD,GAAInG,KAAKwF,OACAW,GAA0CL,EAAkBd,oBAK7DhF,KAAKoG,gBAAgBN,EAAmB9F,KAAKwF,OAAQD,GACrDvF,KAAK0F,gBALL1F,KAAKqG,aAAarG,KAAKwF,OAAQD,GAC/BvF,KAAK0F,oBAOR,GAAKI,EAAkBd,mBAQpBmB,GACAL,EAAkBhB,eAAiBS,EACnCO,EAAkBf,iBAAkB,GAGpC,YAAgBQ,GAEpBvF,KAAK0F,kBAfuC,CAE5C,GADA1F,KAAK0F,cACDS,EACA,MAAMZ,EAEV,YAAgBA,MAc5BH,EAAelF
,UAAU0E,SAAW,WAChC,IAAIC,EAAQ7E,KACZ,IAAKA,KAAKiF,UAAW,CACjB,IAAIa,EAAoB9F,KAAK8F,kBAC7B,GAAI9F,KAAKyF,UAAW,CAChB,IAAIa,EAAkB,WAAc,OAAOzB,EAAMY,UAAU7E,KAAKiE,EAAMqB,WACjE,IAAOC,uCAA0CL,EAAkBd,oBAKpEhF,KAAKoG,gBAAgBN,EAAmBQ,GACxCtG,KAAK0F,gBALL1F,KAAKqG,aAAaC,GAClBtG,KAAK0F,oBAQT1F,KAAK0F,gBAIjBN,EAAelF,UAAUmG,aAAe,SAAUE,EAAIhF,GAClD,IACIgF,EAAG3F,KAAKZ,KAAKkG,SAAU3E,GAE3B,MAAOgE,GAEH,GADAvF,KAAK0F,cACD,IAAOS,sCACP,MAAMZ,EAGN,YAAgBA,KAI5BH,EAAelF,UAAUkG,gBAAkB,SAAUI,EAAQD,EAAIhF,GAC7D,IAAK,IAAO4E,sCACR,MAAM,IAAIM,MAAM,YAEpB,IACIF,EAAG3F,KAAKZ,KAAKkG,SAAU3E,GAE3B,MAAOgE,GACH,OAAI,IAAOY,uCACPK,EAAO1B,eAAiBS,EACxBiB,EAAOzB,iBAAkB,GAClB,IAGP,YAAgBQ,IACT,GAGf,OAAO,GAEXH,EAAelF,UAAUwG,aAAe,WACpC,IAAIZ,EAAoB9F,KAAK8F,kBAC7B9F,KAAKkG,SAAW,KAChBlG,KAAK8F,kBAAoB,KACzBA,EAAkBJ,eAEfN,EApIU,CAqInBX,I,mICrOEkC,EACA,WACI,IAAI9B,EAAQ7E,KACZA,KAAKoB,QAAU,KACfpB,KAAKqB,OAAS,KACdrB,KAAK4G,QAAU,IAAIzF,SAAQ,SAAU0C,EAAGtE,GACpCsF,EAAMzD,QAAUyC,EAChBgB,EAAMxD,OAAS9B,MCLpB,SAASsH,EAAkBC,GAC9B,OAEJ,SAAmBA,GACf,OAAO,YAAiB9G,KAAMU,WAAW,WACrC,IAAIqG,EAAWC,EAAQC,EAAU1D,EAAO2D,EAAWC,EAAM7H,EAAGsC,EAC5D,OAAO,YAAY5B,MAAM,SAAUoH,GAC/B,OAAQA,EAAG/E,OACP,KAAK,EACD0E,EAAY,GACZC,EAAS,GACTC,GAAW,EACX1D,EAAQ,KACR2D,GAAY,EACZC,EAAOL,EAAOO,UAAU,CACpB5F,KAAM,SAAUF,GACRwF,EAAUpG,OAAS,EACnBoG,EAAUxC,QAAQnD,QAAQ,CAAEG,MAAOA,EAAOM,MAAM,IAGhDmF,EAAOhE,KAAKzB,IAGpBgC,MAAO,SAAUgC,GAGb,IAFA0B,GAAW,EACX1D,EAAQgC,EACDwB,EAAUpG,OAAS,GACtBoG,EAAUxC,QAAQlD,OAAOkE,IAGjCX,SAAU,WAEN,IADAsC,GAAY,EACLH,EAAUpG,OAAS,GACtBoG,EAAUxC,QAAQnD,QAAQ,CAAEG,WAAO+F,EAAWzF,MAAM,OAIhEuF,EAAG/E,MAAQ,EACf,KAAK,EACD+E,EAAG7E,KAAKS,KAAK,CAAC,EAAG,GAAI,GAAI,KACzBoE,EAAG/E,MAAQ,EACf,KAAK,EAED,OAAM2E,EAAOrG,OAAS,EACf,CAAC,EAAG,YAAQqG,EAAOzC,UADO,CAAC,EAAG,GAEzC,KAAK,EAAG,MAAO,CAAC,EAAG6C,EAAG9E,QACtB,KAAK,EAED,OADA8E,EAAG9E,OACI,CAAC,EAAG,IACf,KAAK,EACD,OAAK4E,EACE,CAAC,EAAG,iBAAQ,IADI,CAAC,EAAG,GAE/B,KAAK,EAAG,MAAO,CAAC,EAAGE,EAAG9E,QACtB,KAAK,EACD,IAAK2E,EAAU,MAAO,CAAC,EAAG,GAC1B,MAAM1D,EACV,KAAK,EAGD,OAFAjE,EAAI,IAAIqH,EACRI,EAAU/D,KAAK1D,GACR,CAAC,EAAG,YAAQA,EAAEsH,UACzB,KAAK,EAED,OADAhF,EAASwF,EAAG9E,QACAT,KACL,CAAC,EAAG,iBAAQ,IADM,CAAC,EAAG,IAEjC,KAAK,GAAI,MAAO,CAAC,EAAGuF,EAAG9E,QACvB,KAAK,GAAI,MAAO,CAAC,EAAG,YAAQV,EAAOL,QACnC,KAAK,GAAI,MAAO,CAAC,EAAG6F,EAAG9E,QACvB,KAAK,GACD8E,EAAG9E,OACH8E,EAAG/E,MAAQ,GACf,KAAK,GAAI,MAAO,CAAC,EAAG,GACpB,KAAK,GAAI,MAAO,CAAC,EAAG,IACpB,KAAK,GAED,MADQ+E,EAAG9E,OAEf,KAAK,GAED,OADA6E,EAAKzB,cACE,CAAC,GACZ,KAAK,GAAI,MAAO,CAAC,UA7EtB6B,CAAUT,GCGrB,IAAI,EAAc,WACd,SAASU,EAAWH,GAChBrH,KAAKyH,WAAY,EACbJ,IACArH,KAAK0H,WAAaL,GA6F1B,OA1FAG,EAAWtH,UAAUyH,KAAO,SAAUC,GAClC,IAAIC,EAAa,IAAIL,EAGrB,OAFAK,EAAWf,OAAS9G,KACpB6H,EAAWD,SAAWA,EACfC,GAEXL,EAAWtH,UAAUmH,UAAY,SAAUtB,EAAgBxC,EAAOqB,GAC9D,IAAIgD,EAAW5H,KAAK4H,SAChBE,EClBL,SAAsBC,EAAgBxE,EAAOqB,GAChD,GAAImD,EAAgB,CAChB,GAAIA,aAA0BtD,EAAA,EAC1B,OAAOsD,EAEX,GAAIA,EAAeC,EAAA,GACf,OAAOD,EAAeC,EAAA,KAG9B,OAAKD,GAAmBxE,GAAUqB,EAG3B,IAAIH,EAAA,EAAWsD,EAAgBxE,EAAOqB,GAFlC,IAAIH,EAAA,EAAW,KDQXwD,CAAalC,EAAgBxC,EAAOqB,GAS/C,GARIgD,EACAE,EAAK3C,IAAIyC,EAAShH,KAAKkH,EAAM9H,KAAK8G,SAGlCgB,EAAK3C,IAAInF,KAAK8G,QAAWoB,EAAA,EAAO/B,wCAA0C2B,EAAK9C,mBAC3EhF,KAAK0H,WAAWI,GAChB9H,KAAKmI,cAAcL,IAEvBI,EAAA,EAAO/B,uCACH2B,EAAK9C,qBACL8C,EAAK9C,oBAAqB,EACtB8C,EAAK/C,iBACL,MAAM+C,EAAKhD,eAIvB,OAAOgD,GAEXN,EAAWtH,UAAUiI,cAAgB,SAAUL,GAC3C,IACI,OAAO9H,KAAK0H,WAAWI,GAE3B,MAAOvC,GACC2C,EAAA,EAAO/B,wCACP2B,EAAK/C,iBAAkB,EACvB+C,EAAKhD,eAAiBS,IE9C/B,SAAwB6C,GAC3B,KAAOA,GAAU,CACb,IAAIhB,EAAKgB,EAAUC,EAAWjB,EAAGzB,OAAQT,EAAckC,EAAGlC,YAAaD,EAAYmC,EAAGnC,UACtF,GAAIoD,GAAYpD,EACZ,OAAO,EAGPmD,EADKlD,GAAeA,aAAuBT,EAAA,EAChCS,EAGA,
KAGnB,OAAO,EFmCKoD,CAAeR,GAIfS,QAAQC,KAAKjD,GAHbuC,EAAKvE,MAAMgC,KAOvBiC,EAAWtH,UAAUuI,QAAU,SAAUhH,EAAMiH,GAC3C,IAAI7D,EAAQ7E,KAEZ,OAAO,IADP0I,EAAcC,EAAeD,KACN,SAAUtH,EAASC,GACtC,IAAIuH,EACJA,EAAe/D,EAAMwC,WAAU,SAAU9F,GACrC,IACIE,EAAKF,GAET,MAAOgE,GACHlE,EAAOkE,GACHqD,GACAA,EAAalD,iBAGtBrE,EAAQD,OAGnBoG,EAAWtH,UAAUwH,WAAa,SAAUrC,GACxC,IAAIyB,EAAS9G,KAAK8G,OAClB,OAAOA,GAAUA,EAAOO,UAAUhC,IAEtCmC,EAAWtH,UAAU,KAAqB,WACtC,OAAOF,MAEXwH,EAAWtH,UAAU2I,KAAO,WAExB,IADA,IAAIC,EAAa,GACRC,EAAK,EAAGA,EAAKrI,UAAUC,OAAQoI,IACpCD,EAAWC,GAAMrI,UAAUqI,GAE/B,OAA0B,IAAtBD,EAAWnI,OACJX,KAEJ,OAAA6I,EAAA,GAAcC,EAAd,CAA0B9I,OAErCwH,EAAWtH,UAAU8I,UAAY,SAAUN,GACvC,IAAI7D,EAAQ7E,KAEZ,OAAO,IADP0I,EAAcC,EAAeD,KACN,SAAUtH,EAASC,GACtC,IAAIE,EACJsD,EAAMwC,WAAU,SAAU4B,GAAK,OAAO1H,EAAQ0H,KAAM,SAAU1D,GAAO,OAAOlE,EAAOkE,MAAS,WAAc,OAAOnE,EAAQG,UAGjIiG,EAAWrH,OAAS,SAAUkH,GAC1B,OAAO,IAAIG,EAAWH,IAEnBG,EAjGM,GAoGjB,SAASmB,EAAeD,GAIpB,GAHKA,IACDA,EAAcR,EAAA,EAAO/G,SAAWA,UAE/BuH,EACD,MAAM,IAAIjC,MAAM,yBAEpB,OAAOiC,EAGHhG,QAAUA,OAAOwB,gBACjB,EAAWhE,UAAUwC,OAAOwB,eAAiB,WACzC,OAAO2C,EAAkB7G,S,4FG1G1BkJ,EAZmB,WAC1B,SAASC,EAAwBC,GAM7B,OALA3C,MAAM7F,KAAKZ,MACXA,KAAKqJ,QAAUD,EACXA,EAAOzI,OAAS,4CAA8CyI,EAAOE,KAAI,SAAU/D,EAAK/E,GAAK,OAAOA,EAAI,EAAI,KAAO+E,EAAIgE,cAAeC,KAAK,QAAU,GACzJxJ,KAAKyJ,KAAO,sBACZzJ,KAAKoJ,OAASA,EACPpJ,KAGX,OADAmJ,EAAwBjJ,UAAYV,OAAOW,OAAOsG,MAAMvG,WACjDiJ,EAVmB,GCI1B,EAAgB,WAChB,SAASO,EAAahE,GAClB1F,KAAK2F,QAAS,EACd3F,KAAK6F,iBAAmB,KACxB7F,KAAK2J,eAAiB,KAClBjE,IACA1F,KAAK0G,aAAehB,GAkHN,IAAUkE,EAIhC,OAnHAF,EAAaxJ,UAAUwF,YAAc,WACjC,IAAI0D,EACJ,IAAIpJ,KAAK2F,OAAT,CAGA,IAAeE,EAAN7F,KAA4B6F,iBAAkBa,EAA9C1G,KAAgE0G,aAAciD,EAA9E3J,KAAkG2J,eAI3G,GAHA3J,KAAK2F,QAAS,EACd3F,KAAK6F,iBAAmB,KACxB7F,KAAK2J,eAAiB,KAClB9D,aAA4B6D,EAC5B7D,EAAiBgE,OAAO7J,WAEvB,GAAyB,OAArB6F,EACL,IAAK,IAAIiE,EAAQ,EAAGA,EAAQjE,EAAiBlF,SAAUmJ,EAAO,CAC3CjE,EAAiBiE,GACvBD,OAAO7J,MAGxB,GAAI,OAAA+J,EAAA,GAAWrD,GACX,IACIA,EAAa9F,KAAKZ,MAEtB,MAAO0B,GACH0H,EAAS1H,aAAawH,EAAsBc,EAA4BtI,EAAE0H,QAAU,CAAC1H,GAG7F,GAAI,OAAAuI,EAAA,GAAQN,GACR,CAAIG,GAAS,EAEb,IAFA,IACII,EAAMP,EAAehJ,SAChBmJ,EAAQI,GAAK,CAClB,IAAIC,EAAMR,EAAeG,GACzB,GAAI,OAAAM,EAAA,GAASD,GACT,IACIA,EAAIzE,cAER,MAAOhE,GACH0H,EAASA,GAAU,GACf1H,aAAawH,EACbE,EAASA,EAAO3F,OAAOuG,EAA4BtI,EAAE0H,SAGrDA,EAAOpG,KAAKtB,KAMhC,GAAI0H,EACA,MAAM,IAAIF,EAAoBE,KAGtCM,EAAaxJ,UAAUiF,IAAM,SAAUkF,GACnC,IAAIzB,EAAeyB,EACnB,IAAKA,EACD,OAAOX,EAAaY,MAExB,cAAeD,GACX,IAAK,WACDzB,EAAe,IAAIc,EAAaW,GACpC,IAAK,SACD,GAAIzB,IAAiB5I,MAAQ4I,EAAajD,QAA8C,mBAA7BiD,EAAalD,YACpE,OAAOkD,EAEN,GAAI5I,KAAK2F,OAEV,OADAiD,EAAalD,cACNkD,EAEN,KAAMA,aAAwBc,GAAe,CAC9C,IAAIa,EAAM3B,GACVA,EAAe,IAAIc,GACNC,eAAiB,CAACY,GAEnC,MACJ,QACI,MAAM,IAAI9D,MAAM,yBAA2B4D,EAAW,2BAG9D,IAAIxE,EAAmB+C,EAAa/C,iBACpC,GAAyB,OAArBA,EACA+C,EAAa/C,iBAAmB7F,UAE/B,GAAI6F,aAA4B6D,EAAc,CAC/C,GAAI7D,IAAqB7F,KACrB,OAAO4I,EAEXA,EAAa/C,iBAAmB,CAACA,EAAkB7F,UAElD,KAAwC,IAApC6F,EAAiB2E,QAAQxK,MAI9B,OAAO4I,EAHP/C,EAAiB7C,KAAKhD,MAK1B,IAAIyK,EAAgBzK,KAAK2J,eAOzB,OANsB,OAAlBc,EACAzK,KAAK2J,eAAiB,CAACf,GAGvB6B,EAAczH,KAAK4F,GAEhBA,GAEXc,EAAaxJ,UAAU2J,OAAS,SAAUjB,GACtC,IAAI6B,EAAgBzK,KAAK2J,eACzB,GAAIc,EAAe,CACf,IAAIC,EAAoBD,EAAcD,QAAQ5B,IACnB,IAAvB8B,GACAD,EAAcE,OAAOD,EAAmB,KAIpDhB,EAAaY,QAAmBV,EAG9B,IAAIF,GAFI/D,QAAS,EACRiE,GAEJF,EA5HQ,GA+HnB,SAASM,EAA4BZ,GACjC,OAAOA,EAAOwB,QAAO,SAAUC,EAAMtF,GAAO,OAAOsF,EAAKpH,OAAQ8B,aAAe2D,EAAuB3D,EAAI6D,OAAS7D,KAAS,M,6BCpIhI,oDAEO,SAAS+D,EAAIwB,EAAS/J,GACzB,OAAO,SAAsB+F,GACzB,GAAuB,mBAAZgE,EACP,MAAM,IAAIhI,UAAU,8DAExB,OAAOgE,EAAOa,KAAK,IAAIoD,EAAYD,EAAS/J,KAGpD,IAAIgK,EAAe,WACf,SAASA,EAAYD,EAAS/J,GAC1Bf,KAAK8K,QAAUA,EACf9K,KAAKe,QAAUA,EAKnB,OAHAgK,EAAY7K
,UAAUU,KAAO,SAAUyE,EAAYyB,GAC/C,OAAOA,EAAOO,UAAU,IAAI2D,EAAc3F,EAAYrF,KAAK8K,QAAS9K,KAAKe,WAEtEgK,EARO,GAWdC,EAAiB,SAAUtG,GAE3B,SAASsG,EAAc9F,EAAa4F,EAAS/J,GACzC,IAAI8D,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAI9C,OAHA6E,EAAMiG,QAAUA,EAChBjG,EAAMoG,MAAQ,EACdpG,EAAM9D,QAAUA,GAAW8D,EACpBA,EAaX,OAnBA,YAAUmG,EAAetG,GAQzBsG,EAAc9K,UAAUoF,MAAQ,SAAU/D,GACtC,IAAIK,EACJ,IACIA,EAAS5B,KAAK8K,QAAQlK,KAAKZ,KAAKe,QAASQ,EAAOvB,KAAKiL,SAEzD,MAAO1F,GAEH,YADAvF,KAAKkF,YAAY3B,MAAMgC,GAG3BvF,KAAKkF,YAAYzD,KAAKG,IAEnBoJ,EApBS,CAqBlB,M,6BC1CF,6DAGO,SAASE,EAAkBC,EAAiBvJ,EAAQwJ,EAAYC,EAAYC,GAE/E,QADwB,IAApBA,IAA8BA,EAAkB,IAAI,IAAgBH,EAAiBC,EAAYC,KACjGC,EAAgB3F,OAGpB,OAAI/D,aAAkB,IACXA,EAAOyF,UAAUiE,GAErB,YAAY1J,EAAZ,CAAoB0J,K,6BCX/B,6CAEIC,EAAmB,SAAU7G,GAE7B,SAAS6G,IACL,OAAkB,OAAX7G,GAAmBA,EAAO7D,MAAMb,KAAMU,YAAcV,KAW/D,OAbA,YAAUuL,EAAiB7G,GAI3B6G,EAAgBrL,UAAUsL,WAAa,SAAUJ,EAAYK,EAAYJ,EAAYK,EAAYC,GAC7F3L,KAAKkF,YAAYzD,KAAKgK,IAE1BF,EAAgBrL,UAAU0L,YAAc,SAAUrI,EAAOoI,GACrD3L,KAAKkF,YAAY3B,MAAMA,IAE3BgI,EAAgBrL,UAAU2L,eAAiB,SAAUF,GACjD3L,KAAKkF,YAAYN,YAEd2G,EAdW,CAFtB,KAiBE,I,6BCjBF,sCAAIO,GAAsD,EAC/C5D,EAAS,CAChB/G,aAASmG,EACT,0CAA0C/F,GACtC,GAAIA,EAAO,CACP,IAAIgC,EAAQ,IAAIkD,MAChB8B,QAAQC,KAAK,gGAAkGjF,EAAMwI,YAEhHD,GACLvD,QAAQyD,IAAI,wDAEhBF,EAAsDvK,GAE1D,4CACI,OAAOuK,K,8BCdf,kDAAIG,EAA6B,oBAAXC,QAA0BA,OAC5CC,EAAyB,oBAATC,MAAqD,oBAAtBC,mBAC/CD,gBAAgBC,mBAAqBD,KAErCE,EAAQL,QADqB,IAAXM,GAA0BA,GACZJ,GACpC,WACI,IAAKG,EACD,MAAM,IAAI7F,MAAM,iEAFxB,K,gDCLO,SAASsD,EAAWd,GACvB,MAAoB,mBAANA,EADlB,mC,6BCAO,SAASuD,KAAhB,mC,6BCAA,kCAAO,IAAI3E,EAAqD,mBAAXnF,QAAyBA,OAAOmF,YAAc,gB,6BCAnG,6CACWyC,EAAQ,IAAI,KAAW,SAAUjF,GAAc,OAAOA,EAAWT,e,6BCD5E,sCAUW6H,EAVuB,WAC9B,SAASC,IAIL,OAHAjG,MAAM7F,KAAKZ,MACXA,KAAKqJ,QAAU,sBACfrJ,KAAKyJ,KAAO,0BACLzJ,KAGX,OADA0M,EAA4BxM,UAAYV,OAAOW,OAAOsG,MAAMvG,WACrDwM,EARuB,I,8BCAnB,SAASC,EAAe9I,GACrC,OAAY,MAALA,GAA0B,iBAANA,IAAoD,IAAlCA,EAAE,4BCSlC,SAAS+I,EAAQrG,GAC9B,OAAO,SAASsG,EAAGhJ,GACjB,OAAyB,IAArBnD,UAAUC,QAAgBgM,EAAe9I,GACpCgJ,EAEAtG,EAAG1F,MAAMb,KAAMU,Y,gECfrB,SAASoM,EAAgBvH,GAC5BwH,YAAW,WAAc,MAAMxH,IAAQ,GAD3C,mC,8BCAA,kCAAO,IAAI0E,EAAgCtK,MAAMsK,SAAW,SAAWhB,GAAK,OAAOA,GAAyB,iBAAbA,EAAEtI,S,6BCA1F,SAASqM,EAAYzL,GACxB,OAAOA,GAAmC,mBAAnBA,EAAM0L,SADjC,mC,6BCAA,4HAOIC,EAAqB,SAAUxI,GAE/B,SAASwI,EAAkBhI,GACvB,IAAIL,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAE9C,OADA6E,EAAMK,YAAcA,EACbL,EAEX,OANA,YAAUqI,EAAmBxI,GAMtBwI,EAPa,CAQtB,KAEEC,EAAW,SAAUzI,GAErB,SAASyI,IACL,IAAItI,EAAQH,EAAO9D,KAAKZ,OAASA,KAMjC,OALA6E,EAAMuI,UAAY,GAClBvI,EAAMc,QAAS,EACfd,EAAMI,WAAY,EAClBJ,EAAMoC,UAAW,EACjBpC,EAAMwI,YAAc,KACbxI,EAyFX,OAjGA,YAAUsI,EAASzI,GAUnByI,EAAQjN,UAAU,KAAsB,WACpC,OAAO,IAAIgN,EAAkBlN,OAEjCmN,EAAQjN,UAAUyH,KAAO,SAAUC,GAC/B,IAAI0F,EAAU,IAAIC,EAAiBvN,KAAMA,MAEzC,OADAsN,EAAQ1F,SAAWA,EACZ0F,GAEXH,EAAQjN,UAAUuB,KAAO,SAAUF,GAC/B,GAAIvB,KAAK2F,OACL,MAAM,IAAI,IAEd,IAAK3F,KAAKiF,UAIN,IAHA,IAAImI,EAAYpN,KAAKoN,UACjBlD,EAAMkD,EAAUzM,OAChB6M,EAAOJ,EAAUK,QACZjN,EAAI,EAAGA,EAAI0J,EAAK1J,IACrBgN,EAAKhN,GAAGiB,KAAKF,IAIzB4L,EAAQjN,UAAUqD,MAAQ,SAAUgC,GAChC,GAAIvF,KAAK2F,OACL,MAAM,IAAI,IAEd3F,KAAKiH,UAAW,EAChBjH,KAAKqN,YAAc9H,EACnBvF,KAAKiF,WAAY,EAIjB,IAHA,IAAImI,EAAYpN,KAAKoN,UACjBlD,EAAMkD,EAAUzM,OAChB6M,EAAOJ,EAAUK,QACZjN,EAAI,EAAGA,EAAI0J,EAAK1J,IACrBgN,EAAKhN,GAAG+C,MAAMgC,GAElBvF,KAAKoN,UAAUzM,OAAS,GAE5BwM,EAAQjN,UAAU0E,SAAW,WACzB,GAAI5E,KAAK2F,OACL,MAAM,IAAI,IAEd3F,KAAKiF,WAAY,EAIjB,IAHA,IAAImI,EAAYpN,KAAKoN,UACjBlD,EAAMkD,EAAUzM,OAChB6M,EAAOJ,EAAUK,QACZjN,EAAI,EAAGA,EAAI0J,EAAK1J,IACrBgN,EAAKhN,GAAGoE,WAEZ5E,KAAKoN,UAAUzM,OAAS,GAE5BwM,EAAQjN,UAAUwF,YAAc,WAC5B1F,KAAKiF,WAAY,EACjBjF,KAAK2F,QAAS,EACd3F,KAAKoN,UAAY,MAErBD,EAAQjN,UAAUiI
,cAAgB,SAAU9C,GACxC,GAAIrF,KAAK2F,OACL,MAAM,IAAI,IAGV,OAAOjB,EAAOxE,UAAUiI,cAAcvH,KAAKZ,KAAMqF,IAGzD8H,EAAQjN,UAAUwH,WAAa,SAAUrC,GACrC,GAAIrF,KAAK2F,OACL,MAAM,IAAI,IAET,OAAI3F,KAAKiH,UACV5B,EAAW9B,MAAMvD,KAAKqN,aACf,IAAa/C,OAEftK,KAAKiF,WACVI,EAAWT,WACJ,IAAa0F,QAGpBtK,KAAKoN,UAAUpK,KAAKqC,GACb,IAAI,IAAoBrF,KAAMqF,KAG7C8H,EAAQjN,UAAUwN,aAAe,WAC7B,IAAI7F,EAAa,IAAI,IAErB,OADAA,EAAWf,OAAS9G,KACb6H,GAEXsF,EAAQhN,OAAS,SAAU+E,EAAa4B,GACpC,OAAO,IAAIyG,EAAiBrI,EAAa4B,IAEtCqG,EAlGG,CAmGZ,KAEEI,EAAoB,SAAU7I,GAE9B,SAAS6I,EAAiBrI,EAAa4B,GACnC,IAAIjC,EAAQH,EAAO9D,KAAKZ,OAASA,KAGjC,OAFA6E,EAAMK,YAAcA,EACpBL,EAAMiC,OAASA,EACRjC,EA6BX,OAlCA,YAAU0I,EAAkB7I,GAO5B6I,EAAiBrN,UAAUuB,KAAO,SAAUF,GACxC,IAAI2D,EAAclF,KAAKkF,YACnBA,GAAeA,EAAYzD,MAC3ByD,EAAYzD,KAAKF,IAGzBgM,EAAiBrN,UAAUqD,MAAQ,SAAUgC,GACzC,IAAIL,EAAclF,KAAKkF,YACnBA,GAAeA,EAAY3B,OAC3BvD,KAAKkF,YAAY3B,MAAMgC,IAG/BgI,EAAiBrN,UAAU0E,SAAW,WAClC,IAAIM,EAAclF,KAAKkF,YACnBA,GAAeA,EAAYN,UAC3B5E,KAAKkF,YAAYN,YAGzB2I,EAAiBrN,UAAUwH,WAAa,SAAUrC,GAE9C,OADarF,KAAK8G,OAEP9G,KAAK8G,OAAOO,UAAUhC,GAGtB,IAAaiF,OAGrBiD,EAnCY,CAoCrBJ,I,6BC1JF,kCAMO,IAAIxK,EALe,mBAAXD,QAA0BA,OAAOC,SAGrCD,OAAOC,SAFH,c,6BCFf,6CAEIgL,EAAmB,SAAUjJ,GAE7B,SAASiJ,EAAgBnH,EAAQ4E,EAAYC,GACzC,IAAIxG,EAAQH,EAAO9D,KAAKZ,OAASA,KAKjC,OAJA6E,EAAM2B,OAASA,EACf3B,EAAMuG,WAAaA,EACnBvG,EAAMwG,WAAaA,EACnBxG,EAAMiF,MAAQ,EACPjF,EAaX,OApBA,YAAU8I,EAAiBjJ,GAS3BiJ,EAAgBzN,UAAUoF,MAAQ,SAAU/D,GACxCvB,KAAKwG,OAAOgF,WAAWxL,KAAKoL,WAAY7J,EAAOvB,KAAKqL,WAAYrL,KAAK8J,QAAS9J,OAElF2N,EAAgBzN,UAAUsF,OAAS,SAAUjC,GACzCvD,KAAKwG,OAAOoF,YAAYrI,EAAOvD,MAC/BA,KAAK0F,eAETiI,EAAgBzN,UAAUuF,UAAY,WAClCzF,KAAKwG,OAAOqF,eAAe7L,MAC3BA,KAAK0F,eAEFiI,EArBW,CAFtB,KAwBE,I,gCCxBF,kCAAO,IAAI3F,EACkB,mBAAXtF,OACRA,OAAO,gBACP,kBAAoBkL,KAAKC,U,6BCHnC,oFAMO,SAASC,EAAUhD,EAASiD,GAC/B,MAA8B,mBAAnBA,EACA,SAAUjH,GAAU,OAAOA,EAAO+B,KAAKiF,GAAU,SAAUjK,EAAGrD,GAAK,OAAO,YAAKsK,EAAQjH,EAAGrD,IAAIqI,KAAK,aAAI,SAAUtJ,EAAGyO,GAAM,OAAOD,EAAelK,EAAGtE,EAAGiB,EAAGwN,YAE7J,SAAUlH,GAAU,OAAOA,EAAOa,KAAK,IAAIsG,EAAkBnD,KAExE,IAAImD,EAAqB,WACrB,SAASA,EAAkBnD,GACvB9K,KAAK8K,QAAUA,EAKnB,OAHAmD,EAAkB/N,UAAUU,KAAO,SAAUyE,EAAYyB,GACrD,OAAOA,EAAOO,UAAU,IAAI6G,EAAoB7I,EAAYrF,KAAK8K,WAE9DmD,EAPa,GASpBC,EAAuB,SAAUxJ,GAEjC,SAASwJ,EAAoBhJ,EAAa4F,GACtC,IAAIjG,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAG9C,OAFA6E,EAAMiG,QAAUA,EAChBjG,EAAMiF,MAAQ,EACPjF,EAgDX,OArDA,YAAUqJ,EAAqBxJ,GAO/BwJ,EAAoBhO,UAAUoF,MAAQ,SAAU/D,GAC5C,IAAIK,EACAkI,EAAQ9J,KAAK8J,QACjB,IACIlI,EAAS5B,KAAK8K,QAAQvJ,EAAOuI,GAEjC,MAAOvG,GAEH,YADAvD,KAAKkF,YAAY3B,MAAMA,GAG3BvD,KAAKmO,UAAUvM,EAAQL,EAAOuI,IAElCoE,EAAoBhO,UAAUiO,UAAY,SAAUvM,EAAQL,EAAOuI,GAC/D,IAAIsE,EAAoBpO,KAAKoO,kBACzBA,GACAA,EAAkB1I,cAEtB,IAAI4F,EAAkB,IAAI,IAAgBtL,KAAMuB,EAAOuI,GACnD5E,EAAclF,KAAKkF,YACvBA,EAAYC,IAAImG,GAChBtL,KAAKoO,kBAAoB,YAAkBpO,KAAM4B,OAAQ0F,OAAWA,EAAWgE,GAC3EtL,KAAKoO,oBAAsB9C,GAC3BpG,EAAYC,IAAInF,KAAKoO,oBAG7BF,EAAoBhO,UAAUuF,UAAY,WACtC,IAAI2I,EAAoBpO,KAAKoO,kBACxBA,IAAqBA,EAAkBzI,QACxCjB,EAAOxE,UAAUuF,UAAU7E,KAAKZ,MAEpCA,KAAK0F,eAETwI,EAAoBhO,UAAUwG,aAAe,WACzC1G,KAAKoO,kBAAoB,MAE7BF,EAAoBhO,UAAU2L,eAAiB,SAAUF,GACnC3L,KAAKkF,YACX2E,OAAO8B,GACnB3L,KAAKoO,kBAAoB,KACrBpO,KAAKiF,WACLP,EAAOxE,UAAUuF,UAAU7E,KAAKZ,OAGxCkO,EAAoBhO,UAAUsL,WAAa,SAAUJ,EAAYK,EAAYJ,EAAYK,EAAYC,GACjG3L,KAAKkF,YAAYzD,KAAKgK,IAEnByC,EAtDe,CAuDxB,M,6BC5EF,oDAEO,SAASG,EAAcC,EAAOC,GACjC,OAAO,IAAI,KAAW,SAAUlJ,GAC5B,IAAI8E,EAAM,IAAI,IACV3J,EAAI,EAWR,OAVA2J,EAAIhF,IAAIoJ,EAAUtB,UAAS,WACnBzM,IAAM8N,EAAM3N,QAIhB0E,EAAW5D,KAAK6M,EAAM9N,MACjB6E,EAAWM,QACZwE,EAAIhF,IAAInF,KAAKiN,aALb5H,EAAWT,eAQZuF,O,6BChBf,6DAGO,SAASqE,EAAUF,EAAOC,GAC7B,OAAKA,EAIM,YAAcD,EAAOC,GAHrB,IAAI,IAAW,YA
AiBD,M,yICIxC,SAASG,EAAUH,EAAOC,GAC7B,GAAa,MAATD,EAAe,CACf,GCVD,SAA6BA,GAChC,OAAOA,GAA6C,mBAA7BA,EAAM,KDSrBI,CAAoBJ,GACpB,OETL,SAA4BA,EAAOC,GACtC,OAAO,IAAI/G,EAAA,GAAW,SAAUnC,GAC5B,IAAI8E,EAAM,IAAIT,EAAA,EASd,OARAS,EAAIhF,IAAIoJ,EAAUtB,UAAS,WACvB,IAAIpF,EAAayG,EAAM,OACvBnE,EAAIhF,IAAI0C,EAAWR,UAAU,CACzB5F,KAAM,SAAUF,GAAS4I,EAAIhF,IAAIoJ,EAAUtB,UAAS,WAAc,OAAO5H,EAAW5D,KAAKF,QACzFgC,MAAO,SAAUgC,GAAO4E,EAAIhF,IAAIoJ,EAAUtB,UAAS,WAAc,OAAO5H,EAAW9B,MAAMgC,QACzFX,SAAU,WAAcuF,EAAIhF,IAAIoJ,EAAUtB,UAAS,WAAc,OAAO5H,EAAWT,uBAGpFuF,KFFIwE,CAAmBL,EAAOC,GAEhC,GAAI,OAAAK,EAAA,GAAUN,GACf,OGbL,SAAyBA,EAAOC,GACnC,OAAO,IAAI/G,EAAA,GAAW,SAAUnC,GAC5B,IAAI8E,EAAM,IAAIT,EAAA,EASd,OARAS,EAAIhF,IAAIoJ,EAAUtB,UAAS,WAAc,OAAOqB,EAAMxM,MAAK,SAAUP,GACjE4I,EAAIhF,IAAIoJ,EAAUtB,UAAS,WACvB5H,EAAW5D,KAAKF,GAChB4I,EAAIhF,IAAIoJ,EAAUtB,UAAS,WAAc,OAAO5H,EAAWT,sBAEhE,SAAUW,GACT4E,EAAIhF,IAAIoJ,EAAUtB,UAAS,WAAc,OAAO5H,EAAW9B,MAAMgC,cAE9D4E,KHEI0E,CAAgBP,EAAOC,GAE7B,GAAI,OAAAO,EAAA,GAAYR,GACjB,OAAO,OAAAD,EAAA,GAAcC,EAAOC,GAE3B,GInBN,SAAoBD,GACvB,OAAOA,GAA2C,mBAA3BA,EAAM,KJkBhBS,CAAWT,IAA2B,iBAAVA,EACjC,OKlBL,SAA0BA,EAAOC,GACpC,IAAKD,EACD,MAAM,IAAI7H,MAAM,2BAEpB,OAAO,IAAIe,EAAA,GAAW,SAAUnC,GAC5B,IACI1C,EADAwH,EAAM,IAAIT,EAAA,EAiCd,OA/BAS,EAAIhF,KAAI,WACAxC,GAAuC,mBAApBA,EAASqM,QAC5BrM,EAASqM,YAGjB7E,EAAIhF,IAAIoJ,EAAUtB,UAAS,WACvBtK,EAAW2L,EAAM,OACjBnE,EAAIhF,IAAIoJ,EAAUtB,UAAS,WACvB,IAAI5H,EAAWM,OAAf,CAGA,IAAIpE,EACAM,EACJ,IACI,IAAID,EAASe,EAASlB,OACtBF,EAAQK,EAAOL,MACfM,EAAOD,EAAOC,KAElB,MAAO0D,GAEH,YADAF,EAAW9B,MAAMgC,GAGjB1D,EACAwD,EAAWT,YAGXS,EAAW5D,KAAKF,GAChBvB,KAAKiN,qBAIV9C,KLpBI8E,CAAiBX,EAAOC,GAE9B,GAAI7L,QAAUA,OAAOwB,eAAwD,mBAAhCoK,EAAM5L,OAAOwB,eAC3D,OMtBL,SAA+BoK,EAAOC,GACzC,IAAKD,EACD,MAAM,IAAI7H,MAAM,2BAEpB,OAAO,IAAIe,EAAA,GAAW,SAAUnC,GAC5B,IAAI8E,EAAM,IAAIT,EAAA,EAgBd,OAfAS,EAAIhF,IAAIoJ,EAAUtB,UAAS,WACvB,IAAItK,EAAW2L,EAAM5L,OAAOwB,iBAC5BiG,EAAIhF,IAAIoJ,EAAUtB,UAAS,WACvB,IAAIpI,EAAQ7E,KACZ2C,EAASlB,OAAOK,MAAK,SAAUF,GACvBA,EAAOC,KACPwD,EAAWT,YAGXS,EAAW5D,KAAKG,EAAOL,OACvBsD,EAAMoI,uBAKf9C,KNCI+E,CAAsBZ,EAAOC,GAG5C,MAAM,IAAIzL,WAAqB,OAAVwL,UAAyBA,GAASA,GAAS,sBOxB7D,SAASa,EAAKb,EAAOC,GACxB,OAAKA,EAOME,EAAUH,EAAOC,GANpBD,aAAiB9G,EAAA,EACV8G,EAEJ,IAAI9G,EAAA,EAAW,OAAA4H,EAAA,GAAYd,M,0ECRtCe,EAAa,WACb,SAASA,EAAUC,EAAiBC,QACpB,IAARA,IAAkBA,EAAMF,EAAUE,KACtCvP,KAAKsP,gBAAkBA,EACvBtP,KAAKuP,IAAMA,EAOf,OALAF,EAAUnP,UAAU+M,SAAW,SAAUuC,EAAMC,EAAOC,GAElD,YADc,IAAVD,IAAoBA,EAAQ,GACzB,IAAIzP,KAAKsP,gBAAgBtP,KAAMwP,GAAMvC,SAASyC,EAAOD,IAEhEJ,EAAUE,IAAM,WAAc,OAAOI,KAAKJ,OACnCF,EAXK,GCEZ,EAAkB,SAAU3K,GAE5B,SAASkL,EAAeN,EAAiBC,QACzB,IAARA,IAAkBA,EAAMF,EAAUE,KACtC,IAAI1K,EAAQH,EAAO9D,KAAKZ,KAAMsP,GAAiB,WAC3C,OAAIM,EAAeC,UAAYD,EAAeC,WAAahL,EAChD+K,EAAeC,SAASN,MAGxBA,QAETvP,KAIN,OAHA6E,EAAMiL,QAAU,GAChBjL,EAAMkL,QAAS,EACflL,EAAM4J,eAAYnH,EACXzC,EAgCX,OA9CA,YAAU+K,EAAgBlL,GAgB1BkL,EAAe1P,UAAU+M,SAAW,SAAUuC,EAAMC,EAAOC,GAEvD,YADc,IAAVD,IAAoBA,EAAQ,GAC5BG,EAAeC,UAAYD,EAAeC,WAAa7P,KAChD4P,EAAeC,SAAS5C,SAASuC,EAAMC,EAAOC,GAG9ChL,EAAOxE,UAAU+M,SAASrM,KAAKZ,KAAMwP,EAAMC,EAAOC,IAGjEE,EAAe1P,UAAU8P,MAAQ,SAAUC,GACvC,IAAIH,EAAU9P,KAAK8P,QACnB,GAAI9P,KAAK+P,OACLD,EAAQ9M,KAAKiN,OADjB,CAIA,IAAI1M,EACJvD,KAAK+P,QAAS,EACd,GACI,GAAIxM,EAAQ0M,EAAOC,QAAQD,EAAOP,MAAOO,EAAOR,OAC5C,YAECQ,EAASH,EAAQvL,SAE1B,GADAvE,KAAK+P,QAAS,EACVxM,EAAO,CACP,KAAO0M,EAASH,EAAQvL,SACpB0L,EAAOvK,cAEX,MAAMnC,KAGPqM,EA/CU,CAgDnBP,I,0EChDE,EAAe,SAAU3K,GAEzB,SAASyL,EAAY5B,EAAWiB,GAC5B,IAAI3K,EAAQH,EAAO9D,KAAKZ,KAAMuO,EAAWiB,IAASxP,KAIlD,OAHA6E,EAAM0J,UAAYA,EAClB1J,EAAM2K,KAAOA,EACb3K,EAAMuL,SAAU,EACTvL,EA2EX,OAjFA,YAAUsL,EAAazL,GAQvByL,EAAYjQ,UAAU+M,SAAW,SAAUyC,EAAOD,GAE9C,QADc
,IAAVA,IAAoBA,EAAQ,GAC5BzP,KAAK2F,OACL,OAAO3F,KAEXA,KAAK0P,MAAQA,EACb,IAAIW,EAAKrQ,KAAKqQ,GACV9B,EAAYvO,KAAKuO,UAOrB,OANU,MAAN8B,IACArQ,KAAKqQ,GAAKrQ,KAAKsQ,eAAe/B,EAAW8B,EAAIZ,IAEjDzP,KAAKoQ,SAAU,EACfpQ,KAAKyP,MAAQA,EACbzP,KAAKqQ,GAAKrQ,KAAKqQ,IAAMrQ,KAAKuQ,eAAehC,EAAWvO,KAAKqQ,GAAIZ,GACtDzP,MAEXmQ,EAAYjQ,UAAUqQ,eAAiB,SAAUhC,EAAW8B,EAAIZ,GAE5D,YADc,IAAVA,IAAoBA,EAAQ,GACzBe,YAAYjC,EAAUyB,MAAM/J,KAAKsI,EAAWvO,MAAOyP,IAE9DU,EAAYjQ,UAAUoQ,eAAiB,SAAU/B,EAAW8B,EAAIZ,GAE5D,QADc,IAAVA,IAAoBA,EAAQ,GAClB,OAAVA,GAAkBzP,KAAKyP,QAAUA,IAA0B,IAAjBzP,KAAKoQ,QAC/C,OAAOC,EAEXI,cAAcJ,IAGlBF,EAAYjQ,UAAUgQ,QAAU,SAAUR,EAAOD,GAC7C,GAAIzP,KAAK2F,OACL,OAAO,IAAIc,MAAM,gCAErBzG,KAAKoQ,SAAU,EACf,IAAI7M,EAAQvD,KAAK0Q,SAAShB,EAAOD,GACjC,GAAIlM,EACA,OAAOA,GAEe,IAAjBvD,KAAKoQ,SAAgC,MAAXpQ,KAAKqQ,KACpCrQ,KAAKqQ,GAAKrQ,KAAKsQ,eAAetQ,KAAKuO,UAAWvO,KAAKqQ,GAAI,QAG/DF,EAAYjQ,UAAUwQ,SAAW,SAAUhB,EAAOD,GAC9C,IAAIkB,GAAU,EACVC,OAAatJ,EACjB,IACItH,KAAKwP,KAAKE,GAEd,MAAOhO,GACHiP,GAAU,EACVC,IAAelP,GAAKA,GAAK,IAAI+E,MAAM/E,GAEvC,GAAIiP,EAEA,OADA3Q,KAAK0F,cACEkL,GAGfT,EAAYjQ,UAAUwG,aAAe,WACjC,IAAI2J,EAAKrQ,KAAKqQ,GACV9B,EAAYvO,KAAKuO,UACjBuB,EAAUvB,EAAUuB,QACpBhG,EAAQgG,EAAQtF,QAAQxK,MAC5BA,KAAKwP,KAAO,KACZxP,KAAK0P,MAAQ,KACb1P,KAAKoQ,SAAU,EACfpQ,KAAKuO,UAAY,MACF,IAAXzE,GACAgG,EAAQnF,OAAOb,EAAO,GAEhB,MAANuG,IACArQ,KAAKqQ,GAAKrQ,KAAKsQ,eAAe/B,EAAW8B,EAAI,OAEjDrQ,KAAKyP,MAAQ,MAEVU,EAlFO,CCAJ,SAAUzL,GAEpB,SAASmM,EAAOtC,EAAWiB,GACvB,OAAO9K,EAAO9D,KAAKZ,OAASA,KAMhC,OARA,YAAU6Q,EAAQnM,GAIlBmM,EAAO3Q,UAAU+M,SAAW,SAAUyC,EAAOD,GAEzC,YADc,IAAVA,IAAoBA,EAAQ,GACzBzP,MAEJ6Q,EATE,C,KAUX,K,6BCZF,8DAGO,SAASC,IAEZ,IADA,IAAIC,EAAO,GACFhI,EAAK,EAAGA,EAAKrI,UAAUC,OAAQoI,IACpCgI,EAAKhI,GAAMrI,UAAUqI,GAEzB,IAAIwF,EAAYwC,EAAKA,EAAKpQ,OAAS,GACnC,OAAI,YAAY4N,IACZwC,EAAKhO,MACE,YAAcgO,EAAMxC,IAGpB,YAAUwC,K,6BCdzB,sDAEWnH,EAAQ,CACfjE,QAAQ,EACRlE,KAAM,SAAUF,KAChBgC,MAAO,SAAUgC,GACb,GAAI,IAAOY,sCACP,MAAMZ,EAGN,YAAgBA,IAGxBX,SAAU,e,mECVHoM,E,uBCMX,SAASC,EAAS7J,GACd,IAAI7D,EAAQ6D,EAAG7D,MAAoB6D,EAAG/B,WAC3B9B,MAAMA,IDPrB,SAAWyN,GACPA,EAAuB,KAAI,IAC3BA,EAAwB,MAAI,IAC5BA,EAA2B,SAAI,IAHnC,CAIGA,IAAqBA,EAAmB,KAC3C,IAAI,EAAgB,WAChB,SAASE,EAAaC,EAAM5P,EAAOgC,GAC/BvD,KAAKmR,KAAOA,EACZnR,KAAKuB,MAAQA,EACbvB,KAAKuD,MAAQA,EACbvD,KAAKoR,SAAoB,MAATD,EAyDpB,OAvDAD,EAAahR,UAAUmR,QAAU,SAAUjJ,GACvC,OAAQpI,KAAKmR,MACT,IAAK,IACD,OAAO/I,EAAS3G,MAAQ2G,EAAS3G,KAAKzB,KAAKuB,OAC/C,IAAK,IACD,OAAO6G,EAAS7E,OAAS6E,EAAS7E,MAAMvD,KAAKuD,OACjD,IAAK,IACD,OAAO6E,EAASxD,UAAYwD,EAASxD,aAGjDsM,EAAahR,UAAUoR,GAAK,SAAU7P,EAAM8B,EAAOqB,GAE/C,OADW5E,KAAKmR,MAEZ,IAAK,IACD,OAAO1P,GAAQA,EAAKzB,KAAKuB,OAC7B,IAAK,IACD,OAAOgC,GAASA,EAAMvD,KAAKuD,OAC/B,IAAK,IACD,OAAOqB,GAAYA,MAG/BsM,EAAahR,UAAUqR,OAAS,SAAUxJ,EAAgBxE,EAAOqB,GAC7D,OAAImD,GAAiD,mBAAxBA,EAAetG,KACjCzB,KAAKqR,QAAQtJ,GAGb/H,KAAKsR,GAAGvJ,EAAgBxE,EAAOqB,IAG9CsM,EAAahR,UAAUsR,aAAe,WAClC,IC7CmBjO,EAAOgL,ED8C1B,OADWvO,KAAKmR,MAEZ,IAAK,IACD,OAAO,OAAAL,EAAA,GAAG9Q,KAAKuB,OACnB,IAAK,IACD,OClDWgC,EDkDOvD,KAAKuD,MCjD9BgL,EAIM,IAAI/G,EAAA,GAAW,SAAUnC,GAAc,OAAOkJ,EAAUtB,SAASgE,EAAU,EAAG,CAAE1N,MAAOA,EAAO8B,WAAYA,OAH1G,IAAImC,EAAA,GAAW,SAAUnC,GAAc,OAAOA,EAAW9B,MAAMA,MDiDlE,IAAK,IACD,OAAO,IAEf,MAAM,IAAIkD,MAAM,uCAEpByK,EAAaO,WAAa,SAAUlQ,GAChC,YAAqB,IAAVA,EACA,IAAI2P,EAAa,IAAK3P,GAE1B2P,EAAaQ,4BAExBR,EAAaS,YAAc,SAAUpM,GACjC,OAAO,IAAI2L,EAAa,SAAK5J,EAAW/B,IAE5C2L,EAAaU,eAAiB,WAC1B,OAAOV,EAAaW,sBAExBX,EAAaW,qBAAuB,IAAIX,EAAa,KACrDA,EAAaQ,2BAA6B,IAAIR,EAAa,SAAK5J,GACzD4J,EA9DQ,I,gCETnB,gFACO,SAASrI,IAEZ,IADA,IAAIiJ,EAAM,GACD/I,EAAK,EAAGA,EAAKrI,UAAUC,OAAQoI,IACpC+I,EAAI/I,GAAMrI,UAAUqI,GAExB,OAAOgJ,EAAcD,GAElB,SAASC,EAAcD,GAC1B,OAAmB,IAAfA,EAAInR,OACG,IAEQ,IAAfmR,
EAAInR,OACGmR,EAAI,GAER,SAAexD,GAClB,OAAOwD,EAAIlH,QAAO,SAAUoH,EAAMzL,GAAM,OAAOA,EAAGyL,KAAU1D,M,6BChBpE,oDAEO,SAAS2D,EAAqBC,EAASC,GAC1C,OAAO,SAAUrL,GAAU,OAAOA,EAAOa,KAAK,IAAIyK,EAA6BF,EAASC,KAE5F,IAAIC,EAAgC,WAChC,SAASA,EAA6BF,EAASC,GAC3CnS,KAAKkS,QAAUA,EACflS,KAAKmS,YAAcA,EAKvB,OAHAC,EAA6BlS,UAAUU,KAAO,SAAUyE,EAAYyB,GAChE,OAAOA,EAAOO,UAAU,IAAIgL,EAA+BhN,EAAYrF,KAAKkS,QAASlS,KAAKmS,eAEvFC,EARwB,GAU/BC,EAAkC,SAAU3N,GAE5C,SAAS2N,EAA+BnN,EAAagN,EAASC,GAC1D,IAAItN,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAM9C,OALA6E,EAAMsN,YAAcA,EACpBtN,EAAMyN,QAAS,EACQ,mBAAZJ,IACPrN,EAAMqN,QAAUA,GAEbrN,EAgCX,OAxCA,YAAUwN,EAAgC3N,GAU1C2N,EAA+BnS,UAAUgS,QAAU,SAAUjJ,EAAG/G,GAC5D,OAAO+G,IAAM/G,GAEjBmQ,EAA+BnS,UAAUoF,MAAQ,SAAU/D,GACvD,IAAIgR,EACJ,IACI,IAAIJ,EAAcnS,KAAKmS,YACvBI,EAAMJ,EAAcA,EAAY5Q,GAASA,EAE7C,MAAOgE,GACH,OAAOvF,KAAKkF,YAAY3B,MAAMgC,GAElC,IAAI3D,GAAS,EACb,GAAI5B,KAAKsS,OACL,IAEI1Q,GAASsQ,EADKlS,KAAKkS,SACFlS,KAAKuS,IAAKA,GAE/B,MAAOhN,GACH,OAAOvF,KAAKkF,YAAY3B,MAAMgC,QAIlCvF,KAAKsS,QAAS,EAEb1Q,IACD5B,KAAKuS,IAAMA,EACXvS,KAAKkF,YAAYzD,KAAKF,KAGvB8Q,EAzC0B,CA0CnC,M,6BCzDK,SAASjI,EAASnB,GACrB,OAAa,OAANA,GAA2B,iBAANA,EADhC,mC,6BCAA,6CAEIuJ,EAAuB,SAAU9N,GAEjC,SAAS8N,EAAoBlF,EAASjI,GAClC,IAAIR,EAAQH,EAAO9D,KAAKZ,OAASA,KAIjC,OAHA6E,EAAMyI,QAAUA,EAChBzI,EAAMQ,WAAaA,EACnBR,EAAMc,QAAS,EACRd,EAkBX,OAxBA,YAAU2N,EAAqB9N,GAQ/B8N,EAAoBtS,UAAUwF,YAAc,WACxC,IAAI1F,KAAK2F,OAAT,CAGA3F,KAAK2F,QAAS,EACd,IAAI2H,EAAUtN,KAAKsN,QACfF,EAAYE,EAAQF,UAExB,GADApN,KAAKsN,QAAU,KACVF,GAAkC,IAArBA,EAAUzM,SAAgB2M,EAAQrI,YAAaqI,EAAQ3H,OAAzE,CAGA,IAAI8M,EAAkBrF,EAAU5C,QAAQxK,KAAKqF,aACpB,IAArBoN,GACArF,EAAUzC,OAAO8H,EAAiB,MAGnCD,EAzBe,CAF1B,KA4BE,I,6BC5BK,SAASE,EAASzJ,GACrB,OAAOA,EADX,mC,6BCAA,kCAAO,IAAI0J,EAAmB,SAAUC,GAAS,OAAO,SAAUvN,GAC9D,IAAK,IAAI7E,EAAI,EAAG0J,EAAM0I,EAAMjS,OAAQH,EAAI0J,IAAQ7E,EAAWM,OAAQnF,IAC/D6E,EAAW5D,KAAKmR,EAAMpS,IAE1B6E,EAAWT,c,6BCJf,kCAAO,IAAIkK,EAAc,SAAW7F,GAAK,OAAOA,GAAyB,iBAAbA,EAAEtI,QAAoC,mBAANsI,I,6BCArF,SAAS2F,EAAUrN,GACtB,QAASA,GAAoC,mBAApBA,EAAM8F,WAAkD,mBAAf9F,EAAMO,KAD5E,mC,6BCAA,8CAEW+Q,EAAQ,IAFnB,MAEuB,GAAe,M,kICD/B,SAASC,EAAyBC,GACrC,OAAO,SAAU1N,IAIrB,SAAiB0N,EAAe1N,GAC5B,IAAI2N,EAAiBC,EACjBC,EAAK9L,EACT,OAAO,YAAUpH,UAAM,OAAQ,GAAQ,WACnC,IAAIuB,EAAO4R,EACX,OAAO,YAAYnT,MAAM,SAAUoT,GAC/B,OAAQA,EAAG/Q,OACP,KAAK,EACD+Q,EAAG7Q,KAAKS,KAAK,CAAC,EAAG,EAAG,EAAG,KACvBgQ,EAAkB,YAAcD,GAChCK,EAAG/Q,MAAQ,EACf,KAAK,EAAG,MAAO,CAAC,EAAG2Q,EAAgBvR,QACnC,KAAK,EACD,IAAMwR,EAAoBG,EAAG9Q,QAA2BT,KAAO,MAAO,CAAC,EAAG,GAC1EN,EAAQ0R,EAAkB1R,MAC1B8D,EAAW5D,KAAKF,GAChB6R,EAAG/Q,MAAQ,EACf,KAAK,EAAG,MAAO,CAAC,EAAG,GACnB,KAAK,EAAG,MAAO,CAAC,EAAG,IACnB,KAAK,EAGD,OAFA8Q,EAAQC,EAAG9Q,OACX4Q,EAAM,CAAE3P,MAAO4P,GACR,CAAC,EAAG,IACf,KAAK,EAED,OADAC,EAAG7Q,KAAKS,KAAK,CAAC,EAAG,CAAE,EAAG,KAChBiQ,IAAsBA,EAAkBpR,OAASuF,EAAK4L,EAAgBhE,QACrE,CAAC,EAAG5H,EAAGxG,KAAKoS,IAD0E,CAAC,EAAG,GAErG,KAAK,EACDI,EAAG9Q,OACH8Q,EAAG/Q,MAAQ,EACf,KAAK,EAAG,MAAO,CAAC,EAAG,IACnB,KAAK,EACD,GAAI6Q,EAAK,MAAMA,EAAI3P,MACnB,MAAO,CAAC,GACZ,KAAK,GAAI,MAAO,CAAC,GACjB,KAAK,GAED,OADA8B,EAAWT,WACJ,CAAC,WAxCpByO,CAAQN,EAAe1N,GAAYiO,OAAM,SAAU/N,GAAO,OAAOF,EAAW9B,MAAMgC,OCOnF,IAAI6J,EAAc,SAAUxN,GAC/B,GAAMA,GAA+C,mBAA9BA,EAAO,KAC1B,OCXqC2R,EDWR3R,ECXsB,SAAUyD,GACjE,IAAImO,EAAMD,EAAI,OACd,GAA6B,mBAAlBC,EAAInM,UACX,MAAM,IAAIvE,UAAU,kEAGpB,OAAO0Q,EAAInM,UAAUhC,IDOpB,GAAI,OAAAyJ,EAAA,GAAYlN,GACjB,OAAO,OAAA+Q,EAAA,GAAiB/Q,GAEvB,GAAI,OAAAgN,EAAA,GAAUhN,GACf,OEjBkCgF,EFiBRhF,EEjB0B,SAAUyD,GAQlE,OAPAuB,EAAQ9E,MAAK,SAAUP,GACd8D,EAAWM,SACZN,EAAW5D,KAAKF,GAChB8D,EAAWT,eAEhB,SAAUW,GAAO,OAAOF,EAAW9B,MAAMgC,MACvCzD,KAAK,KAAMgL,EAAA,GACTzH,GFWF,GAAMzD,GAA6C,mBAA5BA,EAAO,KAC/B,OGpBmC6R,EHoBR7R,E
GpB2B,SAAUyD,GAEpE,IADA,IAAI1C,EAAW8Q,EAAS,SACrB,CACC,IAAIC,EAAO/Q,EAASlB,OACpB,GAAIiS,EAAK7R,KAAM,CACXwD,EAAWT,WACX,MAGJ,GADAS,EAAW5D,KAAKiS,EAAKnS,OACjB8D,EAAWM,OACX,MAUR,MAP+B,mBAApBhD,EAASqM,QAChB3J,EAAWF,KAAI,WACPxC,EAASqM,QACTrM,EAASqM,YAId3J,GHEF,GAAI3C,QAAUA,OAAOwB,eACpBtC,GAAkD,mBAAjCA,EAAOc,OAAOwB,eACjC,OAAO4O,EAAyBlR,GAGhC,IG3BmC6R,EDAD7M,EDAG2M,ED2BjChS,EAAQ,OAAA6I,EAAA,GAASxI,GAAU,oBAAsB,IAAMA,EAAS,IAGpE,MAAM,IAAIkB,UAFA,gBAAkBvB,EAAlB,+F,iHIblB,IAAIoS,EAAoB,WACpB,SAASA,EAAiB7I,EAAS8I,QACZ,IAAfA,IAAyBA,EAAaC,OAAOC,mBACjD9T,KAAK8K,QAAUA,EACf9K,KAAK4T,WAAaA,EAKtB,OAHAD,EAAiBzT,UAAUU,KAAO,SAAUwH,EAAUtB,GAClD,OAAOA,EAAOO,UAAU,IAAI,EAAmBe,EAAUpI,KAAK8K,QAAS9K,KAAK4T,cAEzED,EATY,GAYnB,EAAsB,SAAUjP,GAEhC,SAASqP,EAAmB7O,EAAa4F,EAAS8I,QAC3B,IAAfA,IAAyBA,EAAaC,OAAOC,mBACjD,IAAIjP,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAO9C,OANA6E,EAAMiG,QAAUA,EAChBjG,EAAM+O,WAAaA,EACnB/O,EAAMmP,cAAe,EACrBnP,EAAMoP,OAAS,GACfpP,EAAMkL,OAAS,EACflL,EAAMiF,MAAQ,EACPjF,EAqDX,OA/DA,YAAUkP,EAAoBrP,GAY9BqP,EAAmB7T,UAAUoF,MAAQ,SAAU/D,GACvCvB,KAAK+P,OAAS/P,KAAK4T,WACnB5T,KAAKkU,SAAS3S,GAGdvB,KAAKiU,OAAOjR,KAAKzB,IAGzBwS,EAAmB7T,UAAUgU,SAAW,SAAU3S,GAC9C,IAAIK,EACAkI,EAAQ9J,KAAK8J,QACjB,IACIlI,EAAS5B,KAAK8K,QAAQvJ,EAAOuI,GAEjC,MAAOvE,GAEH,YADAvF,KAAKkF,YAAY3B,MAAMgC,GAG3BvF,KAAK+P,SACL/P,KAAKmO,UAAUvM,EAAQL,EAAOuI,IAElCiK,EAAmB7T,UAAUiO,UAAY,SAAUgG,EAAK5S,EAAOuI,GAC3D,IAAIwB,EAAkB,IAAIqC,EAAA,EAAgB3N,KAAMuB,EAAOuI,GACnD5E,EAAclF,KAAKkF,YACvBA,EAAYC,IAAImG,GAChB,IAAI8C,EAAoB,OAAAlD,EAAA,GAAkBlL,KAAMmU,OAAK7M,OAAWA,EAAWgE,GACvE8C,IAAsB9C,GACtBpG,EAAYC,IAAIiJ,IAGxB2F,EAAmB7T,UAAUuF,UAAY,WACrCzF,KAAKgU,cAAe,EACA,IAAhBhU,KAAK+P,QAAuC,IAAvB/P,KAAKiU,OAAOtT,QACjCX,KAAKkF,YAAYN,WAErB5E,KAAK0F,eAETqO,EAAmB7T,UAAUsL,WAAa,SAAUJ,EAAYK,EAAYJ,EAAYK,EAAYC,GAChG3L,KAAKkF,YAAYzD,KAAKgK,IAE1BsI,EAAmB7T,UAAU2L,eAAiB,SAAUF,GACpD,IAAIsI,EAASjU,KAAKiU,OAClBjU,KAAK6J,OAAO8B,GACZ3L,KAAK+P,SACDkE,EAAOtT,OAAS,EAChBX,KAAKsF,MAAM2O,EAAO1P,SAEG,IAAhBvE,KAAK+P,QAAgB/P,KAAKgU,cAC/BhU,KAAKkF,YAAYN,YAGlBmP,EAhEc,CAiEvBxI,EAAA,G,QC3FK,SAAS6I,EAASR,GAErB,YADmB,IAAfA,IAAyBA,EAAaC,OAAOC,mBDG9C,SAASO,EAASvJ,EAASiD,EAAgB6F,GAE9C,YADmB,IAAfA,IAAyBA,EAAaC,OAAOC,mBACnB,mBAAnB/F,EACA,SAAUjH,GAAU,OAAOA,EAAO+B,KAAKwL,GAAS,SAAUxQ,EAAGrD,GAAK,OAAO,OAAA2O,EAAA,GAAKrE,EAAQjH,EAAGrD,IAAIqI,KAAK,OAAAS,EAAA,IAAI,SAAU/J,EAAGyO,GAAM,OAAOD,EAAelK,EAAGtE,EAAGiB,EAAGwN,SAAa4F,MAE7I,iBAAnB7F,IACZ6F,EAAa7F,GAEV,SAAUjH,GAAU,OAAOA,EAAOa,KAAK,IAAIgM,EAAiB7I,EAAS8I,MCVrES,CAAS3B,EAAA,EAAUkB,K,6BCJ9B,8FAGO,SAASU,EAAU/F,EAAWkB,GAEjC,YADc,IAAVA,IAAoBA,EAAQ,GACzB,SAAmC3I,GACtC,OAAOA,EAAOa,KAAK,IAAI4M,EAAkBhG,EAAWkB,KAG5D,IAAI8E,EAAqB,WACrB,SAASA,EAAkBhG,EAAWkB,QACpB,IAAVA,IAAoBA,EAAQ,GAChCzP,KAAKuO,UAAYA,EACjBvO,KAAKyP,MAAQA,EAKjB,OAHA8E,EAAkBrU,UAAUU,KAAO,SAAUyE,EAAYyB,GACrD,OAAOA,EAAOO,UAAU,IAAImN,EAAoBnP,EAAYrF,KAAKuO,UAAWvO,KAAKyP,SAE9E8E,EATa,GAYpBC,EAAuB,SAAU9P,GAEjC,SAAS8P,EAAoBtP,EAAaqJ,EAAWkB,QACnC,IAAVA,IAAoBA,EAAQ,GAChC,IAAI5K,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAG9C,OAFA6E,EAAM0J,UAAYA,EAClB1J,EAAM4K,MAAQA,EACP5K,EAsBX,OA5BA,YAAU2P,EAAqB9P,GAQ/B8P,EAAoBvD,SAAW,SAAUwD,GACrC,IAAIC,EAAeD,EAAIC,aAAcxP,EAAcuP,EAAIvP,YACvDwP,EAAarD,QAAQnM,GACrBlF,KAAK0F,eAET8O,EAAoBtU,UAAUyU,gBAAkB,SAAUD,GACpC1U,KAAKkF,YACXC,IAAInF,KAAKuO,UAAUtB,SAASuH,EAAoBvD,SAAUjR,KAAKyP,MAAO,IAAImF,EAAiBF,EAAc1U,KAAKkF,gBAE9HsP,EAAoBtU,UAAUoF,MAAQ,SAAU/D,GAC5CvB,KAAK2U,gBAAgB,IAAalD,WAAWlQ,KAEjDiT,EAAoBtU,UAAUsF,OAAS,SAAUD,GAC7CvF,KAAK2U,gBAAgB,IAAahD,YAAYpM,IAC9CvF,KAAK0F,eAET8O,EAAoBtU,UAAUuF,UAAY,WACtCzF,KAAK2U,gBAAgB,IAAa/C,kBAClC5R,KAAK0F,eAEF8O,EA7Be,CA8BxB,KAEEI,EACA,SAA0BF,EAAcxP,GACpClF,KAAK0U,aAAeA,EA
CpB1U,KAAKkF,YAAcA,I;;;;;;;AClD3B,IAAiD2P,IASxC,WACT,OAAgB,SAAUC,GAEhB,IAAIC,EAAmB,GAGvB,SAASC,EAAoBC,GAG5B,GAAGF,EAAiBE,GACnB,OAAOF,EAAiBE,GAAUC,QAGnC,IAAIC,EAASJ,EAAiBE,GAAY,CACzCzU,EAAGyU,EACHG,GAAG,EACHF,QAAS,IAUV,OANAJ,EAAQG,GAAUrU,KAAKuU,EAAOD,QAASC,EAAQA,EAAOD,QAASF,GAG/DG,EAAOC,GAAI,EAGJD,EAAOD,QA0Df,OArDAF,EAAoB7R,EAAI2R,EAGxBE,EAAoBK,EAAIN,EAGxBC,EAAoB1V,EAAI,SAAS4V,EAASzL,EAAM6L,GAC3CN,EAAoB9R,EAAEgS,EAASzL,IAClCjK,OAAO+V,eAAeL,EAASzL,EAAM,CAAE+L,YAAY,EAAMC,IAAKH,KAKhEN,EAAoB3R,EAAI,SAAS6R,GACX,oBAAXxS,QAA0BA,OAAOgT,aAC1ClW,OAAO+V,eAAeL,EAASxS,OAAOgT,YAAa,CAAEnU,MAAO,WAE7D/B,OAAO+V,eAAeL,EAAS,aAAc,CAAE3T,OAAO,KAQvDyT,EAAoB1U,EAAI,SAASiB,EAAOoU,GAEvC,GADU,EAAPA,IAAUpU,EAAQyT,EAAoBzT,IAC/B,EAAPoU,EAAU,OAAOpU,EACpB,GAAW,EAAPoU,GAA8B,iBAAVpU,GAAsBA,GAASA,EAAMqU,WAAY,OAAOrU,EAChF,IAAIsU,EAAKrW,OAAOW,OAAO,MAGvB,GAFA6U,EAAoB3R,EAAEwS,GACtBrW,OAAO+V,eAAeM,EAAI,UAAW,CAAEL,YAAY,EAAMjU,MAAOA,IACtD,EAAPoU,GAA4B,iBAATpU,EAAmB,IAAI,IAAIgR,KAAOhR,EAAOyT,EAAoB1V,EAAEuW,EAAItD,EAAK,SAASA,GAAO,OAAOhR,EAAMgR,IAAQtM,KAAK,KAAMsM,IAC9I,OAAOsD,GAIRb,EAAoBvU,EAAI,SAAS0U,GAChC,IAAIG,EAASH,GAAUA,EAAOS,WAC7B,WAAwB,OAAOT,EAAgB,SAC/C,WAA8B,OAAOA,GAEtC,OADAH,EAAoB1V,EAAEgW,EAAQ,IAAKA,GAC5BA,GAIRN,EAAoB9R,EAAI,SAAS4S,EAAQC,GAAY,OAAOvW,OAAOU,UAAUL,eAAee,KAAKkV,EAAQC,IAGzGf,EAAoBpV,EAAI,GAIjBoV,EAAoBA,EAAoBzU,EAAI,GAnF7C,CAsFN,CAEJ,SAAU4U,EAAQD,GA4CxBC,EAAOD,QA1CP,SAAgBc,GACZ,IAAIC,EAEJ,GAAyB,WAArBD,EAAQE,SACRF,EAAQG,QAERF,EAAeD,EAAQzU,WAEtB,GAAyB,UAArByU,EAAQE,UAA6C,aAArBF,EAAQE,SAAyB,CACtE,IAAIE,EAAaJ,EAAQK,aAAa,YAEjCD,GACDJ,EAAQM,aAAa,WAAY,IAGrCN,EAAQO,SACRP,EAAQQ,kBAAkB,EAAGR,EAAQzU,MAAMZ,QAEtCyV,GACDJ,EAAQS,gBAAgB,YAG5BR,EAAeD,EAAQzU,UAEtB,CACGyU,EAAQK,aAAa,oBACrBL,EAAQG,QAGZ,IAAIO,EAAYxK,OAAOyK,eACnBC,EAAQC,SAASC,cAErBF,EAAMG,mBAAmBf,GACzBU,EAAUM,kBACVN,EAAUO,SAASL,GAEnBX,EAAeS,EAAUnN,WAG7B,OAAO0M,IAQL,SAAUd,EAAQD,GAExB,SAASgC,KAKTA,EAAEhX,UAAY,CACZiX,GAAI,SAAU1N,EAAM2N,EAAUC,GAC5B,IAAI3V,EAAI1B,KAAK0B,IAAM1B,KAAK0B,EAAI,IAO5B,OALCA,EAAE+H,KAAU/H,EAAE+H,GAAQ,KAAKzG,KAAK,CAC/BuD,GAAI6Q,EACJC,IAAKA,IAGArX,MAGTsX,KAAM,SAAU7N,EAAM2N,EAAUC,GAC9B,IAAIjL,EAAOpM,KACX,SAASuX,IACPnL,EAAKoL,IAAI/N,EAAM8N,GACfH,EAASvW,MAAMwW,EAAK3W,WAItB,OADA6W,EAASnV,EAAIgV,EACNpX,KAAKmX,GAAG1N,EAAM8N,EAAUF,IAGjCI,KAAM,SAAUhO,GAMd,IALA,IAAIiO,EAAO,GAAGjK,MAAM7M,KAAKF,UAAW,GAChCiX,IAAW3X,KAAK0B,IAAM1B,KAAK0B,EAAI,KAAK+H,IAAS,IAAIgE,QACjDjN,EAAI,EACJ0J,EAAMyN,EAAOhX,OAETH,EAAI0J,EAAK1J,IACfmX,EAAOnX,GAAG+F,GAAG1F,MAAM8W,EAAOnX,GAAG6W,IAAKK,GAGpC,OAAO1X,MAGTwX,IAAK,SAAU/N,EAAM2N,GACnB,IAAI1V,EAAI1B,KAAK0B,IAAM1B,KAAK0B,EAAI,IACxBkW,EAAOlW,EAAE+H,GACToO,EAAa,GAEjB,GAAID,GAAQR,EACV,IAAK,IAAI5W,EAAI,EAAG0J,EAAM0N,EAAKjX,OAAQH,EAAI0J,EAAK1J,IACtCoX,EAAKpX,GAAG+F,KAAO6Q,GAAYQ,EAAKpX,GAAG+F,GAAGnE,IAAMgV,GAC9CS,EAAW7U,KAAK4U,EAAKpX,IAY3B,OAJCqX,EAAiB,OACdnW,EAAE+H,GAAQoO,SACHnW,EAAE+H,GAENzJ,OAIXmV,EAAOD,QAAUgC,EACjB/B,EAAOD,QAAQ4C,YAAcZ,GAKvB,SAAU/B,EAAQD,EAASF,GAEjC,IAAI+C,EAAK/C,EAAoB,GACzBnF,EAAWmF,EAAoB,GA6FnCG,EAAOD,QAlFP,SAAgB8C,EAAQC,EAAMb,GAC1B,IAAKY,IAAWC,IAASb,EACrB,MAAM,IAAI3Q,MAAM,8BAGpB,IAAKsR,EAAGG,OAAOD,GACX,MAAM,IAAInV,UAAU,oCAGxB,IAAKiV,EAAGxR,GAAG6Q,GACP,MAAM,IAAItU,UAAU,qCAGxB,GAAIiV,EAAGI,KAAKH,GACR,OAsBR,SAAoBG,EAAMF,EAAMb,GAG5B,OAFAe,EAAKC,iBAAiBH,EAAMb,GAErB,CACHiB,QAAS,WACLF,EAAKG,oBAAoBL,EAAMb,KA3B5BmB,CAAWP,EAAQC,EAAMb,GAE/B,GAAIW,EAAGS,SAASR,GACjB,OAsCR,SAAwBQ,EAAUP,EAAMb,GAKpC,OAJAzX,MAAMO,UAAUuI,QAAQ7H,KAAK4X,GAAU,SAASL,GAC5CA,EAAKC,iBAAiBH,EAAMb,MAGzB,CACHiB,QAAS,WACL1Y,MAAMO,UAAUuI,QAAQ7H,KAAK4X,GAAU,SAASL,GAC5CA,EAAKG,oBAAoBL,EAAMb,QA9ChCqB,CAAeT,EAAQC,EAAMb,GAEnC,GAAIW,EAAGG,OAAOF,GACf,OA0DR,SAAwBU,EAAUT,EAAM
b,GACpC,OAAOvH,EAASgH,SAAS7U,KAAM0W,EAAUT,EAAMb,GA3DpCuB,CAAeX,EAAQC,EAAMb,GAGpC,MAAM,IAAItU,UAAU,+EAgEtB,SAAUqS,EAAQD,GAQxBA,EAAQiD,KAAO,SAAS5W,GACpB,YAAiB+F,IAAV/F,GACAA,aAAiBqX,aACE,IAAnBrX,EAAMsX,UASjB3D,EAAQsD,SAAW,SAASjX,GACxB,IAAI0W,EAAOzY,OAAOU,UAAUqJ,SAAS3I,KAAKW,GAE1C,YAAiB+F,IAAV/F,IACU,sBAAT0W,GAAyC,4BAATA,IAChC,WAAY1W,IACK,IAAjBA,EAAMZ,QAAgBuU,EAAQiD,KAAK5W,EAAM,MASrD2T,EAAQgD,OAAS,SAAS3W,GACtB,MAAwB,iBAAVA,GACPA,aAAiBuX,QAS5B5D,EAAQ3O,GAAK,SAAShF,GAGlB,MAAgB,sBAFL/B,OAAOU,UAAUqJ,SAAS3I,KAAKW,KAQxC,SAAU4T,EAAQD,EAASF,GAEjC,IAAI+D,EAAU/D,EAAoB,GAYlC,SAASgE,EAAUhD,EAAS0C,EAAUT,EAAMb,EAAU6B,GAClD,IAAIC,EAAa3B,EAAS1W,MAAMb,KAAMU,WAItC,OAFAsV,EAAQoC,iBAAiBH,EAAMiB,EAAYD,GAEpC,CACHZ,QAAS,WACLrC,EAAQsC,oBAAoBL,EAAMiB,EAAYD,KAgD1D,SAAS1B,EAASvB,EAAS0C,EAAUT,EAAMb,GACvC,OAAO,SAAS1V,GACZA,EAAEyX,eAAiBJ,EAAQrX,EAAEsW,OAAQU,GAEjChX,EAAEyX,gBACF/B,EAASxW,KAAKoV,EAAStU,IAKnCyT,EAAOD,QA3CP,SAAkBkE,EAAUV,EAAUT,EAAMb,EAAU6B,GAElD,MAAyC,mBAA9BG,EAAShB,iBACTY,EAAUnY,MAAM,KAAMH,WAIb,mBAATuX,EAGAe,EAAU/S,KAAK,KAAM4Q,UAAUhW,MAAM,KAAMH,YAI9B,iBAAb0Y,IACPA,EAAWvC,SAASwC,iBAAiBD,IAIlCzZ,MAAMO,UAAUoJ,IAAI1I,KAAKwY,GAAU,SAAUpD,GAChD,OAAOgD,EAAUhD,EAAS0C,EAAUT,EAAMb,EAAU6B,SA4BtD,SAAU9D,EAAQD,GAOxB,GAAuB,oBAAZoE,UAA4BA,QAAQpZ,UAAUqZ,QAAS,CAC9D,IAAIC,EAAQF,QAAQpZ,UAEpBsZ,EAAMD,QAAUC,EAAMC,iBACND,EAAME,oBACNF,EAAMG,mBACNH,EAAMI,kBACNJ,EAAMK,sBAoB1B1E,EAAOD,QAVP,SAAkBc,EAAS0C,GACvB,KAAO1C,GAvBc,IAuBHA,EAAQ6C,UAAiC,CACvD,GAA+B,mBAApB7C,EAAQuD,SACfvD,EAAQuD,QAAQb,GAClB,OAAO1C,EAETA,EAAUA,EAAQ8D,cASpB,SAAU3E,EAAQ4E,EAAqB/E,GAE7C,aACAA,EAAoB3R,EAAE0W,GAGtB,IAAIC,EAAahF,EAAoB,GACjCiF,EAA8BjF,EAAoBvU,EAAEuZ,GAGpDE,EAA4B,mBAAXxX,QAAoD,iBAApBA,OAAOC,SAAwB,SAAU4Q,GAAO,cAAcA,GAAS,SAAUA,GAAO,OAAOA,GAAyB,mBAAX7Q,QAAyB6Q,EAAItT,cAAgByC,QAAU6Q,IAAQ7Q,OAAOxC,UAAY,gBAAkBqT,GAElQ4G,EAAe,WAAc,SAASC,EAAiBpC,EAAQqC,GAAS,IAAK,IAAI7Z,EAAI,EAAGA,EAAI6Z,EAAM1Z,OAAQH,IAAK,CAAE,IAAI8Z,EAAaD,EAAM7Z,GAAI8Z,EAAW9E,WAAa8E,EAAW9E,aAAc,EAAO8E,EAAWC,cAAe,EAAU,UAAWD,IAAYA,EAAWE,UAAW,GAAMhb,OAAO+V,eAAeyC,EAAQsC,EAAW/H,IAAK+H,IAAiB,OAAO,SAAUG,EAAaC,EAAYC,GAAiJ,OAA9HD,GAAYN,EAAiBK,EAAYva,UAAWwa,GAAiBC,GAAaP,EAAiBK,EAAaE,GAAqBF,GAA7gB,GA8PcG,EAnPM,WAInC,SAASC,EAAgBC,IAb7B,SAAyBC,EAAUN,GAAe,KAAMM,aAAoBN,GAAgB,MAAM,IAAI3X,UAAU,qCAcxGkY,CAAgBhb,KAAM6a,GAEtB7a,KAAKib,eAAeH,GACpB9a,KAAKkb,gBAwOT,OA/NAf,EAAaU,EAAiB,CAAC,CAC3BtI,IAAK,iBACLhR,MAAO,WACH,IAAIuZ,EAAUpa,UAAUC,OAAS,QAAsB2G,IAAjB5G,UAAU,GAAmBA,UAAU,GAAK,GAElFV,KAAKiQ,OAAS6K,EAAQ7K,OACtBjQ,KAAKmb,UAAYL,EAAQK,UACzBnb,KAAKob,QAAUN,EAAQM,QACvBpb,KAAKgY,OAAS8C,EAAQ9C,OACtBhY,KAAKqb,KAAOP,EAAQO,KACpBrb,KAAKsb,QAAUR,EAAQQ,QAEvBtb,KAAKiW,aAAe,KAQzB,CACC1D,IAAK,gBACLhR,MAAO,WACCvB,KAAKqb,KACLrb,KAAKub,aACEvb,KAAKgY,QACZhY,KAAKwb,iBASd,CACCjJ,IAAK,aACLhR,MAAO,WACH,IAAIsD,EAAQ7E,KAERyb,EAAwD,OAAhD5E,SAAS6E,gBAAgBC,aAAa,OAElD3b,KAAK4b,aAEL5b,KAAK6b,oBAAsB,WACvB,OAAOhX,EAAM+W,cAEjB5b,KAAK8b,YAAc9b,KAAKmb,UAAU/C,iBAAiB,QAASpY,KAAK6b,uBAAwB,EAEzF7b,KAAK+b,SAAWlF,SAASmF,cAAc,YAEvChc,KAAK+b,SAASE,MAAMC,SAAW,OAE/Blc,KAAK+b,SAASE,MAAME,OAAS,IAC7Bnc,KAAK+b,SAASE,MAAMG,QAAU,IAC9Bpc,KAAK+b,SAASE,MAAMI,OAAS,IAE7Brc,KAAK+b,SAASE,MAAMK,SAAW,WAC/Btc,KAAK+b,SAASE,MAAMR,EAAQ,QAAU,QAAU,UAEhD,IAAIc,EAAYrQ,OAAOsQ,aAAe3F,SAAS6E,gBAAgBe,UAC/Dzc,KAAK+b,SAASE,MAAMS,IAAMH,EAAY,KAEtCvc,KAAK+b,SAASzF,aAAa,WAAY,IACvCtW,KAAK+b,SAASxa,MAAQvB,KAAKqb,KAE3Brb,KAAKmb,UAAUwB,YAAY3c,KAAK+b,UAEhC/b,KAAKiW,aAAegE,IAAiBja,KAAK+b,UAC1C/b,KAAK4c,aAQV,CACCrK,IAAK,aACLhR,MAAO,WACCvB,KAAK8b,cACL9b,KAAKmb,UAAU7C,oBAAoB,QAAStY,KAAK6b,qBACjD7b,KAAK8b,YAAc,KACnB9b,KAAK6b,oBAAsB,MAG3B7b,KAAK+b,WACL/b,KAAKmb,UAAU0B,YAAY7c,KAAK+b,UAChC/b,KAAK+b,SAAW,QAQ
zB,CACCxJ,IAAK,eACLhR,MAAO,WACHvB,KAAKiW,aAAegE,IAAiBja,KAAKgY,QAC1ChY,KAAK4c,aAOV,CACCrK,IAAK,WACLhR,MAAO,WACH,IAAIub,OAAY,EAEhB,IACIA,EAAYjG,SAASkG,YAAY/c,KAAKiQ,QACxC,MAAO1K,GACLuX,GAAY,EAGhB9c,KAAKgd,aAAaF,KAQvB,CACCvK,IAAK,eACLhR,MAAO,SAAsBub,GACzB9c,KAAKob,QAAQ3D,KAAKqF,EAAY,UAAY,QAAS,CAC/C7M,OAAQjQ,KAAKiQ,OACboL,KAAMrb,KAAKiW,aACXqF,QAAStb,KAAKsb,QACd2B,eAAgBjd,KAAKid,eAAehX,KAAKjG,UAQlD,CACCuS,IAAK,iBACLhR,MAAO,WACCvB,KAAKsb,SACLtb,KAAKsb,QAAQnF,QAEjBU,SAASqG,cAAcC,OACvBjR,OAAOyK,eAAeK,oBAQ3B,CACCzE,IAAK,UAMLhR,MAAO,WACHvB,KAAK4b,eAEV,CACCrJ,IAAK,SACL6K,IAAK,WACD,IAAInN,EAASvP,UAAUC,OAAS,QAAsB2G,IAAjB5G,UAAU,GAAmBA,UAAU,GAAK,OAIjF,GAFAV,KAAKqd,QAAUpN,EAEM,SAAjBjQ,KAAKqd,SAAuC,QAAjBrd,KAAKqd,QAChC,MAAM,IAAI5W,MAAM,uDASxBgP,IAAK,WACD,OAAOzV,KAAKqd,UASjB,CACC9K,IAAK,SACL6K,IAAK,SAAapF,GACd,QAAe1Q,IAAX0Q,EAAsB,CACtB,IAAIA,GAA8E,iBAAjD,IAAXA,EAAyB,YAAckC,EAAQlC,KAA6C,IAApBA,EAAOa,SAWjG,MAAM,IAAIpS,MAAM,+CAVhB,GAAoB,SAAhBzG,KAAKiQ,QAAqB+H,EAAO3B,aAAa,YAC9C,MAAM,IAAI5P,MAAM,qFAGpB,GAAoB,QAAhBzG,KAAKiQ,SAAqB+H,EAAO3B,aAAa,aAAe2B,EAAO3B,aAAa,aACjF,MAAM,IAAI5P,MAAM,0GAGpBzG,KAAKsd,QAAUtF,IAY3BvC,IAAK,WACD,OAAOzV,KAAKsd,YAIbzC,EAhP4B,GAqPnC0C,EAAevI,EAAoB,GACnCwI,EAAoCxI,EAAoBvU,EAAE8c,GAG1DE,EAASzI,EAAoB,GAC7B0I,EAA8B1I,EAAoBvU,EAAEgd,GAGpDE,EAAqC,mBAAXjb,QAAoD,iBAApBA,OAAOC,SAAwB,SAAU4Q,GAAO,cAAcA,GAAS,SAAUA,GAAO,OAAOA,GAAyB,mBAAX7Q,QAAyB6Q,EAAItT,cAAgByC,QAAU6Q,IAAQ7Q,OAAOxC,UAAY,gBAAkBqT,GAE3QqK,EAAwB,WAAc,SAASxD,EAAiBpC,EAAQqC,GAAS,IAAK,IAAI7Z,EAAI,EAAGA,EAAI6Z,EAAM1Z,OAAQH,IAAK,CAAE,IAAI8Z,EAAaD,EAAM7Z,GAAI8Z,EAAW9E,WAAa8E,EAAW9E,aAAc,EAAO8E,EAAWC,cAAe,EAAU,UAAWD,IAAYA,EAAWE,UAAW,GAAMhb,OAAO+V,eAAeyC,EAAQsC,EAAW/H,IAAK+H,IAAiB,OAAO,SAAUG,EAAaC,EAAYC,GAAiJ,OAA9HD,GAAYN,EAAiBK,EAAYva,UAAWwa,GAAiBC,GAAaP,EAAiBK,EAAaE,GAAqBF,GAA7gB,GAiBxBoD,EAAsB,SAAUC,GAOhC,SAASC,EAAUzC,EAASR,IAtBhC,SAAkCC,EAAUN,GAAe,KAAMM,aAAoBN,GAAgB,MAAM,IAAI3X,UAAU,qCAuBjHkb,CAAyBhe,KAAM+d,GAE/B,IAAIlZ,EAvBZ,SAAoCuH,EAAMxL,GAAQ,IAAKwL,EAAQ,MAAM,IAAI6R,eAAe,6DAAgE,OAAOrd,GAAyB,iBAATA,GAAqC,mBAATA,EAA8BwL,EAAPxL,EAuB9Msd,CAA2Ble,MAAO+d,EAAUre,WAAaF,OAAO2e,eAAeJ,IAAYnd,KAAKZ,OAI5G,OAFA6E,EAAMoW,eAAeH,GACrBjW,EAAMuZ,YAAY9C,GACXzW,EAsIX,OA/JJ,SAAmBwZ,EAAUC,GAAc,GAA0B,mBAAfA,GAA4C,OAAfA,EAAuB,MAAM,IAAIxb,UAAU,kEAAoEwb,GAAeD,EAASne,UAAYV,OAAOW,OAAOme,GAAcA,EAAWpe,UAAW,CAAED,YAAa,CAAEsB,MAAO8c,EAAU7I,YAAY,EAAOgF,UAAU,EAAMD,cAAc,KAAe+D,IAAY9e,OAAOC,eAAiBD,OAAOC,eAAe4e,EAAUC,GAAcD,EAAS3e,UAAY4e,GAY7dC,CAAUR,EAAWD,GAuBrBF,EAAsBG,EAAW,CAAC,CAC9BxL,IAAK,iBACLhR,MAAO,WACH,IAAIuZ,EAAUpa,UAAUC,OAAS,QAAsB2G,IAAjB5G,UAAU,GAAmBA,UAAU,GAAK,GAElFV,KAAKiQ,OAAmC,mBAAnB6K,EAAQ7K,OAAwB6K,EAAQ7K,OAASjQ,KAAKwe,cAC3Exe,KAAKgY,OAAmC,mBAAnB8C,EAAQ9C,OAAwB8C,EAAQ9C,OAAShY,KAAKye,cAC3Eze,KAAKqb,KAA+B,mBAAjBP,EAAQO,KAAsBP,EAAQO,KAAOrb,KAAK0e,YACrE1e,KAAKmb,UAAoD,WAAxCwC,EAAiB7C,EAAQK,WAA0BL,EAAQK,UAAYtE,SAAS7U,OAQtG,CACCuQ,IAAK,cACLhR,MAAO,SAAqB+Z,GACxB,IAAIqD,EAAS3e,KAEbA,KAAKuX,SAAWmG,IAAiBpC,EAAS,SAAS,SAAU5Z,GACzD,OAAOid,EAAOC,QAAQld,QAS/B,CACC6Q,IAAK,UACLhR,MAAO,SAAiBG,GACpB,IAAI4Z,EAAU5Z,EAAEyX,gBAAkBzX,EAAEmd,cAEhC7e,KAAK8e,kBACL9e,KAAK8e,gBAAkB,MAG3B9e,KAAK8e,gBAAkB,IAAIlE,EAAiB,CACxC3K,OAAQjQ,KAAKiQ,OAAOqL,GACpBtD,OAAQhY,KAAKgY,OAAOsD,GACpBD,KAAMrb,KAAKqb,KAAKC,GAChBH,UAAWnb,KAAKmb,UAChBG,QAASA,EACTF,QAASpb,SASlB,CACCuS,IAAK,gBACLhR,MAAO,SAAuB+Z,GAC1B,OAAOyD,EAAkB,SAAUzD,KAQxC,CACC/I,IAAK,gBACLhR,MAAO,SAAuB+Z,GAC1B,IAAI5C,EAAWqG,EAAkB,SAAUzD,GAE3C,GAAI5C,EACA,OAAO7B,SAASmI,cAActG,KAUvC,CACCnG,IAAK,cAOLhR,MAAO,SAAqB+Z,GACxB,OAAOyD,EAAkB,OAAQzD,KAOtC,CACC/I,IAAK,UACLhR,MAAO,WACHvB,KAAKuX,SAASc,UAEVrY,KAAK8e,kBACL9e,KAAK8e,gBAAgBzG,UACrBrY,KAAK8e,
gBAAkB,SAG/B,CAAC,CACDvM,IAAK,cACLhR,MAAO,WACH,IAAI0O,EAASvP,UAAUC,OAAS,QAAsB2G,IAAjB5G,UAAU,GAAmBA,UAAU,GAAK,CAAC,OAAQ,OAEtFoP,EAA4B,iBAAXG,EAAsB,CAACA,GAAUA,EAClDgP,IAAYpI,SAASqI,sBAMzB,OAJApP,EAAQrH,SAAQ,SAAUwH,GACtBgP,EAAUA,KAAapI,SAASqI,sBAAsBjP,MAGnDgP,MAIRlB,EApJe,CAqJxBP,EAAqB3Z,GASvB,SAASkb,EAAkBI,EAAQnJ,GAC/B,IAAIoJ,EAAY,kBAAoBD,EAEpC,GAAKnJ,EAAQK,aAAa+I,GAI1B,OAAOpJ,EAAQ2F,aAAayD,GAGarF,EAA6B,QAAI,KAGzD,SAn8BnB5E,EAAOD,QAAUL,K,6BCRnB,qFAMIwK,EAAO,GACJ,SAASC,IAEZ,IADA,IAAIC,EAAc,GACTxW,EAAK,EAAGA,EAAKrI,UAAUC,OAAQoI,IACpCwW,EAAYxW,GAAMrI,UAAUqI,GAEhC,IAAIgF,OAAiBzG,EACjBiH,OAAYjH,EAUhB,OATI,YAAYiY,EAAYA,EAAY5e,OAAS,MAC7C4N,EAAYgR,EAAYxc,OAEuB,mBAAxCwc,EAAYA,EAAY5e,OAAS,KACxCoN,EAAiBwR,EAAYxc,OAEN,IAAvBwc,EAAY5e,QAAgB,YAAQ4e,EAAY,MAChDA,EAAcA,EAAY,IAEvB,YAAUA,EAAahR,GAAW5G,KAAK,IAAI6X,EAAsBzR,IAE5E,IAAIyR,EAAyB,WACzB,SAASA,EAAsBzR,GAC3B/N,KAAK+N,eAAiBA,EAK1B,OAHAyR,EAAsBtf,UAAUU,KAAO,SAAUyE,EAAYyB,GACzD,OAAOA,EAAOO,UAAU,IAAIoY,EAAwBpa,EAAYrF,KAAK+N,kBAElEyR,EAPiB,GAUxBC,EAA2B,SAAU/a,GAErC,SAAS+a,EAAwBva,EAAa6I,GAC1C,IAAIlJ,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAK9C,OAJA6E,EAAMkJ,eAAiBA,EACvBlJ,EAAMkL,OAAS,EACflL,EAAMmC,OAAS,GACfnC,EAAM0a,YAAc,GACb1a,EAqDX,OA5DA,YAAU4a,EAAyB/a,GASnC+a,EAAwBvf,UAAUoF,MAAQ,SAAUuC,GAChD7H,KAAKgH,OAAOhE,KAAKqc,GACjBrf,KAAKuf,YAAYvc,KAAK6E,IAE1B4X,EAAwBvf,UAAUuF,UAAY,WAC1C,IAAI8Z,EAAcvf,KAAKuf,YACnBrV,EAAMqV,EAAY5e,OACtB,GAAY,IAARuJ,EACAlK,KAAKkF,YAAYN,eAEhB,CACD5E,KAAK+P,OAAS7F,EACdlK,KAAK0f,UAAYxV,EACjB,IAAK,IAAI1J,EAAI,EAAGA,EAAI0J,EAAK1J,IAAK,CAC1B,IAAIqH,EAAa0X,EAAY/e,GAC7BR,KAAKmF,IAAI,YAAkBnF,KAAM6H,EAAYA,EAAYrH,OAIrEif,EAAwBvf,UAAU2L,eAAiB,SAAU8T,GAC9B,IAAtB3f,KAAK+P,QAAU,IAChB/P,KAAKkF,YAAYN,YAGzB6a,EAAwBvf,UAAUsL,WAAa,SAAUJ,EAAYK,EAAYJ,EAAYK,EAAYC,GACrG,IAAI3E,EAAShH,KAAKgH,OACd4Y,EAAS5Y,EAAOqE,GAChBqU,EAAa1f,KAAK0f,UAEhBE,IAAWP,IAASrf,KAAK0f,UAAY1f,KAAK0f,UAD1C,EAEN1Y,EAAOqE,GAAcI,EACH,IAAdiU,IACI1f,KAAK+N,eACL/N,KAAK6f,mBAAmB7Y,GAGxBhH,KAAKkF,YAAYzD,KAAKuF,EAAOyG,WAIzCgS,EAAwBvf,UAAU2f,mBAAqB,SAAU7Y,GAC7D,IAAIpF,EACJ,IACIA,EAAS5B,KAAK+N,eAAelN,MAAMb,KAAMgH,GAE7C,MAAOzB,GAEH,YADAvF,KAAKkF,YAAY3B,MAAMgC,GAG3BvF,KAAKkF,YAAYzD,KAAKG,IAEnB6d,EA7DmB,CA8D5B,M,cCjGF,IAAItd,EAGJA,EAAI,WACH,OAAOnC,KADJ,GAIJ,IAECmC,EAAIA,GAAK,IAAI2d,SAAS,cAAb,GACR,MAAOpe,GAEc,iBAAXwK,SAAqB/J,EAAI+J,QAOrCiJ,EAAOD,QAAU/S,G,2CCnBjB,YAOA,IAAI4d,EAAU,WACV,GAAmB,oBAARC,IACP,OAAOA,IASX,SAASC,EAASC,EAAK3N,GACnB,IAAI3Q,GAAU,EAQd,OAPAse,EAAIC,MAAK,SAAUC,EAAOtW,GACtB,OAAIsW,EAAM,KAAO7N,IACb3Q,EAASkI,GACF,MAIRlI,EAEX,OAAsB,WAClB,SAASye,IACLrgB,KAAKsgB,YAAc,GAuEvB,OArEA9gB,OAAO+V,eAAe8K,EAAQngB,UAAW,OAAQ,CAI7CuV,IAAK,WACD,OAAOzV,KAAKsgB,YAAY3f,QAE5B6U,YAAY,EACZ+E,cAAc,IAMlB8F,EAAQngB,UAAUuV,IAAM,SAAUlD,GAC9B,IAAIzI,EAAQmW,EAASjgB,KAAKsgB,YAAa/N,GACnC6N,EAAQpgB,KAAKsgB,YAAYxW,GAC7B,OAAOsW,GAASA,EAAM,IAO1BC,EAAQngB,UAAUkd,IAAM,SAAU7K,EAAKhR,GACnC,IAAIuI,EAAQmW,EAASjgB,KAAKsgB,YAAa/N,IAClCzI,EACD9J,KAAKsgB,YAAYxW,GAAO,GAAKvI,EAG7BvB,KAAKsgB,YAAYtd,KAAK,CAACuP,EAAKhR,KAOpC8e,EAAQngB,UAAUqgB,OAAS,SAAUhO,GACjC,IAAIiO,EAAUxgB,KAAKsgB,YACfxW,EAAQmW,EAASO,EAASjO,IACzBzI,GACD0W,EAAQ7V,OAAOb,EAAO,IAO9BuW,EAAQngB,UAAUugB,IAAM,SAAUlO,GAC9B,SAAU0N,EAASjgB,KAAKsgB,YAAa/N,IAKzC8N,EAAQngB,UAAUwgB,MAAQ,WACtB1gB,KAAKsgB,YAAY3V,OAAO,IAO5B0V,EAAQngB,UAAUuI,QAAU,SAAU2O,EAAUC,QAChC,IAARA,IAAkBA,EAAM,MAC5B,IAAK,IAAItO,EAAK,EAAG3B,EAAKpH,KAAKsgB,YAAavX,EAAK3B,EAAGzG,OAAQoI,IAAM,CAC1D,IAAIqX,EAAQhZ,EAAG2B,GACfqO,EAASxW,KAAKyW,EAAK+I,EAAM,GAAIA,EAAM,MAGpCC,EAzEU,GAtBX,GAsGVM,EAA8B,oBAAXzU,QAA8C,oBAAb2K,UAA4B3K,OAAO2K,WAAaA,SAGpG+J,OACsB,IAAXrU,GAA0BA,EAAOqB,OAASA,KAC1CrB,EAES,oBAATH,MAAwBA,KAAKwB,OAASA,KACtCxB,K
AEW,oBAAXF,QAA0BA,OAAO0B,OAASA,KAC1C1B,OAGJ4T,SAAS,cAATA,GASPe,EACqC,mBAA1BC,sBAIAA,sBAAsB7a,KAAK2a,GAE/B,SAAUxJ,GAAY,OAAOrK,YAAW,WAAc,OAAOqK,EAASzH,KAAKJ,SAAW,IAAO,KAqExG,IAGIwR,EAAiB,CAAC,MAAO,QAAS,SAAU,OAAQ,QAAS,SAAU,OAAQ,UAE/EC,EAAwD,oBAArBC,iBAInCC,EAA0C,WAM1C,SAASA,IAMLlhB,KAAKmhB,YAAa,EAMlBnhB,KAAKohB,sBAAuB,EAM5BphB,KAAKqhB,mBAAqB,KAM1BrhB,KAAKshB,WAAa,GAClBthB,KAAKuhB,iBAAmBvhB,KAAKuhB,iBAAiBtb,KAAKjG,MACnDA,KAAKwhB,QAjGb,SAAmBpK,EAAU3H,GACzB,IAAIgS,GAAc,EAAOC,GAAe,EAAOC,EAAe,EAO9D,SAASC,IACDH,IACAA,GAAc,EACdrK,KAEAsK,GACAG,IAUR,SAASC,IACLjB,EAAwBe,GAO5B,SAASC,IACL,IAAIE,EAAYpS,KAAKJ,MACrB,GAAIkS,EAAa,CAEb,GAAIM,EAAYJ,EA7CN,EA8CN,OAMJD,GAAe,OAGfD,GAAc,EACdC,GAAe,EACf3U,WAAW+U,EAAiBrS,GAEhCkS,EAAeI,EAEnB,OAAOF,EA6CYG,CAAShiB,KAAKwhB,QAAQvb,KAAKjG,MAzC9B,IAyMhB,OAxJAkhB,EAAyBhhB,UAAU+hB,YAAc,SAAU7Z,IACjDpI,KAAKshB,WAAW9W,QAAQpC,IAC1BpI,KAAKshB,WAAWte,KAAKoF,GAGpBpI,KAAKmhB,YACNnhB,KAAKkiB,YASbhB,EAAyBhhB,UAAUiiB,eAAiB,SAAU/Z,GAC1D,IAAIgF,EAAYpN,KAAKshB,WACjBxX,EAAQsD,EAAU5C,QAAQpC,IAEzB0B,GACDsD,EAAUzC,OAAOb,EAAO,IAGvBsD,EAAUzM,QAAUX,KAAKmhB,YAC1BnhB,KAAKoiB,eASblB,EAAyBhhB,UAAUshB,QAAU,WACnBxhB,KAAKqiB,oBAIvBriB,KAAKwhB,WAWbN,EAAyBhhB,UAAUmiB,iBAAmB,WAElD,IAAIC,EAAkBtiB,KAAKshB,WAAWiB,QAAO,SAAUna,GACnD,OAAOA,EAASoa,eAAgBpa,EAASqa,eAQ7C,OADAH,EAAgB7Z,SAAQ,SAAUL,GAAY,OAAOA,EAASsa,qBACvDJ,EAAgB3hB,OAAS,GAQpCugB,EAAyBhhB,UAAUgiB,SAAW,WAGrCvB,IAAa3gB,KAAKmhB,aAMvBtK,SAASuB,iBAAiB,gBAAiBpY,KAAKuhB,kBAChDrV,OAAOkM,iBAAiB,SAAUpY,KAAKwhB,SACnCR,GACAhhB,KAAKqhB,mBAAqB,IAAIJ,iBAAiBjhB,KAAKwhB,SACpDxhB,KAAKqhB,mBAAmBhQ,QAAQwF,SAAU,CACtC8L,YAAY,EACZC,WAAW,EACXC,eAAe,EACfC,SAAS,MAIbjM,SAASuB,iBAAiB,qBAAsBpY,KAAKwhB,SACrDxhB,KAAKohB,sBAAuB,GAEhCphB,KAAKmhB,YAAa,IAQtBD,EAAyBhhB,UAAUkiB,YAAc,WAGxCzB,GAAc3gB,KAAKmhB,aAGxBtK,SAASyB,oBAAoB,gBAAiBtY,KAAKuhB,kBACnDrV,OAAOoM,oBAAoB,SAAUtY,KAAKwhB,SACtCxhB,KAAKqhB,oBACLrhB,KAAKqhB,mBAAmB0B,aAExB/iB,KAAKohB,sBACLvK,SAASyB,oBAAoB,qBAAsBtY,KAAKwhB,SAE5DxhB,KAAKqhB,mBAAqB,KAC1BrhB,KAAKohB,sBAAuB,EAC5BphB,KAAKmhB,YAAa,IAStBD,EAAyBhhB,UAAUqhB,iBAAmB,SAAUna,GAC5D,IAAIgM,EAAKhM,EAAG4b,aAAcA,OAAsB,IAAP5P,EAAgB,GAAKA,EAEvC2N,EAAeZ,MAAK,SAAU5N,GACjD,SAAUyQ,EAAaxY,QAAQ+H,OAG/BvS,KAAKwhB,WAQbN,EAAyB+B,YAAc,WAInC,OAHKjjB,KAAKkjB,YACNljB,KAAKkjB,UAAY,IAAIhC,GAElBlhB,KAAKkjB,WAOhBhC,EAAyBgC,UAAY,KAC9BhC,EAhMkC,GA0MzCiC,EAAqB,SAAWnL,EAAQqC,GACxC,IAAK,IAAItR,EAAK,EAAG3B,EAAK5H,OAAO4jB,KAAK/I,GAAQtR,EAAK3B,EAAGzG,OAAQoI,IAAM,CAC5D,IAAIwJ,EAAMnL,EAAG2B,GACbvJ,OAAO+V,eAAeyC,EAAQzF,EAAK,CAC/BhR,MAAO8Y,EAAM9H,GACbiD,YAAY,EACZgF,UAAU,EACVD,cAAc,IAGtB,OAAOvC,GASPqL,EAAc,SAAWrL,GAOzB,OAHkBA,GAAUA,EAAOsL,eAAiBtL,EAAOsL,cAAcC,aAGnD3C,GAItB4C,EAAYC,EAAe,EAAG,EAAG,EAAG,GAOxC,SAASC,EAAQniB,GACb,OAAOoiB,WAAWpiB,IAAU,EAShC,SAASqiB,EAAeC,GAEpB,IADA,IAAIC,EAAY,GACP/a,EAAK,EAAGA,EAAKrI,UAAUC,OAAQoI,IACpC+a,EAAU/a,EAAK,GAAKrI,UAAUqI,GAElC,OAAO+a,EAAUlZ,QAAO,SAAUmZ,EAAMzH,GAEpC,OAAOyH,EAAOL,EADFG,EAAO,UAAYvH,EAAW,aAE3C,GAmCP,SAAS0H,EAA0BhM,GAG/B,IAAIiM,EAAcjM,EAAOiM,YAAaC,EAAelM,EAAOkM,aAS5D,IAAKD,IAAgBC,EACjB,OAAOV,EAEX,IAAIK,EAASR,EAAYrL,GAAQmM,iBAAiBnM,GAC9CoM,EA3CR,SAAqBP,GAGjB,IAFA,IACIO,EAAW,GACNrb,EAAK,EAAGsb,EAFD,CAAC,MAAO,QAAS,SAAU,QAEDtb,EAAKsb,EAAY1jB,OAAQoI,IAAM,CACrE,IAAIuT,EAAW+H,EAAYtb,GACvBxH,EAAQsiB,EAAO,WAAavH,GAChC8H,EAAS9H,GAAYoH,EAAQniB,GAEjC,OAAO6iB,EAmCQE,CAAYT,GACvBU,EAAWH,EAASI,KAAOJ,EAASK,MACpCC,EAAUN,EAAS1H,IAAM0H,EAASO,OAKlCC,EAAQlB,EAAQG,EAAOe,OAAQC,EAASnB,EAAQG,EAAOgB,QAqB3D,GAlByB,eAArBhB,EAAOiB,YAOHlX,KAAKmX,MAAMH,EAAQL,KAAcN,IACjCW,GAAShB,EAAeC,EAAQ,OAAQ,SAAWU,GAEnD3W,KAAKmX,MAAMF,EAASH,KAAaR,IACjCW,GAAUjB,EAAeC,EAAQ,MAAO,UAAYa,KAoDhE,SAA2B1M,GACvB,OAAOA,IAAWqL
,EAAYrL,GAAQnB,SAAS6E,gBA9C1CsJ,CAAkBhN,GAAS,CAK5B,IAAIiN,EAAgBrX,KAAKmX,MAAMH,EAAQL,GAAYN,EAC/CiB,EAAiBtX,KAAKmX,MAAMF,EAASH,GAAWR,EAMpB,IAA5BtW,KAAKuX,IAAIF,KACTL,GAASK,GAEoB,IAA7BrX,KAAKuX,IAAID,KACTL,GAAUK,GAGlB,OAAOzB,EAAeW,EAASI,KAAMJ,EAAS1H,IAAKkI,EAAOC,GAQ9D,IAAIO,EAGkC,oBAAvBC,mBACA,SAAUrN,GAAU,OAAOA,aAAkBqL,EAAYrL,GAAQqN,oBAKrE,SAAUrN,GAAU,OAAQA,aAAkBqL,EAAYrL,GAAQsN,YAC3C,mBAAnBtN,EAAOuN,SAiBtB,SAASC,EAAexN,GACpB,OAAK2I,EAGDyE,EAAqBpN,GAhH7B,SAA2BA,GACvB,IAAIyN,EAAOzN,EAAOuN,UAClB,OAAO9B,EAAe,EAAG,EAAGgC,EAAKb,MAAOa,EAAKZ,QA+GlCa,CAAkB1N,GAEtBgM,EAA0BhM,GALtBwL,EAuCf,SAASC,EAAexa,EAAG/G,EAAG0iB,EAAOC,GACjC,MAAO,CAAE5b,EAAGA,EAAG/G,EAAGA,EAAG0iB,MAAOA,EAAOC,OAAQA,GAO/C,IAAIc,EAAmC,WAMnC,SAASA,EAAkB3N,GAMvBhY,KAAK4lB,eAAiB,EAMtB5lB,KAAK6lB,gBAAkB,EAMvB7lB,KAAK8lB,aAAerC,EAAe,EAAG,EAAG,EAAG,GAC5CzjB,KAAKgY,OAASA,EA0BlB,OAlBA2N,EAAkBzlB,UAAU6lB,SAAW,WACnC,IAAIC,EAAOR,EAAexlB,KAAKgY,QAE/B,OADAhY,KAAK8lB,aAAeE,EACZA,EAAKpB,QAAU5kB,KAAK4lB,gBACxBI,EAAKnB,SAAW7kB,KAAK6lB,iBAQ7BF,EAAkBzlB,UAAU+lB,cAAgB,WACxC,IAAID,EAAOhmB,KAAK8lB,aAGhB,OAFA9lB,KAAK4lB,eAAiBI,EAAKpB,MAC3B5kB,KAAK6lB,gBAAkBG,EAAKnB,OACrBmB,GAEJL,EAnD2B,GAsDlCO,EAOA,SAA6BlO,EAAQmO,GACjC,IA/FoB/e,EACpB6B,EAAU/G,EAAU0iB,EAAkBC,EAEtCuB,EACAJ,EA2FIK,GA9FJpd,GADoB7B,EA+FiB+e,GA9F9Bld,EAAG/G,EAAIkF,EAAGlF,EAAG0iB,EAAQxd,EAAGwd,MAAOC,EAASzd,EAAGyd,OAElDuB,EAAoC,oBAApBE,gBAAkCA,gBAAkB9mB,OACpEwmB,EAAOxmB,OAAOW,OAAOimB,EAAOlmB,WAEhCijB,EAAmB6C,EAAM,CACrB/c,EAAGA,EAAG/G,EAAGA,EAAG0iB,MAAOA,EAAOC,OAAQA,EAClCnI,IAAKxa,EACLuiB,MAAOxb,EAAI2b,EACXD,OAAQE,EAAS3iB,EACjBsiB,KAAMvb,IAEH+c,GAyFH7C,EAAmBnjB,KAAM,CAAEgY,OAAQA,EAAQqO,YAAaA,KAK5DE,EAAmC,WAWnC,SAASA,EAAkBnP,EAAUoP,EAAYC,GAc7C,GAPAzmB,KAAK0mB,oBAAsB,GAM3B1mB,KAAK2mB,cAAgB,IAAI5G,EACD,mBAAb3I,EACP,MAAM,IAAItU,UAAU,2DAExB9C,KAAK4mB,UAAYxP,EACjBpX,KAAK6mB,YAAcL,EACnBxmB,KAAK8mB,aAAeL,EAoHxB,OA5GAF,EAAkBrmB,UAAUmR,QAAU,SAAU2G,GAC5C,IAAKtX,UAAUC,OACX,MAAM,IAAImC,UAAU,4CAGxB,GAAuB,oBAAZwW,SAA6BA,mBAAmB9Z,OAA3D,CAGA,KAAMwY,aAAkBqL,EAAYrL,GAAQsB,SACxC,MAAM,IAAIxW,UAAU,yCAExB,IAAIikB,EAAe/mB,KAAK2mB,cAEpBI,EAAatG,IAAIzI,KAGrB+O,EAAa3J,IAAIpF,EAAQ,IAAI2N,EAAkB3N,IAC/ChY,KAAK6mB,YAAY5E,YAAYjiB,MAE7BA,KAAK6mB,YAAYrF,aAQrB+E,EAAkBrmB,UAAU8mB,UAAY,SAAUhP,GAC9C,IAAKtX,UAAUC,OACX,MAAM,IAAImC,UAAU,4CAGxB,GAAuB,oBAAZwW,SAA6BA,mBAAmB9Z,OAA3D,CAGA,KAAMwY,aAAkBqL,EAAYrL,GAAQsB,SACxC,MAAM,IAAIxW,UAAU,yCAExB,IAAIikB,EAAe/mB,KAAK2mB,cAEnBI,EAAatG,IAAIzI,KAGtB+O,EAAaxG,OAAOvI,GACf+O,EAAahD,MACd/jB,KAAK6mB,YAAY1E,eAAeniB,SAQxCumB,EAAkBrmB,UAAU6iB,WAAa,WACrC/iB,KAAKinB,cACLjnB,KAAK2mB,cAAcjG,QACnB1gB,KAAK6mB,YAAY1E,eAAeniB,OAQpCumB,EAAkBrmB,UAAUsiB,aAAe,WACvC,IAAI3d,EAAQ7E,KACZA,KAAKinB,cACLjnB,KAAK2mB,cAAcle,SAAQ,SAAUye,GAC7BA,EAAYnB,YACZlhB,EAAM6hB,oBAAoB1jB,KAAKkkB,OAU3CX,EAAkBrmB,UAAUwiB,gBAAkB,WAE1C,GAAK1iB,KAAKyiB,YAAV,CAGA,IAAIpL,EAAMrX,KAAK8mB,aAEXtG,EAAUxgB,KAAK0mB,oBAAoBpd,KAAI,SAAU4d,GACjD,OAAO,IAAIhB,EAAoBgB,EAAYlP,OAAQkP,EAAYjB,oBAEnEjmB,KAAK4mB,UAAUhmB,KAAKyW,EAAKmJ,EAASnJ,GAClCrX,KAAKinB,gBAOTV,EAAkBrmB,UAAU+mB,YAAc,WACtCjnB,KAAK0mB,oBAAoB/b,OAAO,IAOpC4b,EAAkBrmB,UAAUuiB,UAAY,WACpC,OAAOziB,KAAK0mB,oBAAoB/lB,OAAS,GAEtC4lB,EAlJ2B,GAwJlCnZ,EAA+B,oBAAZ+Z,QAA0B,IAAIA,QAAY,IAAIpH,EAKjEqH,EAOA,SAASA,EAAehQ,GACpB,KAAMpX,gBAAgBonB,GAClB,MAAM,IAAItkB,UAAU,sCAExB,IAAKpC,UAAUC,OACX,MAAM,IAAImC,UAAU,4CAExB,IAAI0jB,EAAatF,EAAyB+B,cACtC7a,EAAW,IAAIme,EAAkBnP,EAAUoP,EAAYxmB,MAC3DoN,EAAUgQ,IAAIpd,KAAMoI,IAK5B,CACI,UACA,YACA,cACFK,SAAQ,SAAU4e,GAChBD,EAAelnB,UAAUmnB,GAAU,WAC/B,IAAIjgB,EACJ,OAAQA,EAAKgG,EAAUqI,IAAIzV,OAAOqnB,GAAQxmB,MAAMuG,EAAI1G,eAI5D,IAAIoJ,OAEuC,IAA5B8W,EAASwG,eACTxG,EAASwG,eAEbA,EAGI,Q;;;;;;;GC
h5Bf,IAAIE,EAAkB,UAOtBnS,EAAOD,QAUP,SAAoBgD,GAClB,IAOIqP,EAPAC,EAAM,GAAKtP,EACXuP,EAAQH,EAAgBI,KAAKF,GAEjC,IAAKC,EACH,OAAOD,EAIT,IAAIG,EAAO,GACP7d,EAAQ,EACR8d,EAAY,EAEhB,IAAK9d,EAAQ2d,EAAM3d,MAAOA,EAAQ0d,EAAI7mB,OAAQmJ,IAAS,CACrD,OAAQ0d,EAAIK,WAAW/d,IACrB,KAAK,GACHyd,EAAS,SACT,MACF,KAAK,GACHA,EAAS,QACT,MACF,KAAK,GACHA,EAAS,QACT,MACF,KAAK,GACHA,EAAS,OACT,MACF,KAAK,GACHA,EAAS,OACT,MACF,QACE,SAGAK,IAAc9d,IAChB6d,GAAQH,EAAIM,UAAUF,EAAW9d,IAGnC8d,EAAY9d,EAAQ,EACpB6d,GAAQJ,EAGV,OAAOK,IAAc9d,EACjB6d,EAAOH,EAAIM,UAAUF,EAAW9d,GAChC6d,I,6BC5EN,6DAGO,SAASI,EAAMC,GAClB,OAAO,IAAI,KAAW,SAAU3iB,GAC5B,IAAIiJ,EACJ,IACIA,EAAQ0Z,IAEZ,MAAOziB,GAEH,YADAF,EAAW9B,MAAMgC,GAIrB,OADa+I,EAAQ,YAAKA,GAAS,KACrBjH,UAAUhC,Q,kFCZ5B,EAAe,SAAUX,GAEzB,SAASujB,EAAY1Z,EAAWiB,GAC5B,IAAI3K,EAAQH,EAAO9D,KAAKZ,KAAMuO,EAAWiB,IAASxP,KAGlD,OAFA6E,EAAM0J,UAAYA,EAClB1J,EAAM2K,KAAOA,EACN3K,EAwBX,OA7BA,YAAUojB,EAAavjB,GAOvBujB,EAAY/nB,UAAU+M,SAAW,SAAUyC,EAAOD,GAE9C,YADc,IAAVA,IAAoBA,EAAQ,GAC5BA,EAAQ,EACD/K,EAAOxE,UAAU+M,SAASrM,KAAKZ,KAAM0P,EAAOD,IAEvDzP,KAAKyP,MAAQA,EACbzP,KAAK0P,MAAQA,EACb1P,KAAKuO,UAAUyB,MAAMhQ,MACdA,OAEXioB,EAAY/nB,UAAUgQ,QAAU,SAAUR,EAAOD,GAC7C,OAAQA,EAAQ,GAAKzP,KAAK2F,OACtBjB,EAAOxE,UAAUgQ,QAAQtP,KAAKZ,KAAM0P,EAAOD,GAC3CzP,KAAK0Q,SAAShB,EAAOD,IAE7BwY,EAAY/nB,UAAUqQ,eAAiB,SAAUhC,EAAW8B,EAAIZ,GAE5D,YADc,IAAVA,IAAoBA,EAAQ,GACjB,OAAVA,GAAkBA,EAAQ,GAAiB,OAAVA,GAAkBzP,KAAKyP,MAAQ,EAC1D/K,EAAOxE,UAAUqQ,eAAe3P,KAAKZ,KAAMuO,EAAW8B,EAAIZ,GAE9DlB,EAAUyB,MAAMhQ,OAEpBioB,EA9BO,C,MA+BhB,GC/BSC,EAAQ,ICAG,SAAUxjB,GAE5B,SAASyjB,IACL,OAAkB,OAAXzjB,GAAmBA,EAAO7D,MAAMb,KAAMU,YAAcV,KAE/D,OAJA,YAAUmoB,EAAgBzjB,GAInByjB,EALU,C,MAMnB,GDNiB,CAAmB,G,+BEKlC,EAAiB,SAAUzjB,GAE3B,SAAS0jB,EAAcC,EAAYC,EAAY/Z,QACxB,IAAf8Z,IAAyBA,EAAaxU,OAAOC,wBAC9B,IAAfwU,IAAyBA,EAAazU,OAAOC,mBACjD,IAAIjP,EAAQH,EAAO9D,KAAKZ,OAASA,KAajC,OAZA6E,EAAM0J,UAAYA,EAClB1J,EAAM0jB,QAAU,GAChB1jB,EAAM2jB,qBAAsB,EAC5B3jB,EAAM4jB,YAAcJ,EAAa,EAAI,EAAIA,EACzCxjB,EAAM6jB,YAAcJ,EAAa,EAAI,EAAIA,EACrCA,IAAezU,OAAOC,mBACtBjP,EAAM2jB,qBAAsB,EAC5B3jB,EAAMpD,KAAOoD,EAAM8jB,wBAGnB9jB,EAAMpD,KAAOoD,EAAM+jB,eAEhB/jB,EA4EX,OA7FA,YAAUujB,EAAe1jB,GAmBzB0jB,EAAcloB,UAAUyoB,uBAAyB,SAAUpnB,GACvD,IAAIgnB,EAAUvoB,KAAKuoB,QACnBA,EAAQvlB,KAAKzB,GACTgnB,EAAQ5nB,OAASX,KAAKyoB,aACtBF,EAAQhkB,QAEZG,EAAOxE,UAAUuB,KAAKb,KAAKZ,KAAMuB,IAErC6mB,EAAcloB,UAAU0oB,eAAiB,SAAUrnB,GAC/CvB,KAAKuoB,QAAQvlB,KAAK,IAAI6lB,EAAY7oB,KAAK8oB,UAAWvnB,IAClDvB,KAAK+oB,2BACLrkB,EAAOxE,UAAUuB,KAAKb,KAAKZ,KAAMuB,IAErC6mB,EAAcloB,UAAUwH,WAAa,SAAUrC,GAC3C,IAIIuD,EAJA4f,EAAsBxoB,KAAKwoB,oBAC3BD,EAAUC,EAAsBxoB,KAAKuoB,QAAUvoB,KAAK+oB,2BACpDxa,EAAYvO,KAAKuO,UACjBrE,EAAMqe,EAAQ5nB,OAElB,GAAIX,KAAK2F,OACL,MAAM,IAAI8G,EAAA,EAYd,GAVSzM,KAAKiF,WAAajF,KAAKiH,SAC5B2B,EAAec,EAAA,EAAaY,OAG5BtK,KAAKoN,UAAUpK,KAAKqC,GACpBuD,EAAe,IAAI4J,EAAA,EAAoBxS,KAAMqF,IAE7CkJ,GACAlJ,EAAWF,IAAIE,EAAa,IAAI,IAAoBA,EAAYkJ,IAEhEia,EACA,IAAK,IAAIhoB,EAAI,EAAGA,EAAI0J,IAAQ7E,EAAWM,OAAQnF,IAC3C6E,EAAW5D,KAAK8mB,EAAQ/nB,SAI5B,IAASA,EAAI,EAAGA,EAAI0J,IAAQ7E,EAAWM,OAAQnF,IAC3C6E,EAAW5D,KAAK8mB,EAAQ/nB,GAAGe,OASnC,OANIvB,KAAKiH,SACL5B,EAAW9B,MAAMvD,KAAKqN,aAEjBrN,KAAKiF,WACVI,EAAWT,WAERgE,GAEXwf,EAAcloB,UAAU4oB,QAAU,WAC9B,OAAQ9oB,KAAKuO,WAAa2Z,GAAO3Y,OAErC6Y,EAAcloB,UAAU6oB,yBAA2B,WAO/C,IANA,IAAIxZ,EAAMvP,KAAK8oB,UACXL,EAAczoB,KAAKyoB,YACnBC,EAAc1oB,KAAK0oB,YACnBH,EAAUvoB,KAAKuoB,QACfS,EAAcT,EAAQ5nB,OACtBsoB,EAAc,EACXA,EAAcD,KACZzZ,EAAMgZ,EAAQU,GAAaC,KAAQR,IAGxCO,IAQJ,OANID,EAAcP,IACdQ,EAAcrb,KAAKub,IAAIF,EAAaD,EAAcP,IAElDQ,EAAc,GACdV,EAAQ5d,OAAO,EAAGse,GAEfV,GAEJH,EA9FS,CA+FlBjb,EAAA,GAEE0b,EACA,SAAqBK,EAAM3nB,GACvBvB,KAAKkpB,KAAOA,EACZlpB,KAAKuB,MAAQA,I,yCC3GN,SAAS6nB,EAA
KC,EAAM9V,GACjC,OAAO/T,OAAOU,UAAUL,eAAee,KAAK2S,EAAK8V,GCAnD,IAAI,EAAW7pB,OAAOU,UAAUqJ,SAYjB,EARf,WACE,MAAoC,uBAA7B,EAAS3I,KAAKF,WAAsC,SAAsBuI,GAC/E,MAA4B,uBAArB,EAASrI,KAAKqI,IACnB,SAAsBA,GACxB,OAAOmgB,EAAK,SAAUngB,IAJ1B,GCDIqgB,GAEJ,CACE/f,SAAU,MACVggB,qBAAqB,YACnBC,EAAqB,CAAC,cAAe,UAAW,gBAAiB,WAAY,uBAAwB,iBAAkB,kBAEvHC,EAEJ,WAGE,OAAO/oB,UAAU6oB,qBAAqB,UAHxC,GAMIG,EAAW,SAAkBC,EAAMjW,GAGrC,IAFA,IAAIkW,EAAM,EAEHA,EAAMD,EAAKhpB,QAAQ,CACxB,GAAIgpB,EAAKC,KAASlW,EAChB,OAAO,EAGTkW,GAAO,EAGT,OAAO,GAsBL,EAA8B,mBAAhBpqB,OAAO4jB,MAAwBqG,EAMjD,OAAA7c,EAAA,IAAQ,SAAc2G,GACpB,GAAI/T,OAAO+T,KAASA,EAClB,MAAO,GAGT,IAAI8V,EAAMQ,EACNC,EAAK,GAELC,EAAkBN,GAAkB,EAAalW,GAErD,IAAK8V,KAAQ9V,GACP6V,EAAKC,EAAM9V,IAAUwW,GAA4B,WAATV,IAC1CS,EAAGA,EAAGnpB,QAAU0oB,GAIpB,GAAIC,EAGF,IAFAO,EAAOL,EAAmB7oB,OAAS,EAE5BkpB,GAAQ,GAGTT,EAFJC,EAAOG,EAAmBK,GAEXtW,KAASmW,EAASI,EAAIT,KACnCS,EAAGA,EAAGnpB,QAAU0oB,GAGlBQ,GAAQ,EAIZ,OAAOC,KAlCT,OAAAld,EAAA,IAAQ,SAAc2G,GACpB,OAAO/T,OAAO+T,KAASA,EAAM,GAAK/T,OAAO4jB,KAAK7P,MAmCjC,O,6BC1Ff,oEAIO,SAASyW,EAAIjiB,EAAgBxE,EAAOqB,GACvC,OAAO,SAA6BkC,GAChC,OAAOA,EAAOa,KAAK,IAAIsiB,EAAWliB,EAAgBxE,EAAOqB,KAGjE,IAAIqlB,EAAc,WACd,SAASA,EAAWliB,EAAgBxE,EAAOqB,GACvC5E,KAAK+H,eAAiBA,EACtB/H,KAAKuD,MAAQA,EACbvD,KAAK4E,SAAWA,EAKpB,OAHAqlB,EAAW/pB,UAAUU,KAAO,SAAUyE,EAAYyB,GAC9C,OAAOA,EAAOO,UAAU,IAAI6iB,EAAc7kB,EAAYrF,KAAK+H,eAAgB/H,KAAKuD,MAAOvD,KAAK4E,YAEzFqlB,EATM,GAWbC,EAAiB,SAAUxlB,GAE3B,SAASwlB,EAAchlB,EAAaa,EAAgBxC,EAAOqB,GACvD,IAAIC,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAgB9C,OAfA6E,EAAMslB,SAAW,IACjBtlB,EAAMulB,UAAY,IAClBvlB,EAAMwlB,aAAe,IACrBxlB,EAAMulB,UAAY7mB,GAAS,IAC3BsB,EAAMwlB,aAAezlB,GAAY,IAC7B,YAAWmB,IACXlB,EAAMqB,SAAWrB,EACjBA,EAAMslB,SAAWpkB,GAEZA,IACLlB,EAAMqB,SAAWH,EACjBlB,EAAMslB,SAAWpkB,EAAetE,MAAQ,IACxCoD,EAAMulB,UAAYrkB,EAAexC,OAAS,IAC1CsB,EAAMwlB,aAAetkB,EAAenB,UAAY,KAE7CC,EAgCX,OAlDA,YAAUqlB,EAAexlB,GAoBzBwlB,EAAchqB,UAAUoF,MAAQ,SAAU/D,GACtC,IACIvB,KAAKmqB,SAASvpB,KAAKZ,KAAKkG,SAAU3E,GAEtC,MAAOgE,GAEH,YADAvF,KAAKkF,YAAY3B,MAAMgC,GAG3BvF,KAAKkF,YAAYzD,KAAKF,IAE1B2oB,EAAchqB,UAAUsF,OAAS,SAAUD,GACvC,IACIvF,KAAKoqB,UAAUxpB,KAAKZ,KAAKkG,SAAUX,GAEvC,MAAOA,GAEH,YADAvF,KAAKkF,YAAY3B,MAAMgC,GAG3BvF,KAAKkF,YAAY3B,MAAMgC,IAE3B2kB,EAAchqB,UAAUuF,UAAY,WAChC,IACIzF,KAAKqqB,aAAazpB,KAAKZ,KAAKkG,UAEhC,MAAOX,GAEH,YADAvF,KAAKkF,YAAY3B,MAAMgC,GAG3B,OAAOvF,KAAKkF,YAAYN,YAErBslB,EAnDS,CAoDlB,M,6BCxEF,oDAEO,SAASI,EAAKC,EAAaC,GAC9B,IAAIC,GAAU,EAId,OAHI/pB,UAAUC,QAAU,IACpB8pB,GAAU,GAEP,SAA8B3jB,GACjC,OAAOA,EAAOa,KAAK,IAAI+iB,EAAaH,EAAaC,EAAMC,KAG/D,IAAIC,EAAgB,WAChB,SAASA,EAAaH,EAAaC,EAAMC,QACrB,IAAZA,IAAsBA,GAAU,GACpCzqB,KAAKuqB,YAAcA,EACnBvqB,KAAKwqB,KAAOA,EACZxqB,KAAKyqB,QAAUA,EAKnB,OAHAC,EAAaxqB,UAAUU,KAAO,SAAUyE,EAAYyB,GAChD,OAAOA,EAAOO,UAAU,IAAIsjB,EAAetlB,EAAYrF,KAAKuqB,YAAavqB,KAAKwqB,KAAMxqB,KAAKyqB,WAEtFC,EAVQ,GAYfC,EAAkB,SAAUjmB,GAE5B,SAASimB,EAAezlB,EAAaqlB,EAAaK,EAAQC,GACtD,IAAIhmB,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAK9C,OAJA6E,EAAM0lB,YAAcA,EACpB1lB,EAAM+lB,OAASA,EACf/lB,EAAMgmB,UAAYA,EAClBhmB,EAAMiF,MAAQ,EACPjF,EAuBX,OA9BA,YAAU8lB,EAAgBjmB,GAS1BimB,EAAezqB,UAAUoF,MAAQ,SAAU/D,GACvC,IAAI2D,EAAclF,KAAKkF,YACvB,GAAKlF,KAAK6qB,UAKL,CACD,IAAI/gB,EAAQ9J,KAAK8J,QACblI,OAAS,EACb,IACIA,EAAS5B,KAAKuqB,YAAYvqB,KAAK4qB,OAAQrpB,EAAOuI,GAElD,MAAOvE,GAEH,YADAL,EAAY3B,MAAMgC,GAGtBvF,KAAK4qB,OAAShpB,EACdsD,EAAYzD,KAAKG,QAfjB5B,KAAK4qB,OAASrpB,EACdvB,KAAK6qB,WAAY,EACjB3lB,EAAYzD,KAAKF,IAgBlBopB,EA/BU,CAgCnB,M,6BCvDF,2DAGO,SAASG,EAAS1T,GACrB,OAAO,SAAUtQ,GAAU,OAAOA,EAAOa,KAAK,IAAIojB,EAAgB3T,KAEtE,IAAI2T,EAAmB,WACnB,SAASA,EAAgB3T,GACrBpX,KAAKoX,SAAWA,EAKpB,OAHA2T,EAAgB7qB,UAAUU,KAAO,SAAUyE,EAAYyB,GACnD,OAAOA,EAAOO,
UAAU,IAAI2jB,EAAkB3lB,EAAYrF,KAAKoX,YAE5D2T,EAPW,GASlBC,EAAqB,SAAUtmB,GAE/B,SAASsmB,EAAkB9lB,EAAakS,GACpC,IAAIvS,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAE9C,OADA6E,EAAMM,IAAI,IAAI,IAAaiS,IACpBvS,EAEX,OANA,YAAUmmB,EAAmBtmB,GAMtBsmB,EAPa,CAQtB,M,0ECrBE,EAAwB,SAAUtmB,GAElC,SAASumB,EAAqB1c,EAAWiB,GACrC,IAAI3K,EAAQH,EAAO9D,KAAKZ,KAAMuO,EAAWiB,IAASxP,KAGlD,OAFA6E,EAAM0J,UAAYA,EAClB1J,EAAM2K,KAAOA,EACN3K,EAqBX,OA1BA,YAAUomB,EAAsBvmB,GAOhCumB,EAAqB/qB,UAAUqQ,eAAiB,SAAUhC,EAAW8B,EAAIZ,GAErE,YADc,IAAVA,IAAoBA,EAAQ,GAClB,OAAVA,GAAkBA,EAAQ,EACnB/K,EAAOxE,UAAUqQ,eAAe3P,KAAKZ,KAAMuO,EAAW8B,EAAIZ,IAErElB,EAAUuB,QAAQ9M,KAAKhD,MAChBuO,EAAUE,YAAcF,EAAUE,UAAYqS,uBAAsB,WAAc,OAAOvS,EAAUyB,WAAM1I,SAEpH2jB,EAAqB/qB,UAAUoQ,eAAiB,SAAU/B,EAAW8B,EAAIZ,GAErE,QADc,IAAVA,IAAoBA,EAAQ,GACjB,OAAVA,GAAkBA,EAAQ,GAAiB,OAAVA,GAAkBzP,KAAKyP,MAAQ,EACjE,OAAO/K,EAAOxE,UAAUoQ,eAAe1P,KAAKZ,KAAMuO,EAAW8B,EAAIZ,GAEpC,IAA7BlB,EAAUuB,QAAQnP,SAClBuqB,qBAAqB7a,GACrB9B,EAAUE,eAAYnH,IAIvB2jB,EA3BgB,C,MA4BzB,GC5BSE,EAAiB,ICAG,SAAUzmB,GAErC,SAAS0mB,IACL,OAAkB,OAAX1mB,GAAmBA,EAAO7D,MAAMb,KAAMU,YAAcV,KAuB/D,OAzBA,YAAUorB,EAAyB1mB,GAInC0mB,EAAwBlrB,UAAU8P,MAAQ,SAAUC,GAChDjQ,KAAK+P,QAAS,EACd/P,KAAKyO,eAAYnH,EACjB,IACI/D,EADAuM,EAAU9P,KAAK8P,QAEfhG,GAAS,EACTmB,EAAQ6E,EAAQnP,OACpBsP,EAASA,GAAUH,EAAQvL,QAC3B,GACI,GAAIhB,EAAQ0M,EAAOC,QAAQD,EAAOP,MAAOO,EAAOR,OAC5C,cAEG3F,EAAQmB,IAAUgF,EAASH,EAAQvL,UAE9C,GADAvE,KAAK+P,QAAS,EACVxM,EAAO,CACP,OAASuG,EAAQmB,IAAUgF,EAASH,EAAQvL,UACxC0L,EAAOvK,cAEX,MAAMnC,IAGP6nB,EA1BmB,C,MA2B5B,GD3B0B,CAA4B,I,gCEFxD,8CACO,SAASC,EAAYC,EAAoBhD,EAAY/Z,GACxD,IAAIrG,EAYJ,OAVIA,EADAojB,GAAoD,iBAAvBA,EACpBA,EAGA,CACLjD,WAAYiD,EACZhD,WAAYA,EACZiD,UAAU,EACVhd,UAAWA,GAGZ,SAAUzH,GAAU,OAAOA,EAAOa,KAE7C,SAA6BP,GACzB,IACIkG,EAEA1E,EAHAwK,EAAKhM,EAAGihB,WAAYA,OAAoB,IAAPjV,EAAgBS,OAAOC,kBAAoBV,EAAIoY,EAAKpkB,EAAGkhB,WAAYA,OAAoB,IAAPkD,EAAgB3X,OAAOC,kBAAoB0X,EAAIC,EAAcrkB,EAAGmkB,SAAUhd,EAAYnH,EAAGmH,UAE1Mgd,EAAW,EAEXtkB,GAAW,EACXykB,GAAa,EACjB,OAAO,SAA8B5kB,GACjCykB,IACKje,IAAWrG,IACZA,GAAW,EACXqG,EAAU,IAAI,IAAc+a,EAAYC,EAAY/Z,GACpD3F,EAAe9B,EAAOO,UAAU,CAC5B5F,KAAM,SAAUF,GAAS+L,EAAQ7L,KAAKF,IACtCgC,MAAO,SAAUgC,GACb0B,GAAW,EACXqG,EAAQ/J,MAAMgC,IAElBX,SAAU,WACN8mB,GAAa,EACb9iB,OAAetB,EACfgG,EAAQ1I,eAIpB,IAAI+G,EAAW2B,EAAQjG,UAAUrH,MACjCA,KAAKmF,KAAI,WACLomB,IACA5f,EAASjG,cACLkD,IAAiB8iB,GAAcD,GAA4B,IAAbF,IAC9C3iB,EAAalD,cACbkD,OAAetB,EACfgG,OAAUhG,OAlCwBqkB,CAAoBzjB,O,6BCdtE,8CACO,SAAS0jB,EAAwBrZ,EAAKL,GACzC,OAAO,aAAqB,SAAUjJ,EAAG/G,GAAK,OAAOgQ,EAAUA,EAAQjJ,EAAEsJ,GAAMrQ,EAAEqQ,IAAQtJ,EAAEsJ,KAASrQ,EAAEqQ,Q,6BCF1G,6DAGO,SAASsZ,IAEZ,IADA,IAAI9a,EAAO,GACFhI,EAAK,EAAGA,EAAKrI,UAAUC,OAAQoI,IACpCgI,EAAKhI,GAAMrI,UAAUqI,GAEzB,OAAO,SAAUjC,GACb,IAAIgE,EACiC,mBAA1BiG,EAAKA,EAAKpQ,OAAS,KAC1BmK,EAAUiG,EAAKhO,OAEnB,IAAIwc,EAAcxO,EAClB,OAAOjK,EAAOa,KAAK,IAAImkB,EAAuBvM,EAAazU,KAGnE,IAAIghB,EAA0B,WAC1B,SAASA,EAAuBvM,EAAazU,GACzC9K,KAAKuf,YAAcA,EACnBvf,KAAK8K,QAAUA,EAKnB,OAHAghB,EAAuB5rB,UAAUU,KAAO,SAAUyE,EAAYyB,GAC1D,OAAOA,EAAOO,UAAU,IAAI0kB,EAAyB1mB,EAAYrF,KAAKuf,YAAavf,KAAK8K,WAErFghB,EARkB,GAUzBC,EAA4B,SAAUrnB,GAEtC,SAASqnB,EAAyB7mB,EAAaqa,EAAazU,GACxD,IAAIjG,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAC9C6E,EAAM0a,YAAcA,EACpB1a,EAAMiG,QAAUA,EAChBjG,EAAM6a,UAAY,GAClB,IAAIxV,EAAMqV,EAAY5e,OACtBkE,EAAMmC,OAAS,IAAIrH,MAAMuK,GACzB,IAAK,IAAI1J,EAAI,EAAGA,EAAI0J,EAAK1J,IACrBqE,EAAM6a,UAAU1c,KAAKxC,GAEzB,IAASA,EAAI,EAAGA,EAAI0J,EAAK1J,IAAK,CAC1B,IAAIqH,EAAa0X,EAAY/e,GAC7BqE,EAAMM,IAAI,YAAkBN,EAAOgD,EAAYA,EAAYrH,IAE/D,OAAOqE,EAoCX,OAnDA,YAAUknB,EAA0BrnB,GAiBpCqnB,EAAyB7rB,UAAUsL,WAAa,SAAUJ,EAAYK,EAAYJ,EAAYK,EAAYC,GACtG3L,KAAKgH,OAAOqE,GAAcI,EAC1B,IAAIiU,EAAY1f,KAAK0f,UACr
B,GAAIA,EAAU/e,OAAS,EAAG,CACtB,IAAIqrB,EAAQtM,EAAUlV,QAAQa,IACf,IAAX2gB,GACAtM,EAAU/U,OAAOqhB,EAAO,KAIpCD,EAAyB7rB,UAAU2L,eAAiB,aAEpDkgB,EAAyB7rB,UAAUoF,MAAQ,SAAU/D,GACjD,GAA8B,IAA1BvB,KAAK0f,UAAU/e,OAAc,CAC7B,IAAIoQ,EAAO,YAAe,CAACxP,GAAQvB,KAAKgH,QACpChH,KAAK8K,QACL9K,KAAKisB,YAAYlb,GAGjB/Q,KAAKkF,YAAYzD,KAAKsP,KAIlCgb,EAAyB7rB,UAAU+rB,YAAc,SAAUlb,GACvD,IAAInP,EACJ,IACIA,EAAS5B,KAAK8K,QAAQjK,MAAMb,KAAM+Q,GAEtC,MAAOxL,GAEH,YADAvF,KAAKkF,YAAY3B,MAAMgC,GAG3BvF,KAAKkF,YAAYzD,KAAKG,IAEnBmqB,EApDoB,CAqD7B,M,6BChFF,oDAEO,SAASG,EAAY7D,EAAY8D,GAEpC,YADyB,IAArBA,IAA+BA,EAAmB,MAC/C,SAAqCrlB,GACxC,OAAOA,EAAOa,KAAK,IAAIykB,EAAoB/D,EAAY8D,KAG/D,IAAIC,EAAuB,WACvB,SAASA,EAAoB/D,EAAY8D,GACrCnsB,KAAKqoB,WAAaA,EAClBroB,KAAKmsB,iBAAmBA,EAKpBnsB,KAAKqsB,gBAJJF,GAAoB9D,IAAe8D,EAIbG,EAHAC,EAS/B,OAHAH,EAAoBlsB,UAAUU,KAAO,SAAUyE,EAAYyB,GACvD,OAAOA,EAAOO,UAAU,IAAIrH,KAAKqsB,gBAAgBhnB,EAAYrF,KAAKqoB,WAAYroB,KAAKmsB,oBAEhFC,EAde,GAgBtBG,EAAyB,SAAU7nB,GAEnC,SAAS6nB,EAAsBrnB,EAAamjB,GACxC,IAAIxjB,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAG9C,OAFA6E,EAAMwjB,WAAaA,EACnBxjB,EAAMoP,OAAS,GACRpP,EAiBX,OAtBA,YAAU0nB,EAAuB7nB,GAOjC6nB,EAAsBrsB,UAAUoF,MAAQ,SAAU/D,GAC9C,IAAI0S,EAASjU,KAAKiU,OAClBA,EAAOjR,KAAKzB,GACR0S,EAAOtT,QAAUX,KAAKqoB,aACtBroB,KAAKkF,YAAYzD,KAAKwS,GACtBjU,KAAKiU,OAAS,KAGtBsY,EAAsBrsB,UAAUuF,UAAY,WACxC,IAAIwO,EAASjU,KAAKiU,OACdA,EAAOtT,OAAS,GAChBX,KAAKkF,YAAYzD,KAAKwS,GAE1BvP,EAAOxE,UAAUuF,UAAU7E,KAAKZ,OAE7BusB,EAvBiB,CAwB1B,KACED,EAA6B,SAAU5nB,GAEvC,SAAS4nB,EAA0BpnB,EAAamjB,EAAY8D,GACxD,IAAItnB,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAK9C,OAJA6E,EAAMwjB,WAAaA,EACnBxjB,EAAMsnB,iBAAmBA,EACzBtnB,EAAM2nB,QAAU,GAChB3nB,EAAMoG,MAAQ,EACPpG,EA2BX,OAlCA,YAAUynB,EAA2B5nB,GASrC4nB,EAA0BpsB,UAAUoF,MAAQ,SAAU/D,GAClD,IAAe8mB,EAANroB,KAAsBqoB,WAAY8D,EAAlCnsB,KAAwDmsB,iBAAkBK,EAA1ExsB,KAAuFwsB,QAASvhB,EAAhGjL,KAA2GiL,MACpHjL,KAAKiL,QACDA,EAAQkhB,GAAqB,GAC7BK,EAAQxpB,KAAK,IAEjB,IAAK,IAAIxC,EAAIgsB,EAAQ7rB,OAAQH,KAAM,CAC/B,IAAIyT,EAASuY,EAAQhsB,GACrByT,EAAOjR,KAAKzB,GACR0S,EAAOtT,SAAW0nB,IAClBmE,EAAQ7hB,OAAOnK,EAAG,GAClBR,KAAKkF,YAAYzD,KAAKwS,MAIlCqY,EAA0BpsB,UAAUuF,UAAY,WAE5C,IADA,IAAe+mB,EAANxsB,KAAmBwsB,QAAStnB,EAA5BlF,KAA6CkF,YAC/CsnB,EAAQ7rB,OAAS,GAAG,CACvB,IAAIsT,EAASuY,EAAQjoB,QACjB0P,EAAOtT,OAAS,GAChBuE,EAAYzD,KAAKwS,GAGzBvP,EAAOxE,UAAUuF,UAAU7E,KAAKZ,OAE7BssB,EAnCqB,CAoC9B,M,mFCpFK,SAASG,IACZ,OAAO,OAAArY,EAAA,GAAS,GCAb,SAAS3Q,IAEZ,IADA,IAAI8b,EAAc,GACTxW,EAAK,EAAGA,EAAKrI,UAAUC,OAAQoI,IACpCwW,EAAYxW,GAAMrI,UAAUqI,GAEhC,OAAO0jB,IAAY3b,EAAA,EAAGjQ,WAAM,EAAQ0e,I,YCLjC,SAASmN,IAEZ,IADA,IAAI1lB,EAAS,GACJ+B,EAAK,EAAGA,EAAKrI,UAAUC,OAAQoI,IACpC/B,EAAO+B,GAAMrI,UAAUqI,GAE3B,IAAIwF,EAAYvH,EAAOA,EAAOrG,OAAS,GACvC,OAAI,OAAAqM,EAAA,GAAYuB,IACZvH,EAAOjE,MACA,SAAU+D,GAAU,OAAOrD,EAAOuD,EAAQF,EAAQyH,KAGlD,SAAUzH,GAAU,OAAOrD,EAAOuD,EAAQF,M,yCCczD,IAAI,EAEJ,OAAA8F,EAAA,IAAQ,SAAiB+c,GACvB,OC9BgC1gB,ED8Bf0gB,EC7B4B,oBAAtCnqB,OAAOU,UAAUqJ,SAAS3I,KAAKqI,GD6Bb0gB,EAAKgD,MAAM,IAAIC,UAAUpjB,KAAK,IAAM7J,MAAMO,UAAUuN,MAAM7M,KAAK+oB,EAAM,GAAGiD,UC9BpF,IAAmB3jB,KDiCnB,O,6BEjCf,oEAIO,SAAS4jB,EAAU7U,EAAQ8U,EAAWhS,EAAS/M,GAKlD,OAJI,YAAW+M,KACX/M,EAAiB+M,EACjBA,OAAUxT,GAEVyG,EACO8e,EAAU7U,EAAQ8U,EAAWhS,GAASjS,KAAK,aAAI,SAAUkI,GAAQ,OAAO,YAAQA,GAAQhD,EAAelN,WAAM,EAAQkQ,GAAQhD,EAAegD,OAEhJ,IAAI,KAAW,SAAU1L,IAYpC,SAAS0nB,EAAkBC,EAAWF,EAAWG,EAAS5nB,EAAYyV,GAClE,IAAIpV,EACJ,GA+BJ,SAAuBsnB,GACnB,OAAOA,GAAmD,mBAA/BA,EAAU5U,kBAA4E,mBAAlC4U,EAAU1U,oBAhCrF4U,CAAcF,GAAY,CAC1B,IAAIG,EAAWH,EACfA,EAAU5U,iBAAiB0U,EAAWG,EAASnS,GAC/CpV,EAAc,WAAc,OAAOynB,EAAS7U,oBAAoBwU,EAAWG,EAASnS,SAEnF,GAuBT,SAAmCkS,GAC/B,OAAOA,GAAqC,mBAAjBA,EAAU7V,IAA8C,mBAAlB6V,EAAUxV,IAxBlE4V,CAA0BJ,GAAY,C
AC3C,IAAIK,EAAWL,EACfA,EAAU7V,GAAG2V,EAAWG,GACxBvnB,EAAc,WAAc,OAAO2nB,EAAS7V,IAAIsV,EAAWG,SAE1D,GAeT,SAAiCD,GAC7B,OAAOA,GAA8C,mBAA1BA,EAAUM,aAAkE,mBAA7BN,EAAUO,eAhB3EC,CAAwBR,GAAY,CACzC,IAAIS,EAAWT,EACfA,EAAUM,YAAYR,EAAWG,GACjCvnB,EAAc,WAAc,OAAO+nB,EAASF,eAAeT,EAAWG,QAErE,KAAID,IAAaA,EAAUrsB,OAM5B,MAAM,IAAImC,UAAU,wBALpB,IAAK,IAAItC,EAAI,EAAG0J,EAAM8iB,EAAUrsB,OAAQH,EAAI0J,EAAK1J,IAC7CusB,EAAkBC,EAAUxsB,GAAIssB,EAAWG,EAAS5nB,EAAYyV,GAMxEzV,EAAWF,IAAIO,GA5BXqnB,CAAkB/U,EAAQ8U,GAR1B,SAAiBprB,GACThB,UAAUC,OAAS,EACnB0E,EAAW5D,KAAK9B,MAAMO,UAAUuN,MAAM7M,KAAKF,YAG3C2E,EAAW5D,KAAKC,KAGsB2D,EAAYyV,Q,6BCrBlE,oDAEO,SAAS4S,EAAMnsB,GAClB,OAAO,SAAUuF,GAAU,OAAOA,EAAOa,KAAK,IAAIgmB,EAAcpsB,KAEpE,IAAIosB,EAAiB,WACjB,SAASA,EAAcpsB,GACnBvB,KAAKuB,MAAQA,EAKjB,OAHAosB,EAAcztB,UAAUU,KAAO,SAAUyE,EAAYyB,GACjD,OAAOA,EAAOO,UAAU,IAAIumB,EAAgBvoB,EAAYrF,KAAKuB,SAE1DosB,EAPS,GAShBC,EAAmB,SAAUlpB,GAE7B,SAASkpB,EAAgB1oB,EAAa3D,GAClC,IAAIsD,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAE9C,OADA6E,EAAMtD,MAAQA,EACPsD,EAKX,OATA,YAAU+oB,EAAiBlpB,GAM3BkpB,EAAgB1tB,UAAUoF,MAAQ,SAAU2D,GACxCjJ,KAAKkF,YAAYzD,KAAKzB,KAAKuB,QAExBqsB,EAVW,CAWpB,M,6BCzBF,qEAIO,SAASC,IAEZ,IADA,IAAItO,EAAc,GACTxW,EAAK,EAAGA,EAAKrI,UAAUC,OAAQoI,IACpCwW,EAAYxW,GAAMrI,UAAUqI,GAEhC,IAAI6K,EAAaC,OAAOC,kBACpBvF,OAAYjH,EACZwmB,EAAOvO,EAAYA,EAAY5e,OAAS,GAU5C,OATI,YAAYmtB,IACZvf,EAAYgR,EAAYxc,MACpBwc,EAAY5e,OAAS,GAAoD,iBAAxC4e,EAAYA,EAAY5e,OAAS,KAClEiT,EAAa2L,EAAYxc,QAGR,iBAAT+qB,IACZla,EAAa2L,EAAYxc,QAExBwL,GAAoC,IAAvBgR,EAAY5e,QAAgB4e,EAAY,aAAc,IAC7DA,EAAY,GAEhB,YAAS3L,EAAT,CAAqB,YAAU2L,EAAahR,M,6BCxBvD,oEAIO,SAASwf,EAAiBC,EAAYC,EAAelgB,GACxD,OAAIA,EACOggB,EAAiBC,EAAYC,GAAeplB,KAAK,aAAI,SAAUkI,GAAQ,OAAO,YAAQA,GAAQhD,EAAelN,WAAM,EAAQkQ,GAAQhD,EAAegD,OAEtJ,IAAI,KAAW,SAAU1L,GAC5B,IAOI6oB,EAPAjB,EAAU,WAEV,IADA,IAAIvrB,EAAI,GACCqH,EAAK,EAAGA,EAAKrI,UAAUC,OAAQoI,IACpCrH,EAAEqH,GAAMrI,UAAUqI,GAEtB,OAAO1D,EAAW5D,KAAkB,IAAbC,EAAEf,OAAee,EAAE,GAAKA,IAGnD,IACIwsB,EAAWF,EAAWf,GAE1B,MAAO1nB,GAEH,YADAF,EAAW9B,MAAMgC,GAGrB,GAAK,YAAW0oB,GAGhB,OAAO,WAAc,OAAOA,EAAchB,EAASiB,S,6BC3B3D,oDAEO,SAAS3L,EAAO4L,EAAWptB,GAC9B,OAAO,SAAgC+F,GACnC,OAAOA,EAAOa,KAAK,IAAIymB,EAAeD,EAAWptB,KAGzD,IAAIqtB,EAAkB,WAClB,SAASA,EAAeD,EAAWptB,GAC/Bf,KAAKmuB,UAAYA,EACjBnuB,KAAKe,QAAUA,EAKnB,OAHAqtB,EAAeluB,UAAUU,KAAO,SAAUyE,EAAYyB,GAClD,OAAOA,EAAOO,UAAU,IAAIgnB,EAAiBhpB,EAAYrF,KAAKmuB,UAAWnuB,KAAKe,WAE3EqtB,EARU,GAUjBC,EAAoB,SAAU3pB,GAE9B,SAAS2pB,EAAiBnpB,EAAaipB,EAAWptB,GAC9C,IAAI8D,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAI9C,OAHA6E,EAAMspB,UAAYA,EAClBtpB,EAAM9D,QAAUA,EAChB8D,EAAMoG,MAAQ,EACPpG,EAeX,OArBA,YAAUwpB,EAAkB3pB,GAQ5B2pB,EAAiBnuB,UAAUoF,MAAQ,SAAU/D,GACzC,IAAIK,EACJ,IACIA,EAAS5B,KAAKmuB,UAAUvtB,KAAKZ,KAAKe,QAASQ,EAAOvB,KAAKiL,SAE3D,MAAO1F,GAEH,YADAvF,KAAKkF,YAAY3B,MAAMgC,GAGvB3D,GACA5B,KAAKkF,YAAYzD,KAAKF,IAGvB8sB,EAtBY,CAuBrB,M,6BCxCF,6DAGIC,EAAmB,SAAU5pB,GAE7B,SAAS4pB,EAAgBC,GACrB,IAAI1pB,EAAQH,EAAO9D,KAAKZ,OAASA,KAEjC,OADA6E,EAAM0pB,OAASA,EACR1pB,EA8BX,OAlCA,YAAUypB,EAAiB5pB,GAM3BlF,OAAO+V,eAAe+Y,EAAgBpuB,UAAW,QAAS,CACtDuV,IAAK,WACD,OAAOzV,KAAKwuB,YAEhBhZ,YAAY,EACZ+E,cAAc,IAElB+T,EAAgBpuB,UAAUwH,WAAa,SAAUrC,GAC7C,IAAIuD,EAAelE,EAAOxE,UAAUwH,WAAW9G,KAAKZ,KAAMqF,GAI1D,OAHIuD,IAAiBA,EAAajD,QAC9BN,EAAW5D,KAAKzB,KAAKuuB,QAElB3lB,GAEX0lB,EAAgBpuB,UAAUsuB,SAAW,WACjC,GAAIxuB,KAAKiH,SACL,MAAMjH,KAAKqN,YAEV,GAAIrN,KAAK2F,OACV,MAAM,IAAI,IAGV,OAAO3F,KAAKuuB,QAGpBD,EAAgBpuB,UAAUuB,KAAO,SAAUF,GACvCmD,EAAOxE,UAAUuB,KAAKb,KAAKZ,KAAMA,KAAKuuB,OAAShtB,IAE5C+sB,EAnCW,CAoCpB,M,6BCvCF,6CACO,SAASG,IAEZ,IADA,IAAIC,EAAa,GACR3lB,EAAK,EAAGA,EAAKrI,UAAUC,OAAQoI,IACpC2lB,EAAW3lB,GAAMrI,UAAUqI,GAE/B,IAAIpI,EAAS+tB,EAAW/tB,OACxB,G
AAe,IAAXA,EACA,MAAM,IAAI8F,MAAM,uCAEpB,OAAO,aAAI,SAAUwC,GAEjB,IADA,IAAI0lB,EAAc1lB,EACTzI,EAAI,EAAGA,EAAIG,EAAQH,IAAK,CAC7B,IAAIZ,EAAI+uB,EAAYD,EAAWluB,IAC/B,QAAiB,IAANZ,EAIP,OAHA+uB,EAAc/uB,EAMtB,OAAO+uB,O,6BCrBf,6DAGWC,EAAwB,CAC/BC,SAAS,EACTC,UAAU,GAEP,SAAS9M,EAAS+M,EAAkB7mB,GAEvC,YADe,IAAXA,IAAqBA,EAAS0mB,GAC3B,SAAU9nB,GAAU,OAAOA,EAAOa,KAAK,IAAIqnB,EAAiBD,IAAoB7mB,EAAO2mB,UAAW3mB,EAAO4mB,YAEpH,IAAIE,EAAoB,WACpB,SAASA,EAAiBD,EAAkBF,EAASC,GACjD9uB,KAAK+uB,iBAAmBA,EACxB/uB,KAAK6uB,QAAUA,EACf7uB,KAAK8uB,SAAWA,EAKpB,OAHAE,EAAiB9uB,UAAUU,KAAO,SAAUyE,EAAYyB,GACpD,OAAOA,EAAOO,UAAU,IAAI4nB,EAAmB5pB,EAAYrF,KAAK+uB,iBAAkB/uB,KAAK6uB,QAAS7uB,KAAK8uB,YAElGE,EATY,GAWnBC,EAAsB,SAAUvqB,GAEhC,SAASuqB,EAAmB/pB,EAAa6pB,EAAkBG,EAAUC,GACjE,IAAItqB,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAO9C,OANA6E,EAAMK,YAAcA,EACpBL,EAAMkqB,iBAAmBA,EACzBlqB,EAAMqqB,SAAWA,EACjBrqB,EAAMsqB,UAAYA,EAClBtqB,EAAMuqB,WAAa,KACnBvqB,EAAMwqB,WAAY,EACXxqB,EAsDX,OA/DA,YAAUoqB,EAAoBvqB,GAW9BuqB,EAAmB/uB,UAAUoF,MAAQ,SAAU/D,GAC3CvB,KAAKqvB,WAAY,EACjBrvB,KAAKovB,WAAa7tB,EACbvB,KAAKsvB,aACFtvB,KAAKkvB,SACLlvB,KAAKuvB,OAGLvvB,KAAKgiB,SAASzgB,KAI1B0tB,EAAmB/uB,UAAUqvB,KAAO,WAChC,IAAeF,EAANrvB,KAAqBqvB,UAAWD,EAAhCpvB,KAAgDovB,WACrDC,IACArvB,KAAKkF,YAAYzD,KAAK2tB,GACtBpvB,KAAKgiB,SAASoN,IAElBpvB,KAAKqvB,WAAY,EACjBrvB,KAAKovB,WAAa,MAEtBH,EAAmB/uB,UAAU8hB,SAAW,SAAUzgB,GAC9C,IAAIiuB,EAAWxvB,KAAKyvB,oBAAoBluB,GAClCiuB,GACFxvB,KAAKmF,IAAInF,KAAKsvB,WAAa,YAAkBtvB,KAAMwvB,KAG3DP,EAAmB/uB,UAAUuvB,oBAAsB,SAAUluB,GACzD,IACI,OAAOvB,KAAK+uB,iBAAiBxtB,GAEjC,MAAOgE,GAEH,OADAvF,KAAKkF,YAAY3B,MAAMgC,GAChB,OAGf0pB,EAAmB/uB,UAAUwvB,eAAiB,WAC1C,IAAeJ,EAANtvB,KAAsBsvB,WAAYH,EAAlCnvB,KAAiDmvB,UACtDG,GACAA,EAAW5pB,cAEf1F,KAAKsvB,WAAa,KACdH,GACAnvB,KAAKuvB,QAGbN,EAAmB/uB,UAAUsL,WAAa,SAAUJ,EAAYK,EAAYJ,EAAYK,EAAYC,GAChG3L,KAAK0vB,kBAETT,EAAmB/uB,UAAU2L,eAAiB,WAC1C7L,KAAK0vB,kBAEFT,EAhEc,CAiEvB,M,6BCvFF,8CACO,SAASU,EAAYC,EAAiB7hB,GACzC,OAAOA,EAAiB,aAAU,WAAc,OAAO6hB,IAAoB7hB,GAAkB,aAAU,WAAc,OAAO6hB,O,6BCFhI,6DAGO,SAASC,EAAOC,GACnB,OAAO,SAAUhpB,GAAU,OAAOA,EAAOa,KAAK,IAAIooB,EAAeD,KAErE,IAAIC,EAAkB,WAClB,SAASA,EAAeD,GACpB9vB,KAAK8vB,SAAWA,EAQpB,OANAC,EAAe7vB,UAAUU,KAAO,SAAUyE,EAAYyB,GAClD,IAAIkpB,EAAmB,IAAIC,EAAiB5qB,GACxCuD,EAAe9B,EAAOO,UAAU2oB,GAEpC,OADApnB,EAAazD,IAAI,YAAkB6qB,EAAkBhwB,KAAK8vB,WACnDlnB,GAEJmnB,EAVU,GAYjBE,EAAoB,SAAUvrB,GAE9B,SAASurB,IACL,IAAIprB,EAAmB,OAAXH,GAAmBA,EAAO7D,MAAMb,KAAMU,YAAcV,KAEhE,OADA6E,EAAMuM,UAAW,EACVvM,EAkBX,OAtBA,YAAUorB,EAAkBvrB,GAM5BurB,EAAiB/vB,UAAUoF,MAAQ,SAAU/D,GACzCvB,KAAKuB,MAAQA,EACbvB,KAAKoR,UAAW,GAEpB6e,EAAiB/vB,UAAUsL,WAAa,SAAUJ,EAAYK,EAAYJ,EAAYK,EAAYC,GAC9F3L,KAAKkwB,aAETD,EAAiB/vB,UAAU2L,eAAiB,WACxC7L,KAAKkwB,aAETD,EAAiB/vB,UAAUgwB,UAAY,WAC/BlwB,KAAKoR,WACLpR,KAAKoR,UAAW,EAChBpR,KAAKkF,YAAYzD,KAAKzB,KAAKuB,SAG5B0uB,EAvBY,CAwBrB,M,6BC1CF,qDAEWE,EAAQ,IAAI,IAAW,M,6BCFlC,oDAEO,SAASC,EAAKnlB,GACjB,OAAO,SAAUnE,GAAU,OAAOA,EAAOa,KAAK,IAAI0oB,EAAaplB,KAEnE,IAAIolB,EAAgB,WAChB,SAASA,EAAaC,GAClBtwB,KAAKswB,MAAQA,EAKjB,OAHAD,EAAanwB,UAAUU,KAAO,SAAUyE,EAAYyB,GAChD,OAAOA,EAAOO,UAAU,IAAIkpB,EAAelrB,EAAYrF,KAAKswB,SAEzDD,EAPQ,GASfE,EAAkB,SAAU7rB,GAE5B,SAAS6rB,EAAerrB,EAAaorB,GACjC,IAAIzrB,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAG9C,OAFA6E,EAAMyrB,MAAQA,EACdzrB,EAAMoG,MAAQ,EACPpG,EAOX,OAZA,YAAU0rB,EAAgB7rB,GAO1B6rB,EAAerwB,UAAUoF,MAAQ,SAAU2D,KACjCjJ,KAAKiL,MAAQjL,KAAKswB,OACpBtwB,KAAKkF,YAAYzD,KAAKwH,IAGvBsnB,EAbU,CAcnB,M,6BC5BF,qEAIO,SAASC,EAAW9X,GACvB,OAAO,SAAoC5R,GACvC,IAAIc,EAAW,IAAI6oB,EAAc/X,GAC7BgY,EAAS5pB,EAAOa,KAAKC,GACzB,OAAQA,EAAS8oB,OAASA,GAGlC,IAAID,EAAiB,WACjB,SAASA,EAAc/X,GACnB1Y,KAAK0Y,SAAWA,EAKpB,OAHA+X,EAAcvwB,UAAUU,KAAO,SAAUyE,E
AAYyB,GACjD,OAAOA,EAAOO,UAAU,IAAIspB,EAAgBtrB,EAAYrF,KAAK0Y,SAAU1Y,KAAK0wB,UAEzED,EAPS,GAShBE,EAAmB,SAAUjsB,GAE7B,SAASisB,EAAgBzrB,EAAawT,EAAUgY,GAC5C,IAAI7rB,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAG9C,OAFA6E,EAAM6T,SAAWA,EACjB7T,EAAM6rB,OAASA,EACR7rB,EAqBX,OA1BA,YAAU8rB,EAAiBjsB,GAO3BisB,EAAgBzwB,UAAUqD,MAAQ,SAAUgC,GACxC,IAAKvF,KAAKiF,UAAW,CACjB,IAAIrD,OAAS,EACb,IACIA,EAAS5B,KAAK0Y,SAASnT,EAAKvF,KAAK0wB,QAErC,MAAOE,GAEH,YADAlsB,EAAOxE,UAAUqD,MAAM3C,KAAKZ,KAAM4wB,GAGtC5wB,KAAK4F,yBACL,IAAI0F,EAAkB,IAAI,IAAgBtL,UAAMsH,OAAWA,GAC3DtH,KAAKmF,IAAImG,GACT,IAAI8C,EAAoB,YAAkBpO,KAAM4B,OAAQ0F,OAAWA,EAAWgE,GAC1E8C,IAAsB9C,GACtBtL,KAAKmF,IAAIiJ,KAIduiB,EA3BW,CA4BpB,M,6BChDF,4DAGO,SAASE,EAAaC,EAASviB,GAElC,YADkB,IAAdA,IAAwBA,EAAY,KACjC,SAAUzH,GAAU,OAAOA,EAAOa,KAAK,IAAIopB,EAAqBD,EAASviB,KAEpF,IAAIwiB,EAAwB,WACxB,SAASA,EAAqBD,EAASviB,GACnCvO,KAAK8wB,QAAUA,EACf9wB,KAAKuO,UAAYA,EAKrB,OAHAwiB,EAAqB7wB,UAAUU,KAAO,SAAUyE,EAAYyB,GACxD,OAAOA,EAAOO,UAAU,IAAI2pB,EAAuB3rB,EAAYrF,KAAK8wB,QAAS9wB,KAAKuO,aAE/EwiB,EARgB,GAUvBC,EAA0B,SAAUtsB,GAEpC,SAASssB,EAAuB9rB,EAAa4rB,EAASviB,GAClD,IAAI1J,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAM9C,OALA6E,EAAMisB,QAAUA,EAChBjsB,EAAM0J,UAAYA,EAClB1J,EAAMosB,sBAAwB,KAC9BpsB,EAAMqsB,UAAY,KAClBrsB,EAAMuM,UAAW,EACVvM,EA6BX,OArCA,YAAUmsB,EAAwBtsB,GAUlCssB,EAAuB9wB,UAAUoF,MAAQ,SAAU/D,GAC/CvB,KAAKmxB,gBACLnxB,KAAKkxB,UAAY3vB,EACjBvB,KAAKoR,UAAW,EAChBpR,KAAKmF,IAAInF,KAAKixB,sBAAwBjxB,KAAKuO,UAAUtB,SAASmkB,EAAcpxB,KAAK8wB,QAAS9wB,QAE9FgxB,EAAuB9wB,UAAUuF,UAAY,WACzCzF,KAAKqxB,gBACLrxB,KAAKkF,YAAYN,YAErBosB,EAAuB9wB,UAAUmxB,cAAgB,WAE7C,GADArxB,KAAKmxB,gBACDnxB,KAAKoR,SAAU,CACf,IAAI8f,EAAYlxB,KAAKkxB,UACrBlxB,KAAKkxB,UAAY,KACjBlxB,KAAKoR,UAAW,EAChBpR,KAAKkF,YAAYzD,KAAKyvB,KAG9BF,EAAuB9wB,UAAUixB,cAAgB,WAC7C,IAAIF,EAAwBjxB,KAAKixB,sBACH,OAA1BA,IACAjxB,KAAK6J,OAAOonB,GACZA,EAAsBvrB,cACtB1F,KAAKixB,sBAAwB,OAG9BD,EAtCkB,CAuC3B,KACF,SAASI,EAAa/rB,GAClBA,EAAWgsB,kB,6BC1Df,sDAEO,SAASC,EAAIC,EAAWC,EAAYC,GAGvC,YAFmB,IAAfD,IAAyBA,EAAa,UACtB,IAAhBC,IAA0BA,EAAc,KACrC,aAAM,WAAc,OAAOF,IAAcC,EAAaC,O,6BCLjE,oBAoBIzqB,EAEJ,aAAQ,SAAgBuM,GAMtB,IALA,IAAI8G,EAAQ,YAAK9G,GACbrJ,EAAMmQ,EAAM1Z,OACZ+wB,EAAO,GACP9H,EAAM,EAEHA,EAAM1f,GACXwnB,EAAK9H,GAAOrW,EAAI8G,EAAMuP,IACtBA,GAAO,EAGT,OAAO8H,KAGM,O,uGClCR,SAASnG,IACZ,OAAO,SAAkCzkB,GACrC,OAAOA,EAAOa,KAAK,IAAIgqB,EAAiB7qB,KAGhD,ICwCQ8qB,EDxCJD,EAAoB,WACpB,SAASA,EAAiBE,GACtB7xB,KAAK6xB,YAAcA,EAYvB,OAVAF,EAAiBzxB,UAAUU,KAAO,SAAUyE,EAAYyB,GACpD,IAAI+qB,EAAc7xB,KAAK6xB,YACvBA,EAAYC,YACZ,IAAIC,EAAa,IAAI,EAAmB1sB,EAAYwsB,GAChDjpB,EAAe9B,EAAOO,UAAU0qB,GAIpC,OAHKA,EAAWpsB,SACZosB,EAAWC,WAAaH,EAAYI,WAEjCrpB,GAEJ+oB,EAdY,GAgBnB,EAAsB,SAAUjtB,GAEhC,SAASwtB,EAAmBhtB,EAAa2sB,GACrC,IAAIhtB,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAG9C,OAFA6E,EAAMgtB,YAAcA,EACpBhtB,EAAMmtB,WAAa,KACZntB,EA0BX,OA/BA,YAAUqtB,EAAoBxtB,GAO9BwtB,EAAmBhyB,UAAUwG,aAAe,WACxC,IAAImrB,EAAc7xB,KAAK6xB,YACvB,GAAKA,EAAL,CAIA7xB,KAAK6xB,YAAc,KACnB,IAAItG,EAAWsG,EAAYC,UAC3B,GAAIvG,GAAY,EACZvrB,KAAKgyB,WAAa,UAItB,GADAH,EAAYC,UAAYvG,EAAW,EAC/BA,EAAW,EACXvrB,KAAKgyB,WAAa,SADtB,CAIA,IAAIA,EAAahyB,KAAKgyB,WAClBG,EAAmBN,EAAYO,YACnCpyB,KAAKgyB,WAAa,MACdG,GAAsBH,GAAcG,IAAqBH,GACzDG,EAAiBzsB,oBAlBjB1F,KAAKgyB,WAAa,MAqBnBE,EAhCc,CAiCvBztB,EAAA,GClDE,EAAyB,SAAUC,GAEnC,SAAS2tB,EAAsBvrB,EAAQwrB,GACnC,IAAIztB,EAAQH,EAAO9D,KAAKZ,OAASA,KAKjC,OAJA6E,EAAMiC,OAASA,EACfjC,EAAMytB,eAAiBA,EACvBztB,EAAMitB,UAAY,EAClBjtB,EAAM0tB,aAAc,EACb1tB,EA6BX,OApCA,YAAUwtB,EAAuB3tB,GASjC2tB,EAAsBnyB,UAAUwH,WAAa,SAAUrC,GACnD,OAAOrF,KAAKwyB,aAAanrB,UAAUhC,IAEvCgtB,EAAsBnyB,UAAUsyB,WAAa,WACzC,IAAIllB,EAAUtN,KAAKyyB,SAInB,OAHKnlB,IAAWA,EAAQrI,YACpBjF,KAAKyyB,SAAWzyB,KAA
KsyB,kBAElBtyB,KAAKyyB,UAEhBJ,EAAsBnyB,UAAU+xB,QAAU,WACtC,IAAID,EAAahyB,KAAKoyB,YAWtB,OAVKJ,IACDhyB,KAAKuyB,aAAc,GACnBP,EAAahyB,KAAKoyB,YAAc,IAAI1oB,EAAA,GACzBvE,IAAInF,KAAK8G,OACfO,UAAU,IAAI,EAAsBrH,KAAKwyB,aAAcxyB,QACxDgyB,EAAWrsB,SACX3F,KAAKoyB,YAAc,KACnBJ,EAAatoB,EAAA,EAAaY,QAG3B0nB,GAEXK,EAAsBnyB,UAAUqrB,SAAW,WACvC,OAAO,IAAsBvrB,OAE1BqyB,EArCiB,CAsC1B7qB,EAAA,GAESkrB,EAEA,CACH9qB,SAAU,CAAErG,MAAO,MACnBuwB,UAAW,CAAEvwB,MAAO,EAAGiZ,UAAU,GACjCiY,SAAU,CAAElxB,MAAO,KAAMiZ,UAAU,GACnC4X,YAAa,CAAE7wB,MAAO,KAAMiZ,UAAU,GACtC9S,WAAY,CAAEnG,OANdqwB,EAAmB,EAAsB1xB,WAMHwH,YACtC6qB,YAAa,CAAEhxB,MAAOqwB,EAAiBW,YAAa/X,UAAU,GAC9DgY,WAAY,CAAEjxB,MAAOqwB,EAAiBY,YACtCP,QAAS,CAAE1wB,MAAOqwB,EAAiBK,SACnC1G,SAAU,CAAEhqB,MAAOqwB,EAAiBrG,WAGxC,EAAyB,SAAU7mB,GAEnC,SAASiuB,EAAsBztB,EAAa2sB,GACxC,IAAIhtB,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAE9C,OADA6E,EAAMgtB,YAAcA,EACbhtB,EAwBX,OA5BA,YAAU8tB,EAAuBjuB,GAMjCiuB,EAAsBzyB,UAAUsF,OAAS,SAAUD,GAC/CvF,KAAK0G,eACLhC,EAAOxE,UAAUsF,OAAO5E,KAAKZ,KAAMuF,IAEvCotB,EAAsBzyB,UAAUuF,UAAY,WACxCzF,KAAK6xB,YAAYU,aAAc,EAC/BvyB,KAAK0G,eACLhC,EAAOxE,UAAUuF,UAAU7E,KAAKZ,OAEpC2yB,EAAsBzyB,UAAUwG,aAAe,WAC3C,IAAImrB,EAAc7xB,KAAK6xB,YACvB,GAAIA,EAAa,CACb7xB,KAAK6xB,YAAc,KACnB,IAAIG,EAAaH,EAAYO,YAC7BP,EAAYC,UAAY,EACxBD,EAAYY,SAAW,KACvBZ,EAAYO,YAAc,KACtBJ,GACAA,EAAWtsB,gBAIhBitB,EA7BiB,CA8B1BxlB,EAAA,GAiBE,GAhBoB,WACpB,SAASwkB,EAAiBE,GACtB7xB,KAAK6xB,YAAcA,EAEvBF,EAAiBzxB,UAAUU,KAAO,SAAUyE,EAAYyB,GACpD,IAAI+qB,EAAc7xB,KAAK6xB,YACvBA,EAAYC,YACZ,IAAIC,EAAa,IAAI,EAAmB1sB,EAAYwsB,GAChDjpB,EAAe9B,EAAOO,UAAU0qB,GAIpC,OAHKA,EAAWpsB,SACZosB,EAAWC,WAAaH,EAAYI,WAEjCrpB,GAZQ,GAgBG,SAAUlE,GAEhC,SAASwtB,EAAmBhtB,EAAa2sB,GACrC,IAAIhtB,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAE9C,OADA6E,EAAMgtB,YAAcA,EACbhtB,EA0BX,OA9BA,YAAUqtB,EAAoBxtB,GAM9BwtB,EAAmBhyB,UAAUwG,aAAe,WACxC,IAAImrB,EAAc7xB,KAAK6xB,YACvB,GAAKA,EAAL,CAIA7xB,KAAK6xB,YAAc,KACnB,IAAItG,EAAWsG,EAAYC,UAC3B,GAAIvG,GAAY,EACZvrB,KAAKgyB,WAAa,UAItB,GADAH,EAAYC,UAAYvG,EAAW,EAC/BA,EAAW,EACXvrB,KAAKgyB,WAAa,SADtB,CAIA,IAAIA,EAAahyB,KAAKgyB,WAClBG,EAAmBN,EAAYO,YACnCpyB,KAAKgyB,WAAa,MACdG,GAAsBH,GAAcG,IAAqBH,GACzDG,EAAiBzsB,oBAlBjB1F,KAAKgyB,WAAa,MAqBnBE,EA/Bc,CAgCvBztB,EAAA,ICtHF,IAAImuB,EAAqB,WACrB,SAASA,EAAkBN,EAAgB5Z,GACvC1Y,KAAKsyB,eAAiBA,EACtBtyB,KAAK0Y,SAAWA,EASpB,OAPAka,EAAkB1yB,UAAUU,KAAO,SAAUyE,EAAYyB,GACrD,IAAI4R,EAAW1Y,KAAK0Y,SAChBpL,EAAUtN,KAAKsyB,iBACf1pB,EAAe8P,EAASpL,GAASjG,UAAUhC,GAE/C,OADAuD,EAAazD,IAAI2B,EAAOO,UAAUiG,IAC3B1E,GAEJgqB,EAZa,GClBxB,SAASC,IACL,OAAO,IAAI1lB,EAAA,EAER,SAAS2lB,IACZ,OAAO,SAAUhsB,GAAU,OAAOykB,KDNZwH,ECMiCF,EDLhD,SAAmC/rB,GACtC,IAAIwrB,EASJ,GAPIA,EADmC,mBAA5BS,EACUA,EAGA,WACb,OAAOA,GAGS,mBAAbra,EACP,OAAO5R,EAAOa,KAAK,IAAIirB,EAAkBN,EAAgB5Z,IAE7D,IAAImZ,EAAcryB,OAAOW,OAAO2G,EAAQ4rB,GAGxC,OAFAb,EAAY/qB,OAASA,EACrB+qB,EAAYS,eAAiBA,EACtBT,ICXiE/qB,IDNzE,IAAmBisB,EAAyBra,K,yCEDpC,SAASsa,EAAU/pB,GAChC,OAAOA,ECqBT,IAAIyJ,EAEJ,OAAA9F,EAAA,GAAQomB,GAEO,O,uGCeR,SAASC,EAAQC,EAAKC,GACzB,OAAO,IAAI,EAAe,CAAE9L,OAAQ,MAAO6L,IAAKA,EAAKC,QAASA,IAE3D,SAASC,EAASF,EAAKlxB,EAAMmxB,GAChC,OAAO,IAAI,EAAe,CAAE9L,OAAQ,OAAQ6L,IAAKA,EAAKlxB,KAAMA,EAAMmxB,QAASA,IAExE,SAASE,EAAWH,EAAKC,GAC5B,OAAO,IAAI,EAAe,CAAE9L,OAAQ,SAAU6L,IAAKA,EAAKC,QAASA,IAE9D,SAASG,EAAQJ,EAAKlxB,EAAMmxB,GAC/B,OAAO,IAAI,EAAe,CAAE9L,OAAQ,MAAO6L,IAAKA,EAAKlxB,KAAMA,EAAMmxB,QAASA,IAEvE,SAASI,EAAUL,EAAKlxB,EAAMmxB,GACjC,OAAO,IAAI,EAAe,CAAE9L,OAAQ,QAAS6L,IAAKA,EAAKlxB,KAAMA,EAAMmxB,QAASA,IAEhF,IAAIK,EAAc,OAAAlqB,EAAA,IAAI,SAAUL,EAAGa,GAAS,OAAOb,EAAEwqB,YAC9C,SAASC,EAAYR,EAAKC,GAC7B,OAAOK,EAAY,IAAI,EAAe,CAClCnM,OAAQ,MACR6L,IAAKA,EACLS,aAAc,OACdR,QAASA,KAGjB,IAAI,EAAkB,SAAUzuB,GAE5B,SAASkv
B,EAAeC,GACpB,IAAIhvB,EAAQH,EAAO9D,KAAKZ,OAASA,KAC7B8zB,EAAU,CACVjhB,OAAO,EACPkhB,UAAW,WACP,OAAO/zB,KAAKg0B,YAnE5B,WACI,GAAIC,EAAA,EAAKC,eACL,OAAO,IAAID,EAAA,EAAKC,eAEf,GAAMD,EAAA,EAAKE,eACZ,OAAO,IAAIF,EAAA,EAAKE,eAGhB,MAAM,IAAI1tB,MAAM,yCA2DkB2tB,GAxD1C,WACI,GAAIH,EAAA,EAAKC,eACL,OAAO,IAAID,EAAA,EAAKC,eAGhB,IAAIG,OAAS,EACb,IAEI,IADA,IAAIC,EAAU,CAAC,iBAAkB,oBAAqB,sBAC7C9zB,EAAI,EAAGA,EAAI,EAAGA,IACnB,IAEI,GADA6zB,EAASC,EAAQ9zB,GACb,IAAIyzB,EAAA,EAAKM,cAAcF,GACvB,MAGR,MAAO3yB,IAGX,OAAO,IAAIuyB,EAAA,EAAKM,cAAcF,GAElC,MAAO3yB,GACH,MAAM,IAAI+E,MAAM,oDAmCiC+tB,IAEjDR,aAAa,EACbS,iBAAiB,EACjBtB,QAAS,GACT9L,OAAQ,MACRsM,aAAc,OACde,QAAS,GAEb,GAA4B,iBAAjBb,EACPC,EAAQZ,IAAMW,OAGd,IAAK,IAAIxK,KAAQwK,EACTA,EAAah0B,eAAewpB,KAC5ByK,EAAQzK,GAAQwK,EAAaxK,IAKzC,OADAxkB,EAAMivB,QAAUA,EACTjvB,EAKa,IAChB1E,EAWR,OA3CA,YAAUyzB,EAAgBlvB,GA4B1BkvB,EAAe1zB,UAAUwH,WAAa,SAAUrC,GAC5C,OAAO,IAAI,EAAeA,EAAYrF,KAAK8zB,UAE/CF,EAAezzB,SACPA,EAAS,SAAU0zB,GACnB,OAAO,IAAID,EAAeC,KAEvBpe,IAAMwd,EACb9yB,EAAOw0B,KAAOvB,EACdjzB,EAAOogB,OAAS8S,EAChBlzB,EAAOy0B,IAAMtB,EACbnzB,EAAO00B,MAAQtB,EACfpzB,EAAO20B,QAAUpB,EACVvzB,GAEJyzB,EA5CU,CA6CnBpsB,EAAA,GAEE,EAAkB,SAAU9C,GAE5B,SAASqwB,EAAe7vB,EAAa4uB,GACjC,IAAIjvB,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAC9C6E,EAAMivB,QAAUA,EAChBjvB,EAAMhD,MAAO,EACb,IAAIsxB,EAAUW,EAAQX,QAAUW,EAAQX,SAAW,GAUnD,OATKW,EAAQE,aAAgBnvB,EAAMmwB,UAAU7B,EAAS,sBAClDA,EAAQ,oBAAsB,kBAEVtuB,EAAMmwB,UAAU7B,EAAS,iBACrBc,EAAA,EAAKgB,UAAYnB,EAAQ9xB,gBAAgBiyB,EAAA,EAAKgB,eAAqC,IAAjBnB,EAAQ9xB,OAClGmxB,EAAQ,gBAAkB,oDAE9BW,EAAQ9xB,KAAO6C,EAAMqwB,cAAcpB,EAAQ9xB,KAAM6C,EAAMmwB,UAAUlB,EAAQX,QAAS,iBAClFtuB,EAAM0qB,OACC1qB,EAyLX,OAxMA,YAAUkwB,EAAgBrwB,GAiB1BqwB,EAAe70B,UAAUuB,KAAO,SAAUC,GACtC1B,KAAK6B,MAAO,EACZ,IACID,EADWuzB,EAANn1B,KAAem1B,IAAKrB,EAApB9zB,KAAiC8zB,QAAS5uB,EAA1ClF,KAA2DkF,YAEpE,IACItD,EAAS,IAAIwzB,EAAa1zB,EAAGyzB,EAAKrB,GAEtC,MAAOvuB,GACH,OAAOL,EAAY3B,MAAMgC,GAE7BL,EAAYzD,KAAKG,IAErBmzB,EAAe70B,UAAUqvB,KAAO,WAC5B,IAAeuE,EAAN9zB,KAAmB8zB,QAAS1gB,EAA5BpT,KAAoC8zB,QAASuB,EAAOjiB,EAAGiiB,KAAMhO,EAASjU,EAAGiU,OAAQ6L,EAAM9f,EAAG8f,IAAKrgB,EAAQO,EAAGP,MAAOyiB,EAAWliB,EAAGkiB,SAAUnC,EAAU/f,EAAG+f,QAASnxB,EAAOoR,EAAGpR,KAClL,IACI,IAAImzB,EAAMn1B,KAAKm1B,IAAMrB,EAAQC,YAC7B/zB,KAAKu1B,YAAYJ,EAAKrB,GAClBuB,EACAF,EAAIK,KAAKnO,EAAQ6L,EAAKrgB,EAAOwiB,EAAMC,GAGnCH,EAAIK,KAAKnO,EAAQ6L,EAAKrgB,GAEtBA,IACAsiB,EAAIT,QAAUZ,EAAQY,QACtBS,EAAIxB,aAAeG,EAAQH,cAE3B,oBAAqBwB,IACrBA,EAAIV,kBAAoBX,EAAQW,iBAEpCz0B,KAAKy1B,WAAWN,EAAKhC,GACjBnxB,EACAmzB,EAAI5F,KAAKvtB,GAGTmzB,EAAI5F,OAGZ,MAAOhqB,GACHvF,KAAKuD,MAAMgC,KAGnBwvB,EAAe70B,UAAUg1B,cAAgB,SAAUlzB,EAAM0zB,GACrD,IAAK1zB,GAAwB,iBAATA,EAChB,OAAOA,EAEN,GAAIiyB,EAAA,EAAKgB,UAAYjzB,aAAgBiyB,EAAA,EAAKgB,SAC3C,OAAOjzB,EAEX,GAAI0zB,EAAa,CACb,IAAIC,EAAaD,EAAYlrB,QAAQ,MACjB,IAAhBmrB,IACAD,EAAcA,EAAY5N,UAAU,EAAG6N,IAG/C,OAAQD,GACJ,IAAK,oCACD,OAAOl2B,OAAO4jB,KAAKphB,GAAMsH,KAAI,SAAUiJ,GAAO,OAAOqjB,mBAAmBrjB,GAAO,IAAMqjB,mBAAmB5zB,EAAKuQ,OAAU/I,KAAK,KAChI,IAAK,mBACD,OAAOqsB,KAAKC,UAAU9zB,GAC1B,QACI,OAAOA,IAGnB+yB,EAAe70B,UAAUu1B,WAAa,SAAUN,EAAKhC,GACjD,IAAK,IAAI5gB,KAAO4gB,EACRA,EAAQtzB,eAAe0S,IACvB4iB,EAAIY,iBAAiBxjB,EAAK4gB,EAAQ5gB,KAI9CwiB,EAAe70B,UAAU80B,UAAY,SAAU7B,EAAS6C,GACpD,IAAK,IAAIzjB,KAAO4gB,EACZ,GAAI5gB,EAAI0jB,gBAAkBD,EAAWC,cACjC,OAAO9C,EAAQ5gB,IAK3BwiB,EAAe70B,UAAUq1B,YAAc,SAAUJ,EAAKrB,GAClD,IAAIoC,EAAqBpC,EAAQoC,mBACjC,SAASC,EAAWz0B,GAChB,IAII6B,EAJA6D,EAAK+uB,EAAY9wB,EAAa+B,EAAG/B,WAAY6wB,EAAqB9uB,EAAG8uB,mBAAoBpC,EAAU1sB,EAAG0sB,QACtGoC,GACAA,EAAmB3yB,MAAM7B,GAG7B,IACI6B,EAAQ,IAAI6yB,EAAiBp2B,KAAM8zB,GAEvC,MAAOvuB,GACHhC,EAAQgC,EAEZF,EAAW9B,MAAMA,GAMrB,GAJA4xB,EAAIkB,UAAYF,EAChBA,EAAWrC,QA
AUA,EACrBqC,EAAW9wB,WAAarF,KACxBm2B,EAAWD,mBAAqBA,EAC5Bf,EAAImB,QAAU,oBAAqBnB,EAAK,CAEpC,IAAIoB,EAaJC,EAdJ,GAAIN,EAEAK,EAAgB,SAAU70B,GACG60B,EAAcL,mBACpBz0B,KAAKC,IAExBuyB,EAAA,EAAKE,eACLgB,EAAIsB,WAAaF,EAGjBpB,EAAImB,OAAOG,WAAaF,EAE5BA,EAAcL,mBAAqBA,EAGvCM,EAAa,SAAU90B,GACnB,IAII6B,EAJA6D,EAAKovB,EAAYN,EAAqB9uB,EAAG8uB,mBAAoB7wB,EAAa+B,EAAG/B,WAAYyuB,EAAU1sB,EAAG0sB,QACtGoC,GACAA,EAAmB3yB,MAAM7B,GAG7B,IACI6B,EAAQ,IAAImzB,EAAU,aAAc12B,KAAM8zB,GAE9C,MAAOvuB,GACHhC,EAAQgC,EAEZF,EAAW9B,MAAMA,IAErB4xB,EAAIwB,QAAUH,EACdA,EAAW1C,QAAUA,EACrB0C,EAAWnxB,WAAarF,KACxBw2B,EAAWN,mBAAqBA,EAEpC,SAASU,EAAoBl1B,IAO7B,SAASm1B,EAAQn1B,GACb,IAAI0F,EAAKyvB,EAASxxB,EAAa+B,EAAG/B,WAAY6wB,EAAqB9uB,EAAG8uB,mBAAoBpC,EAAU1sB,EAAG0sB,QACvG,GAAwB,IAApB9zB,KAAK82B,WAAkB,CACvB,IAAIC,EAA2B,OAAhB/2B,KAAKg3B,OAAkB,IAAMh3B,KAAKg3B,OAC7CvD,EAAkC,SAAtBzzB,KAAK2zB,aAA2B3zB,KAAKyzB,UAAYzzB,KAAKi3B,aAAgBj3B,KAAKyzB,SAI3F,GAHiB,IAAbsD,IACAA,EAAWtD,EAAW,IAAM,GAE5BsD,EAAW,IACPb,GACAA,EAAmBtxB,WAEvBS,EAAW5D,KAAKC,GAChB2D,EAAWT,eAEV,CACGsxB,GACAA,EAAmB3yB,MAAM7B,GAE7B,IAAI6B,OAAQ,EACZ,IACIA,EAAQ,IAAImzB,EAAU,cAAgBK,EAAU/2B,KAAM8zB,GAE1D,MAAOvuB,GACHhC,EAAQgC,EAEZF,EAAW9B,MAAMA,KA9B7B4xB,EAAI+B,mBAAqBN,EACzBA,EAAoBvxB,WAAarF,KACjC42B,EAAoBV,mBAAqBA,EACzCU,EAAoB9C,QAAUA,EA+B9BqB,EAAIgC,OAASN,EACbA,EAAQxxB,WAAarF,KACrB62B,EAAQX,mBAAqBA,EAC7BW,EAAQ/C,QAAUA,GAEtBiB,EAAe70B,UAAUwF,YAAc,WACnC,IAAe7D,EAAN7B,KAAgB6B,KAAMszB,EAAtBn1B,KAA+Bm1B,KACnCtzB,GAAQszB,GAA0B,IAAnBA,EAAI2B,YAAyC,mBAAd3B,EAAIiC,OACnDjC,EAAIiC,QAER1yB,EAAOxE,UAAUwF,YAAY9E,KAAKZ,OAE/B+0B,EAzMU,CA0MnBtwB,EAAA,GAEE2wB,EACA,SAAsBiC,EAAelC,EAAKrB,GACtC9zB,KAAKq3B,cAAgBA,EACrBr3B,KAAKm1B,IAAMA,EACXn1B,KAAK8zB,QAAUA,EACf9zB,KAAKg3B,OAAS7B,EAAI6B,OAClBh3B,KAAK2zB,aAAewB,EAAIxB,cAAgBG,EAAQH,aAChD3zB,KAAKyzB,SAAW6D,EAAiBt3B,KAAK2zB,aAAcwB,IAoBjDuB,EAfS,WAChB,SAASa,EAAcluB,EAAS8rB,EAAKrB,GASjC,OARArtB,MAAM7F,KAAKZ,MACXA,KAAKqJ,QAAUA,EACfrJ,KAAKyJ,KAAO,YACZzJ,KAAKm1B,IAAMA,EACXn1B,KAAK8zB,QAAUA,EACf9zB,KAAKg3B,OAAS7B,EAAI6B,OAClBh3B,KAAK2zB,aAAewB,EAAIxB,cAAgBG,EAAQH,aAChD3zB,KAAKyzB,SAAW6D,EAAiBt3B,KAAK2zB,aAAcwB,GAC7Cn1B,KAGX,OADAu3B,EAAcr3B,UAAYV,OAAOW,OAAOsG,MAAMvG,WACvCq3B,EAbS,GAwBpB,SAASD,EAAiB3D,EAAcwB,GACpC,OAAQxB,GACJ,IAAK,OACD,OAXZ,SAAmBwB,GACf,MAAI,aAAcA,EACPA,EAAIxB,aAAewB,EAAI1B,SAAWoC,KAAK2B,MAAMrC,EAAI1B,UAAY0B,EAAI8B,cAAgB,QAGjFpB,KAAK2B,MAAMrC,EAAI8B,cAAgB,QAM3BQ,CAAUtC,GACrB,IAAK,MACD,OAAOA,EAAIuC,YACf,IAAK,OACL,QACI,MAAQ,aAAcvC,EAAOA,EAAI1B,SAAW0B,EAAI8B,cAG5D,IASWb,EATgB,WACvB,SAASuB,EAAqBxC,EAAKrB,GAG/B,OAFA4C,EAAU91B,KAAKZ,KAAM,eAAgBm1B,EAAKrB,GAC1C9zB,KAAKyJ,KAAO,mBACLzJ,KAGX,OADA23B,EAAqBz3B,UAAYV,OAAOW,OAAOu2B,EAAUx2B,WAClDy3B,EAPgB,GC1WhBC,EAA6B,EAAez3B,Q,iFCS5C03B,EAVuB,WAC9B,SAASC,IAIL,OAHArxB,MAAM7F,KAAKZ,MACXA,KAAKqJ,QAAU,wBACfrJ,KAAKyJ,KAAO,0BACLzJ,KAGX,OADA83B,EAA4B53B,UAAYV,OAAOW,OAAOsG,MAAMvG,WACrD43B,EARuB,G,QCI3B,SAASC,EAAK9sB,GACjB,OAAO,SAAUnE,GACb,OAAc,IAAVmE,EACO,IAGAnE,EAAOa,KAAK,IAAI,EAAasD,KAIhD,IAAI,EAAgB,WAChB,SAAS+sB,EAAa1H,GAElB,GADAtwB,KAAKswB,MAAQA,EACTtwB,KAAKswB,MAAQ,EACb,MAAM,IAAIuH,EAMlB,OAHAG,EAAa93B,UAAUU,KAAO,SAAUyE,EAAYyB,GAChD,OAAOA,EAAOO,UAAU,IAAI,EAAehC,EAAYrF,KAAKswB,SAEzD0H,EAVQ,GAYf,EAAkB,SAAUtzB,GAE5B,SAASuzB,EAAe/yB,EAAaorB,GACjC,IAAIzrB,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAG9C,OAFA6E,EAAMyrB,MAAQA,EACdzrB,EAAMoG,MAAQ,EACPpG,EAaX,OAlBA,YAAUozB,EAAgBvzB,GAO1BuzB,EAAe/3B,UAAUoF,MAAQ,SAAU/D,GACvC,IAAI+uB,EAAQtwB,KAAKswB,MACbrlB,IAAUjL,KAAKiL,MACfA,GAASqlB,IACTtwB,KAAKkF,YAAYzD,KAAKF,GAClB0J,IAAUqlB,IACVtwB,KAAKkF,YAAYN,WACjB5E,KAAK0F,iBAIVuyB,EAnBU,CAoBnBxzB,EAAA,I,qGCzCK,SAAS,EAAMgL,EAAOlB,QACP,IAAdA,IAAwBA,EAAY,KACxC,ICPmBh
N,EDQf22B,GCRe32B,EDOQkO,aCNHE,OAASwoB,OAAO52B,IDOPkO,EAAQlB,EAAUgB,MAAS3B,KAAKuX,IAAI1V,GACrE,OAAO,SAAU3I,GAAU,OAAOA,EAAOa,KAAK,IAAIywB,EAAcF,EAAU3pB,KAE9E,IAAI6pB,EAAiB,WACjB,SAASA,EAAc3oB,EAAOlB,GAC1BvO,KAAKyP,MAAQA,EACbzP,KAAKuO,UAAYA,EAKrB,OAHA6pB,EAAcl4B,UAAUU,KAAO,SAAUyE,EAAYyB,GACjD,OAAOA,EAAOO,UAAU,IAAI,EAAgBhC,EAAYrF,KAAKyP,MAAOzP,KAAKuO,aAEtE6pB,EARS,GAUhB,EAAmB,SAAU1zB,GAE7B,SAAS2zB,EAAgBnzB,EAAauK,EAAOlB,GACzC,IAAI1J,EAAQH,EAAO9D,KAAKZ,KAAMkF,IAAgBlF,KAM9C,OALA6E,EAAM4K,MAAQA,EACd5K,EAAM0J,UAAYA,EAClB1J,EAAMqjB,MAAQ,GACdrjB,EAAMkL,QAAS,EACflL,EAAM8L,SAAU,EACT9L,EAwDX,OAhEA,YAAUwzB,EAAiB3zB,GAU3B2zB,EAAgBpnB,SAAW,SAAUvB,GAKjC,IAJA,IAAI5I,EAAS4I,EAAM5I,OACfohB,EAAQphB,EAAOohB,MACf3Z,EAAYmB,EAAMnB,UAClBrJ,EAAcwK,EAAMxK,YACjBgjB,EAAMvnB,OAAS,GAAMunB,EAAM,GAAGgB,KAAO3a,EAAUgB,OAAU,GAC5D2Y,EAAM3jB,QAAQmQ,aAAarD,QAAQnM,GAEvC,GAAIgjB,EAAMvnB,OAAS,EAAG,CAClB,IAAI23B,EAAU1qB,KAAKub,IAAI,EAAGjB,EAAM,GAAGgB,KAAO3a,EAAUgB,OACpDvP,KAAKiN,SAASyC,EAAO4oB,QAEhBxxB,EAAO7B,WACZ6B,EAAO5B,YAAYN,WACnBkC,EAAOiJ,QAAS,IAGhB/P,KAAK0F,cACLoB,EAAOiJ,QAAS,IAGxBsoB,EAAgBn4B,UAAUq4B,UAAY,SAAUhqB,GAC5CvO,KAAK+P,QAAS,EACI/P,KAAKkF,YACXC,IAAIoJ,EAAUtB,SAASorB,EAAgBpnB,SAAUjR,KAAKyP,MAAO,CACrE3I,OAAQ9G,KAAMkF,YAAalF,KAAKkF,YAAaqJ,UAAWA,MAGhE8pB,EAAgBn4B,UAAUs4B,qBAAuB,SAAU9jB,GACvD,IAAqB,IAAjB1U,KAAK2Q,QAAT,CAGA,IAAIpC,EAAYvO,KAAKuO,UACjBlF,EAAU,IAAIovB,EAAalqB,EAAUgB,MAAQvP,KAAKyP,MAAOiF,GAC7D1U,KAAKkoB,MAAMllB,KAAKqG,IACI,IAAhBrJ,KAAK+P,QACL/P,KAAKu4B,UAAUhqB,KAGvB8pB,EAAgBn4B,UAAUoF,MAAQ,SAAU/D,GACxCvB,KAAKw4B,qBAAqBtnB,EAAA,EAAaO,WAAWlQ,KAEtD82B,EAAgBn4B,UAAUsF,OAAS,SAAUD,GACzCvF,KAAK2Q,SAAU,EACf3Q,KAAKkoB,MAAQ,GACbloB,KAAKkF,YAAY3B,MAAMgC,GACvBvF,KAAK0F,eAET2yB,EAAgBn4B,UAAUuF,UAAY,WACR,IAAtBzF,KAAKkoB,MAAMvnB,QACXX,KAAKkF,YAAYN,WAErB5E,KAAK0F,eAEF2yB,EAjEW,CAkEpB5zB,EAAA,GACEg0B,EACA,SAAsBvP,EAAMxU,GACxB1U,KAAKkpB,KAAOA,EACZlpB,KAAK0U,aAAeA","file":"assets/javascripts/vendor.f81b9e8b.min.js","sourcesContent":["/*! *****************************************************************************\r\nCopyright (c) Microsoft Corporation. All rights reserved.\r\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use\r\nthis file except in compliance with the License. You may obtain a copy of the\r\nLicense at http://www.apache.org/licenses/LICENSE-2.0\r\n\r\nTHIS CODE IS PROVIDED ON AN *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\r\nKIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED\r\nWARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,\r\nMERCHANTABLITY OR NON-INFRINGEMENT.\r\n\r\nSee the Apache Version 2.0 License for specific language governing permissions\r\nand limitations under the License.\r\n***************************************************************************** */\r\n/* global Reflect, Promise */\r\n\r\nvar extendStatics = function(d, b) {\r\n extendStatics = Object.setPrototypeOf ||\r\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\r\n function (d, b) { for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p]; };\r\n return extendStatics(d, b);\r\n};\r\n\r\nexport function __extends(d, b) {\r\n extendStatics(d, b);\r\n function __() { this.constructor = d; }\r\n d.prototype = b === null ? 
Object.create(b) : (__.prototype = b.prototype, new __());\r\n}\r\n\r\nexport var __assign = function() {\r\n __assign = Object.assign || function __assign(t) {\r\n for (var s, i = 1, n = arguments.length; i < n; i++) {\r\n s = arguments[i];\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\r\n }\r\n return t;\r\n }\r\n return __assign.apply(this, arguments);\r\n}\r\n\r\nexport function __rest(s, e) {\r\n var t = {};\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\r\n t[p] = s[p];\r\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\r\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\r\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\r\n t[p[i]] = s[p[i]];\r\n }\r\n return t;\r\n}\r\n\r\nexport function __decorate(decorators, target, key, desc) {\r\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\r\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\r\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\r\n return c > 3 && r && Object.defineProperty(target, key, r), r;\r\n}\r\n\r\nexport function __param(paramIndex, decorator) {\r\n return function (target, key) { decorator(target, key, paramIndex); }\r\n}\r\n\r\nexport function __metadata(metadataKey, metadataValue) {\r\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\r\n}\r\n\r\nexport function __awaiter(thisArg, _arguments, P, generator) {\r\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\r\n return new (P || (P = Promise))(function (resolve, reject) {\r\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\r\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\r\n function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\r\n step((generator = generator.apply(thisArg, _arguments || [])).next());\r\n });\r\n}\r\n\r\nexport function __generator(thisArg, body) {\r\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;\r\n return g = { next: verb(0), \"throw\": verb(1), \"return\": verb(2) }, typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\r\n function verb(n) { return function (v) { return step([n, v]); }; }\r\n function step(op) {\r\n if (f) throw new TypeError(\"Generator is already executing.\");\r\n while (_) try {\r\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? 
y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\r\n if (y = 0, t) op = [op[0] & 2, t.value];\r\n switch (op[0]) {\r\n case 0: case 1: t = op; break;\r\n case 4: _.label++; return { value: op[1], done: false };\r\n case 5: _.label++; y = op[1]; op = [0]; continue;\r\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\r\n default:\r\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\r\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\r\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\r\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\r\n if (t[2]) _.ops.pop();\r\n _.trys.pop(); continue;\r\n }\r\n op = body.call(thisArg, _);\r\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\r\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\r\n }\r\n}\r\n\r\nexport function __exportStar(m, exports) {\r\n for (var p in m) if (!exports.hasOwnProperty(p)) exports[p] = m[p];\r\n}\r\n\r\nexport function __values(o) {\r\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\r\n if (m) return m.call(o);\r\n if (o && typeof o.length === \"number\") return {\r\n next: function () {\r\n if (o && i >= o.length) o = void 0;\r\n return { value: o && o[i++], done: !o };\r\n }\r\n };\r\n throw new TypeError(s ? \"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\r\n}\r\n\r\nexport function __read(o, n) {\r\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\r\n if (!m) return o;\r\n var i = m.call(o), r, ar = [], e;\r\n try {\r\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\r\n }\r\n catch (error) { e = { error: error }; }\r\n finally {\r\n try {\r\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\r\n }\r\n finally { if (e) throw e.error; }\r\n }\r\n return ar;\r\n}\r\n\r\nexport function __spread() {\r\n for (var ar = [], i = 0; i < arguments.length; i++)\r\n ar = ar.concat(__read(arguments[i]));\r\n return ar;\r\n}\r\n\r\nexport function __spreadArrays() {\r\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\r\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\r\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\r\n r[k] = a[j];\r\n return r;\r\n};\r\n\r\nexport function __await(v) {\r\n return this instanceof __await ? (this.v = v, this) : new __await(v);\r\n}\r\n\r\nexport function __asyncGenerator(thisArg, _arguments, generator) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\r\n return i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i;\r\n function verb(n) { if (g[n]) i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; }\r\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\r\n function step(r) { r.value instanceof __await ? 
Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\r\n function fulfill(value) { resume(\"next\", value); }\r\n function reject(value) { resume(\"throw\", value); }\r\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\r\n}\r\n\r\nexport function __asyncDelegator(o) {\r\n var i, p;\r\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\r\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: n === \"return\" } : f ? f(v) : v; } : f; }\r\n}\r\n\r\nexport function __asyncValues(o) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var m = o[Symbol.asyncIterator], i;\r\n return m ? m.call(o) : (o = typeof __values === \"function\" ? __values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\r\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\r\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\r\n}\r\n\r\nexport function __makeTemplateObject(cooked, raw) {\r\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\r\n return cooked;\r\n};\r\n\r\nexport function __importStar(mod) {\r\n if (mod && mod.__esModule) return mod;\r\n var result = {};\r\n if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k];\r\n result.default = mod;\r\n return result;\r\n}\r\n\r\nexport function __importDefault(mod) {\r\n return (mod && mod.__esModule) ? 
mod : { default: mod };\r\n}\r\n\r\nexport function __classPrivateFieldGet(receiver, privateMap) {\r\n if (!privateMap.has(receiver)) {\r\n throw new TypeError(\"attempted to get private field on non-instance\");\r\n }\r\n return privateMap.get(receiver);\r\n}\r\n\r\nexport function __classPrivateFieldSet(receiver, privateMap, value) {\r\n if (!privateMap.has(receiver)) {\r\n throw new TypeError(\"attempted to set private field on non-instance\");\r\n }\r\n privateMap.set(receiver, value);\r\n return value;\r\n}\r\n","import { __extends } from \"tslib\";\nimport { isFunction } from './util/isFunction';\nimport { empty as emptyObserver } from './Observer';\nimport { Subscription } from './Subscription';\nimport { rxSubscriber as rxSubscriberSymbol } from '../internal/symbol/rxSubscriber';\nimport { config } from './config';\nimport { hostReportError } from './util/hostReportError';\nvar Subscriber = (function (_super) {\n __extends(Subscriber, _super);\n function Subscriber(destinationOrNext, error, complete) {\n var _this = _super.call(this) || this;\n _this.syncErrorValue = null;\n _this.syncErrorThrown = false;\n _this.syncErrorThrowable = false;\n _this.isStopped = false;\n switch (arguments.length) {\n case 0:\n _this.destination = emptyObserver;\n break;\n case 1:\n if (!destinationOrNext) {\n _this.destination = emptyObserver;\n break;\n }\n if (typeof destinationOrNext === 'object') {\n if (destinationOrNext instanceof Subscriber) {\n _this.syncErrorThrowable = destinationOrNext.syncErrorThrowable;\n _this.destination = destinationOrNext;\n destinationOrNext.add(_this);\n }\n else {\n _this.syncErrorThrowable = true;\n _this.destination = new SafeSubscriber(_this, destinationOrNext);\n }\n break;\n }\n default:\n _this.syncErrorThrowable = true;\n _this.destination = new SafeSubscriber(_this, destinationOrNext, error, complete);\n break;\n }\n return _this;\n }\n Subscriber.prototype[rxSubscriberSymbol] = function () { return this; };\n Subscriber.create = function (next, error, complete) {\n var subscriber = new Subscriber(next, error, complete);\n subscriber.syncErrorThrowable = false;\n return subscriber;\n };\n Subscriber.prototype.next = function (value) {\n if (!this.isStopped) {\n this._next(value);\n }\n };\n Subscriber.prototype.error = function (err) {\n if (!this.isStopped) {\n this.isStopped = true;\n this._error(err);\n }\n };\n Subscriber.prototype.complete = function () {\n if (!this.isStopped) {\n this.isStopped = true;\n this._complete();\n }\n };\n Subscriber.prototype.unsubscribe = function () {\n if (this.closed) {\n return;\n }\n this.isStopped = true;\n _super.prototype.unsubscribe.call(this);\n };\n Subscriber.prototype._next = function (value) {\n this.destination.next(value);\n };\n Subscriber.prototype._error = function (err) {\n this.destination.error(err);\n this.unsubscribe();\n };\n Subscriber.prototype._complete = function () {\n this.destination.complete();\n this.unsubscribe();\n };\n Subscriber.prototype._unsubscribeAndRecycle = function () {\n var _parentOrParents = this._parentOrParents;\n this._parentOrParents = null;\n this.unsubscribe();\n this.closed = false;\n this.isStopped = false;\n this._parentOrParents = _parentOrParents;\n return this;\n };\n return Subscriber;\n}(Subscription));\nexport { Subscriber };\nvar SafeSubscriber = (function (_super) {\n __extends(SafeSubscriber, _super);\n function SafeSubscriber(_parentSubscriber, observerOrNext, error, complete) {\n var _this = _super.call(this) || this;\n _this._parentSubscriber = 
_parentSubscriber;\n var next;\n var context = _this;\n if (isFunction(observerOrNext)) {\n next = observerOrNext;\n }\n else if (observerOrNext) {\n next = observerOrNext.next;\n error = observerOrNext.error;\n complete = observerOrNext.complete;\n if (observerOrNext !== emptyObserver) {\n context = Object.create(observerOrNext);\n if (isFunction(context.unsubscribe)) {\n _this.add(context.unsubscribe.bind(context));\n }\n context.unsubscribe = _this.unsubscribe.bind(_this);\n }\n }\n _this._context = context;\n _this._next = next;\n _this._error = error;\n _this._complete = complete;\n return _this;\n }\n SafeSubscriber.prototype.next = function (value) {\n if (!this.isStopped && this._next) {\n var _parentSubscriber = this._parentSubscriber;\n if (!config.useDeprecatedSynchronousErrorHandling || !_parentSubscriber.syncErrorThrowable) {\n this.__tryOrUnsub(this._next, value);\n }\n else if (this.__tryOrSetError(_parentSubscriber, this._next, value)) {\n this.unsubscribe();\n }\n }\n };\n SafeSubscriber.prototype.error = function (err) {\n if (!this.isStopped) {\n var _parentSubscriber = this._parentSubscriber;\n var useDeprecatedSynchronousErrorHandling = config.useDeprecatedSynchronousErrorHandling;\n if (this._error) {\n if (!useDeprecatedSynchronousErrorHandling || !_parentSubscriber.syncErrorThrowable) {\n this.__tryOrUnsub(this._error, err);\n this.unsubscribe();\n }\n else {\n this.__tryOrSetError(_parentSubscriber, this._error, err);\n this.unsubscribe();\n }\n }\n else if (!_parentSubscriber.syncErrorThrowable) {\n this.unsubscribe();\n if (useDeprecatedSynchronousErrorHandling) {\n throw err;\n }\n hostReportError(err);\n }\n else {\n if (useDeprecatedSynchronousErrorHandling) {\n _parentSubscriber.syncErrorValue = err;\n _parentSubscriber.syncErrorThrown = true;\n }\n else {\n hostReportError(err);\n }\n this.unsubscribe();\n }\n }\n };\n SafeSubscriber.prototype.complete = function () {\n var _this = this;\n if (!this.isStopped) {\n var _parentSubscriber = this._parentSubscriber;\n if (this._complete) {\n var wrappedComplete = function () { return _this._complete.call(_this._context); };\n if (!config.useDeprecatedSynchronousErrorHandling || !_parentSubscriber.syncErrorThrowable) {\n this.__tryOrUnsub(wrappedComplete);\n this.unsubscribe();\n }\n else {\n this.__tryOrSetError(_parentSubscriber, wrappedComplete);\n this.unsubscribe();\n }\n }\n else {\n this.unsubscribe();\n }\n }\n };\n SafeSubscriber.prototype.__tryOrUnsub = function (fn, value) {\n try {\n fn.call(this._context, value);\n }\n catch (err) {\n this.unsubscribe();\n if (config.useDeprecatedSynchronousErrorHandling) {\n throw err;\n }\n else {\n hostReportError(err);\n }\n }\n };\n SafeSubscriber.prototype.__tryOrSetError = function (parent, fn, value) {\n if (!config.useDeprecatedSynchronousErrorHandling) {\n throw new Error('bad call');\n }\n try {\n fn.call(this._context, value);\n }\n catch (err) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n parent.syncErrorValue = err;\n parent.syncErrorThrown = true;\n return true;\n }\n else {\n hostReportError(err);\n return true;\n }\n }\n return false;\n };\n SafeSubscriber.prototype._unsubscribe = function () {\n var _parentSubscriber = this._parentSubscriber;\n this._context = null;\n this._parentSubscriber = null;\n _parentSubscriber.unsubscribe();\n };\n return SafeSubscriber;\n}(Subscriber));\nexport { SafeSubscriber };\n//# sourceMappingURL=Subscriber.js.map","var Deferred = (function () {\n function Deferred() {\n var _this = this;\n this.resolve = 
null;\n this.reject = null;\n this.promise = new Promise(function (a, b) {\n _this.resolve = a;\n _this.reject = b;\n });\n }\n return Deferred;\n}());\nexport { Deferred };\n//# sourceMappingURL=deferred.js.map","import { __asyncGenerator, __await, __generator } from \"tslib\";\nimport { Deferred } from './util/deferred';\nexport function asyncIteratorFrom(source) {\n return coroutine(source);\n}\nfunction coroutine(source) {\n return __asyncGenerator(this, arguments, function coroutine_1() {\n var deferreds, values, hasError, error, completed, subs, d, result, err_1;\n return __generator(this, function (_a) {\n switch (_a.label) {\n case 0:\n deferreds = [];\n values = [];\n hasError = false;\n error = null;\n completed = false;\n subs = source.subscribe({\n next: function (value) {\n if (deferreds.length > 0) {\n deferreds.shift().resolve({ value: value, done: false });\n }\n else {\n values.push(value);\n }\n },\n error: function (err) {\n hasError = true;\n error = err;\n while (deferreds.length > 0) {\n deferreds.shift().reject(err);\n }\n },\n complete: function () {\n completed = true;\n while (deferreds.length > 0) {\n deferreds.shift().resolve({ value: undefined, done: true });\n }\n },\n });\n _a.label = 1;\n case 1:\n _a.trys.push([1, 16, 17, 18]);\n _a.label = 2;\n case 2:\n if (!true) return [3, 15];\n if (!(values.length > 0)) return [3, 5];\n return [4, __await(values.shift())];\n case 3: return [4, _a.sent()];\n case 4:\n _a.sent();\n return [3, 14];\n case 5:\n if (!completed) return [3, 7];\n return [4, __await(void 0)];\n case 6: return [2, _a.sent()];\n case 7:\n if (!hasError) return [3, 8];\n throw error;\n case 8:\n d = new Deferred();\n deferreds.push(d);\n return [4, __await(d.promise)];\n case 9:\n result = _a.sent();\n if (!result.done) return [3, 11];\n return [4, __await(void 0)];\n case 10: return [2, _a.sent()];\n case 11: return [4, __await(result.value)];\n case 12: return [4, _a.sent()];\n case 13:\n _a.sent();\n _a.label = 14;\n case 14: return [3, 2];\n case 15: return [3, 18];\n case 16:\n err_1 = _a.sent();\n throw err_1;\n case 17:\n subs.unsubscribe();\n return [7];\n case 18: return [2];\n }\n });\n });\n}\n//# sourceMappingURL=asyncIteratorFrom.js.map","import { canReportError } from './util/canReportError';\nimport { toSubscriber } from './util/toSubscriber';\nimport { observable as Symbol_observable } from './symbol/observable';\nimport { pipeFromArray } from './util/pipe';\nimport { config } from './config';\nimport { asyncIteratorFrom } from './asyncIteratorFrom';\nvar Observable = (function () {\n function Observable(subscribe) {\n this._isScalar = false;\n if (subscribe) {\n this._subscribe = subscribe;\n }\n }\n Observable.prototype.lift = function (operator) {\n var observable = new Observable();\n observable.source = this;\n observable.operator = operator;\n return observable;\n };\n Observable.prototype.subscribe = function (observerOrNext, error, complete) {\n var operator = this.operator;\n var sink = toSubscriber(observerOrNext, error, complete);\n if (operator) {\n sink.add(operator.call(sink, this.source));\n }\n else {\n sink.add(this.source || (config.useDeprecatedSynchronousErrorHandling && !sink.syncErrorThrowable) ?\n this._subscribe(sink) :\n this._trySubscribe(sink));\n }\n if (config.useDeprecatedSynchronousErrorHandling) {\n if (sink.syncErrorThrowable) {\n sink.syncErrorThrowable = false;\n if (sink.syncErrorThrown) {\n throw sink.syncErrorValue;\n }\n }\n }\n return sink;\n };\n Observable.prototype._trySubscribe = 
function (sink) {\n try {\n return this._subscribe(sink);\n }\n catch (err) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n sink.syncErrorThrown = true;\n sink.syncErrorValue = err;\n }\n if (canReportError(sink)) {\n sink.error(err);\n }\n else {\n console.warn(err);\n }\n }\n };\n Observable.prototype.forEach = function (next, promiseCtor) {\n var _this = this;\n promiseCtor = getPromiseCtor(promiseCtor);\n return new promiseCtor(function (resolve, reject) {\n var subscription;\n subscription = _this.subscribe(function (value) {\n try {\n next(value);\n }\n catch (err) {\n reject(err);\n if (subscription) {\n subscription.unsubscribe();\n }\n }\n }, reject, resolve);\n });\n };\n Observable.prototype._subscribe = function (subscriber) {\n var source = this.source;\n return source && source.subscribe(subscriber);\n };\n Observable.prototype[Symbol_observable] = function () {\n return this;\n };\n Observable.prototype.pipe = function () {\n var operations = [];\n for (var _i = 0; _i < arguments.length; _i++) {\n operations[_i] = arguments[_i];\n }\n if (operations.length === 0) {\n return this;\n }\n return pipeFromArray(operations)(this);\n };\n Observable.prototype.toPromise = function (promiseCtor) {\n var _this = this;\n promiseCtor = getPromiseCtor(promiseCtor);\n return new promiseCtor(function (resolve, reject) {\n var value;\n _this.subscribe(function (x) { return value = x; }, function (err) { return reject(err); }, function () { return resolve(value); });\n });\n };\n Observable.create = function (subscribe) {\n return new Observable(subscribe);\n };\n return Observable;\n}());\nexport { Observable };\nfunction getPromiseCtor(promiseCtor) {\n if (!promiseCtor) {\n promiseCtor = config.Promise || Promise;\n }\n if (!promiseCtor) {\n throw new Error('no Promise impl found');\n }\n return promiseCtor;\n}\n(function () {\n if (Symbol && Symbol.asyncIterator) {\n Observable.prototype[Symbol.asyncIterator] = function () {\n return asyncIteratorFrom(this);\n };\n }\n})();\n//# sourceMappingURL=Observable.js.map","import { Subscriber } from '../Subscriber';\nimport { rxSubscriber as rxSubscriberSymbol } from '../symbol/rxSubscriber';\nimport { empty as emptyObserver } from '../Observer';\nexport function toSubscriber(nextOrObserver, error, complete) {\n if (nextOrObserver) {\n if (nextOrObserver instanceof Subscriber) {\n return nextOrObserver;\n }\n if (nextOrObserver[rxSubscriberSymbol]) {\n return nextOrObserver[rxSubscriberSymbol]();\n }\n }\n if (!nextOrObserver && !error && !complete) {\n return new Subscriber(emptyObserver);\n }\n return new Subscriber(nextOrObserver, error, complete);\n}\n//# sourceMappingURL=toSubscriber.js.map","import { Subscriber } from '../Subscriber';\nexport function canReportError(observer) {\n while (observer) {\n var _a = observer, closed_1 = _a.closed, destination = _a.destination, isStopped = _a.isStopped;\n if (closed_1 || isStopped) {\n return false;\n }\n else if (destination && destination instanceof Subscriber) {\n observer = destination;\n }\n else {\n observer = null;\n }\n }\n return true;\n}\n//# sourceMappingURL=canReportError.js.map","var UnsubscriptionErrorImpl = (function () {\n function UnsubscriptionErrorImpl(errors) {\n Error.call(this);\n this.message = errors ?\n errors.length + \" errors occurred during unsubscription:\\n\" + errors.map(function (err, i) { return i + 1 + \") \" + err.toString(); }).join('\\n ') : '';\n this.name = 'UnsubscriptionError';\n this.errors = errors;\n return this;\n }\n 
UnsubscriptionErrorImpl.prototype = Object.create(Error.prototype);\n return UnsubscriptionErrorImpl;\n})();\nexport var UnsubscriptionError = UnsubscriptionErrorImpl;\n//# sourceMappingURL=UnsubscriptionError.js.map","import { isArray } from './util/isArray';\nimport { isObject } from './util/isObject';\nimport { isFunction } from './util/isFunction';\nimport { UnsubscriptionError } from './util/UnsubscriptionError';\nvar Subscription = (function () {\n function Subscription(unsubscribe) {\n this.closed = false;\n this._parentOrParents = null;\n this._subscriptions = null;\n if (unsubscribe) {\n this._unsubscribe = unsubscribe;\n }\n }\n Subscription.prototype.unsubscribe = function () {\n var errors;\n if (this.closed) {\n return;\n }\n var _a = this, _parentOrParents = _a._parentOrParents, _unsubscribe = _a._unsubscribe, _subscriptions = _a._subscriptions;\n this.closed = true;\n this._parentOrParents = null;\n this._subscriptions = null;\n if (_parentOrParents instanceof Subscription) {\n _parentOrParents.remove(this);\n }\n else if (_parentOrParents !== null) {\n for (var index = 0; index < _parentOrParents.length; ++index) {\n var parent_1 = _parentOrParents[index];\n parent_1.remove(this);\n }\n }\n if (isFunction(_unsubscribe)) {\n try {\n _unsubscribe.call(this);\n }\n catch (e) {\n errors = e instanceof UnsubscriptionError ? flattenUnsubscriptionErrors(e.errors) : [e];\n }\n }\n if (isArray(_subscriptions)) {\n var index = -1;\n var len = _subscriptions.length;\n while (++index < len) {\n var sub = _subscriptions[index];\n if (isObject(sub)) {\n try {\n sub.unsubscribe();\n }\n catch (e) {\n errors = errors || [];\n if (e instanceof UnsubscriptionError) {\n errors = errors.concat(flattenUnsubscriptionErrors(e.errors));\n }\n else {\n errors.push(e);\n }\n }\n }\n }\n }\n if (errors) {\n throw new UnsubscriptionError(errors);\n }\n };\n Subscription.prototype.add = function (teardown) {\n var subscription = teardown;\n if (!teardown) {\n return Subscription.EMPTY;\n }\n switch (typeof teardown) {\n case 'function':\n subscription = new Subscription(teardown);\n case 'object':\n if (subscription === this || subscription.closed || typeof subscription.unsubscribe !== 'function') {\n return subscription;\n }\n else if (this.closed) {\n subscription.unsubscribe();\n return subscription;\n }\n else if (!(subscription instanceof Subscription)) {\n var tmp = subscription;\n subscription = new Subscription();\n subscription._subscriptions = [tmp];\n }\n break;\n default: {\n throw new Error('unrecognized teardown ' + teardown + ' added to Subscription.');\n }\n }\n var _parentOrParents = subscription._parentOrParents;\n if (_parentOrParents === null) {\n subscription._parentOrParents = this;\n }\n else if (_parentOrParents instanceof Subscription) {\n if (_parentOrParents === this) {\n return subscription;\n }\n subscription._parentOrParents = [_parentOrParents, this];\n }\n else if (_parentOrParents.indexOf(this) === -1) {\n _parentOrParents.push(this);\n }\n else {\n return subscription;\n }\n var subscriptions = this._subscriptions;\n if (subscriptions === null) {\n this._subscriptions = [subscription];\n }\n else {\n subscriptions.push(subscription);\n }\n return subscription;\n };\n Subscription.prototype.remove = function (subscription) {\n var subscriptions = this._subscriptions;\n if (subscriptions) {\n var subscriptionIndex = subscriptions.indexOf(subscription);\n if (subscriptionIndex !== -1) {\n subscriptions.splice(subscriptionIndex, 1);\n }\n }\n };\n Subscription.EMPTY = 
(function (empty) {\n empty.closed = true;\n return empty;\n }(new Subscription()));\n return Subscription;\n}());\nexport { Subscription };\nfunction flattenUnsubscriptionErrors(errors) {\n return errors.reduce(function (errs, err) { return errs.concat((err instanceof UnsubscriptionError) ? err.errors : err); }, []);\n}\n//# sourceMappingURL=Subscription.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nexport function map(project, thisArg) {\n return function mapOperation(source) {\n if (typeof project !== 'function') {\n throw new TypeError('argument is not a function. Are you looking for `mapTo()`?');\n }\n return source.lift(new MapOperator(project, thisArg));\n };\n}\nvar MapOperator = (function () {\n function MapOperator(project, thisArg) {\n this.project = project;\n this.thisArg = thisArg;\n }\n MapOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new MapSubscriber(subscriber, this.project, this.thisArg));\n };\n return MapOperator;\n}());\nexport { MapOperator };\nvar MapSubscriber = (function (_super) {\n __extends(MapSubscriber, _super);\n function MapSubscriber(destination, project, thisArg) {\n var _this = _super.call(this, destination) || this;\n _this.project = project;\n _this.count = 0;\n _this.thisArg = thisArg || _this;\n return _this;\n }\n MapSubscriber.prototype._next = function (value) {\n var result;\n try {\n result = this.project.call(this.thisArg, value, this.count++);\n }\n catch (err) {\n this.destination.error(err);\n return;\n }\n this.destination.next(result);\n };\n return MapSubscriber;\n}(Subscriber));\n//# sourceMappingURL=map.js.map","import { InnerSubscriber } from '../InnerSubscriber';\nimport { subscribeTo } from './subscribeTo';\nimport { Observable } from '../Observable';\nexport function subscribeToResult(outerSubscriber, result, outerValue, outerIndex, innerSubscriber) {\n if (innerSubscriber === void 0) { innerSubscriber = new InnerSubscriber(outerSubscriber, outerValue, outerIndex); }\n if (innerSubscriber.closed) {\n return undefined;\n }\n if (result instanceof Observable) {\n return result.subscribe(innerSubscriber);\n }\n return subscribeTo(result)(innerSubscriber);\n}\n//# sourceMappingURL=subscribeToResult.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from './Subscriber';\nvar OuterSubscriber = (function (_super) {\n __extends(OuterSubscriber, _super);\n function OuterSubscriber() {\n return _super !== null && _super.apply(this, arguments) || this;\n }\n OuterSubscriber.prototype.notifyNext = function (outerValue, innerValue, outerIndex, innerIndex, innerSub) {\n this.destination.next(innerValue);\n };\n OuterSubscriber.prototype.notifyError = function (error, innerSub) {\n this.destination.error(error);\n };\n OuterSubscriber.prototype.notifyComplete = function (innerSub) {\n this.destination.complete();\n };\n return OuterSubscriber;\n}(Subscriber));\nexport { OuterSubscriber };\n//# sourceMappingURL=OuterSubscriber.js.map","var _enable_super_gross_mode_that_will_cause_bad_things = false;\nexport var config = {\n Promise: undefined,\n set useDeprecatedSynchronousErrorHandling(value) {\n if (value) {\n var error = new Error();\n console.warn('DEPRECATED! RxJS was set to use deprecated synchronous error handling behavior by code at: \\n' + error.stack);\n }\n else if (_enable_super_gross_mode_that_will_cause_bad_things) {\n console.log('RxJS: Back to a better error behavior. Thank you. 
<3');\n }\n _enable_super_gross_mode_that_will_cause_bad_things = value;\n },\n get useDeprecatedSynchronousErrorHandling() {\n return _enable_super_gross_mode_that_will_cause_bad_things;\n },\n};\n//# sourceMappingURL=config.js.map","var __window = typeof window !== 'undefined' && window;\nvar __self = typeof self !== 'undefined' && typeof WorkerGlobalScope !== 'undefined' &&\n self instanceof WorkerGlobalScope && self;\nvar __global = typeof global !== 'undefined' && global;\nvar _root = __window || __global || __self;\n(function () {\n if (!_root) {\n throw new Error('RxJS could not find any global context (window, self, global)');\n }\n})();\nexport { _root as root };\n//# sourceMappingURL=root.js.map","export function isFunction(x) {\n return typeof x === 'function';\n}\n//# sourceMappingURL=isFunction.js.map","export function noop() { }\n//# sourceMappingURL=noop.js.map","export var observable = (function () { return typeof Symbol === 'function' && Symbol.observable || '@@observable'; })();\n//# sourceMappingURL=observable.js.map","import { Observable } from '../Observable';\nexport var EMPTY = new Observable(function (subscriber) { return subscriber.complete(); });\nexport function empty(scheduler) {\n return scheduler ? emptyScheduled(scheduler) : EMPTY;\n}\nfunction emptyScheduled(scheduler) {\n return new Observable(function (subscriber) { return scheduler.schedule(function () { return subscriber.complete(); }); });\n}\n//# sourceMappingURL=empty.js.map","var ObjectUnsubscribedErrorImpl = (function () {\n function ObjectUnsubscribedErrorImpl() {\n Error.call(this);\n this.message = 'object unsubscribed';\n this.name = 'ObjectUnsubscribedError';\n return this;\n }\n ObjectUnsubscribedErrorImpl.prototype = Object.create(Error.prototype);\n return ObjectUnsubscribedErrorImpl;\n})();\nexport var ObjectUnsubscribedError = ObjectUnsubscribedErrorImpl;\n//# sourceMappingURL=ObjectUnsubscribedError.js.map","export default function _isPlaceholder(a) {\n return a != null && typeof a === 'object' && a['@@functional/placeholder'] === true;\n}","import _isPlaceholder from \"./_isPlaceholder.js\";\n/**\n * Optimized internal one-arity curry function.\n *\n * @private\n * @category Function\n * @param {Function} fn The function to curry.\n * @return {Function} The curried function.\n */\n\nexport default function _curry1(fn) {\n return function f1(a) {\n if (arguments.length === 0 || _isPlaceholder(a)) {\n return f1;\n } else {\n return fn.apply(this, arguments);\n }\n };\n}","export function hostReportError(err) {\n setTimeout(function () { throw err; }, 0);\n}\n//# sourceMappingURL=hostReportError.js.map","export var isArray = (function () { return Array.isArray || (function (x) { return x && typeof x.length === 'number'; }); })();\n//# sourceMappingURL=isArray.js.map","export function isScheduler(value) {\n return value && typeof value.schedule === 'function';\n}\n//# sourceMappingURL=isScheduler.js.map","import { __extends } from \"tslib\";\nimport { Observable } from './Observable';\nimport { Subscriber } from './Subscriber';\nimport { Subscription } from './Subscription';\nimport { ObjectUnsubscribedError } from './util/ObjectUnsubscribedError';\nimport { SubjectSubscription } from './SubjectSubscription';\nimport { rxSubscriber as rxSubscriberSymbol } from '../internal/symbol/rxSubscriber';\nvar SubjectSubscriber = (function (_super) {\n __extends(SubjectSubscriber, _super);\n function SubjectSubscriber(destination) {\n var _this = _super.call(this, destination) || this;\n 
_this.destination = destination;\n return _this;\n }\n return SubjectSubscriber;\n}(Subscriber));\nexport { SubjectSubscriber };\nvar Subject = (function (_super) {\n __extends(Subject, _super);\n function Subject() {\n var _this = _super.call(this) || this;\n _this.observers = [];\n _this.closed = false;\n _this.isStopped = false;\n _this.hasError = false;\n _this.thrownError = null;\n return _this;\n }\n Subject.prototype[rxSubscriberSymbol] = function () {\n return new SubjectSubscriber(this);\n };\n Subject.prototype.lift = function (operator) {\n var subject = new AnonymousSubject(this, this);\n subject.operator = operator;\n return subject;\n };\n Subject.prototype.next = function (value) {\n if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n if (!this.isStopped) {\n var observers = this.observers;\n var len = observers.length;\n var copy = observers.slice();\n for (var i = 0; i < len; i++) {\n copy[i].next(value);\n }\n }\n };\n Subject.prototype.error = function (err) {\n if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n this.hasError = true;\n this.thrownError = err;\n this.isStopped = true;\n var observers = this.observers;\n var len = observers.length;\n var copy = observers.slice();\n for (var i = 0; i < len; i++) {\n copy[i].error(err);\n }\n this.observers.length = 0;\n };\n Subject.prototype.complete = function () {\n if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n this.isStopped = true;\n var observers = this.observers;\n var len = observers.length;\n var copy = observers.slice();\n for (var i = 0; i < len; i++) {\n copy[i].complete();\n }\n this.observers.length = 0;\n };\n Subject.prototype.unsubscribe = function () {\n this.isStopped = true;\n this.closed = true;\n this.observers = null;\n };\n Subject.prototype._trySubscribe = function (subscriber) {\n if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n else {\n return _super.prototype._trySubscribe.call(this, subscriber);\n }\n };\n Subject.prototype._subscribe = function (subscriber) {\n if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n else if (this.hasError) {\n subscriber.error(this.thrownError);\n return Subscription.EMPTY;\n }\n else if (this.isStopped) {\n subscriber.complete();\n return Subscription.EMPTY;\n }\n else {\n this.observers.push(subscriber);\n return new SubjectSubscription(this, subscriber);\n }\n };\n Subject.prototype.asObservable = function () {\n var observable = new Observable();\n observable.source = this;\n return observable;\n };\n Subject.create = function (destination, source) {\n return new AnonymousSubject(destination, source);\n };\n return Subject;\n}(Observable));\nexport { Subject };\nvar AnonymousSubject = (function (_super) {\n __extends(AnonymousSubject, _super);\n function AnonymousSubject(destination, source) {\n var _this = _super.call(this) || this;\n _this.destination = destination;\n _this.source = source;\n return _this;\n }\n AnonymousSubject.prototype.next = function (value) {\n var destination = this.destination;\n if (destination && destination.next) {\n destination.next(value);\n }\n };\n AnonymousSubject.prototype.error = function (err) {\n var destination = this.destination;\n if (destination && destination.error) {\n this.destination.error(err);\n }\n };\n AnonymousSubject.prototype.complete = function () {\n var destination = this.destination;\n if (destination && destination.complete) {\n this.destination.complete();\n }\n };\n AnonymousSubject.prototype._subscribe = function (subscriber) {\n 
var source = this.source;\n if (source) {\n return this.source.subscribe(subscriber);\n }\n else {\n return Subscription.EMPTY;\n }\n };\n return AnonymousSubject;\n}(Subject));\nexport { AnonymousSubject };\n//# sourceMappingURL=Subject.js.map","export function getSymbolIterator() {\n if (typeof Symbol !== 'function' || !Symbol.iterator) {\n return '@@iterator';\n }\n return Symbol.iterator;\n}\nexport var iterator = getSymbolIterator();\nexport var $$iterator = iterator;\n//# sourceMappingURL=iterator.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from './Subscriber';\nvar InnerSubscriber = (function (_super) {\n __extends(InnerSubscriber, _super);\n function InnerSubscriber(parent, outerValue, outerIndex) {\n var _this = _super.call(this) || this;\n _this.parent = parent;\n _this.outerValue = outerValue;\n _this.outerIndex = outerIndex;\n _this.index = 0;\n return _this;\n }\n InnerSubscriber.prototype._next = function (value) {\n this.parent.notifyNext(this.outerValue, value, this.outerIndex, this.index++, this);\n };\n InnerSubscriber.prototype._error = function (error) {\n this.parent.notifyError(error, this);\n this.unsubscribe();\n };\n InnerSubscriber.prototype._complete = function () {\n this.parent.notifyComplete(this);\n this.unsubscribe();\n };\n return InnerSubscriber;\n}(Subscriber));\nexport { InnerSubscriber };\n//# sourceMappingURL=InnerSubscriber.js.map","export var rxSubscriber = (function () {\n return typeof Symbol === 'function'\n ? Symbol('rxSubscriber')\n : '@@rxSubscriber_' + Math.random();\n})();\nexport var $$rxSubscriber = rxSubscriber;\n//# sourceMappingURL=rxSubscriber.js.map","import { __extends } from \"tslib\";\nimport { OuterSubscriber } from '../OuterSubscriber';\nimport { InnerSubscriber } from '../InnerSubscriber';\nimport { subscribeToResult } from '../util/subscribeToResult';\nimport { map } from './map';\nimport { from } from '../observable/from';\nexport function switchMap(project, resultSelector) {\n if (typeof resultSelector === 'function') {\n return function (source) { return source.pipe(switchMap(function (a, i) { return from(project(a, i)).pipe(map(function (b, ii) { return resultSelector(a, b, i, ii); })); })); };\n }\n return function (source) { return source.lift(new SwitchMapOperator(project)); };\n}\nvar SwitchMapOperator = (function () {\n function SwitchMapOperator(project) {\n this.project = project;\n }\n SwitchMapOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new SwitchMapSubscriber(subscriber, this.project));\n };\n return SwitchMapOperator;\n}());\nvar SwitchMapSubscriber = (function (_super) {\n __extends(SwitchMapSubscriber, _super);\n function SwitchMapSubscriber(destination, project) {\n var _this = _super.call(this, destination) || this;\n _this.project = project;\n _this.index = 0;\n return _this;\n }\n SwitchMapSubscriber.prototype._next = function (value) {\n var result;\n var index = this.index++;\n try {\n result = this.project(value, index);\n }\n catch (error) {\n this.destination.error(error);\n return;\n }\n this._innerSub(result, value, index);\n };\n SwitchMapSubscriber.prototype._innerSub = function (result, value, index) {\n var innerSubscription = this.innerSubscription;\n if (innerSubscription) {\n innerSubscription.unsubscribe();\n }\n var innerSubscriber = new InnerSubscriber(this, value, index);\n var destination = this.destination;\n destination.add(innerSubscriber);\n this.innerSubscription = subscribeToResult(this, result, undefined, undefined, 
innerSubscriber);\n if (this.innerSubscription !== innerSubscriber) {\n destination.add(this.innerSubscription);\n }\n };\n SwitchMapSubscriber.prototype._complete = function () {\n var innerSubscription = this.innerSubscription;\n if (!innerSubscription || innerSubscription.closed) {\n _super.prototype._complete.call(this);\n }\n this.unsubscribe();\n };\n SwitchMapSubscriber.prototype._unsubscribe = function () {\n this.innerSubscription = null;\n };\n SwitchMapSubscriber.prototype.notifyComplete = function (innerSub) {\n var destination = this.destination;\n destination.remove(innerSub);\n this.innerSubscription = null;\n if (this.isStopped) {\n _super.prototype._complete.call(this);\n }\n };\n SwitchMapSubscriber.prototype.notifyNext = function (outerValue, innerValue, outerIndex, innerIndex, innerSub) {\n this.destination.next(innerValue);\n };\n return SwitchMapSubscriber;\n}(OuterSubscriber));\n//# sourceMappingURL=switchMap.js.map","import { Observable } from '../Observable';\nimport { Subscription } from '../Subscription';\nexport function scheduleArray(input, scheduler) {\n return new Observable(function (subscriber) {\n var sub = new Subscription();\n var i = 0;\n sub.add(scheduler.schedule(function () {\n if (i === input.length) {\n subscriber.complete();\n return;\n }\n subscriber.next(input[i++]);\n if (!subscriber.closed) {\n sub.add(this.schedule());\n }\n }));\n return sub;\n });\n}\n//# sourceMappingURL=scheduleArray.js.map","import { Observable } from '../Observable';\nimport { subscribeToArray } from '../util/subscribeToArray';\nimport { scheduleArray } from '../scheduled/scheduleArray';\nexport function fromArray(input, scheduler) {\n if (!scheduler) {\n return new Observable(subscribeToArray(input));\n }\n else {\n return scheduleArray(input, scheduler);\n }\n}\n//# sourceMappingURL=fromArray.js.map","import { scheduleObservable } from './scheduleObservable';\nimport { schedulePromise } from './schedulePromise';\nimport { scheduleArray } from './scheduleArray';\nimport { scheduleIterable } from './scheduleIterable';\nimport { isInteropObservable } from '../util/isInteropObservable';\nimport { isPromise } from '../util/isPromise';\nimport { isArrayLike } from '../util/isArrayLike';\nimport { isIterable } from '../util/isIterable';\nimport { scheduleAsyncIterable } from './scheduleAsyncIterable';\nexport function scheduled(input, scheduler) {\n if (input != null) {\n if (isInteropObservable(input)) {\n return scheduleObservable(input, scheduler);\n }\n else if (isPromise(input)) {\n return schedulePromise(input, scheduler);\n }\n else if (isArrayLike(input)) {\n return scheduleArray(input, scheduler);\n }\n else if (isIterable(input) || typeof input === 'string') {\n return scheduleIterable(input, scheduler);\n }\n else if (Symbol && Symbol.asyncIterator && typeof input[Symbol.asyncIterator] === 'function') {\n return scheduleAsyncIterable(input, scheduler);\n }\n }\n throw new TypeError((input !== null && typeof input || input) + ' is not observable');\n}\n//# sourceMappingURL=scheduled.js.map","import { observable as Symbol_observable } from '../symbol/observable';\nexport function isInteropObservable(input) {\n return input && typeof input[Symbol_observable] === 'function';\n}\n//# sourceMappingURL=isInteropObservable.js.map","import { Observable } from '../Observable';\nimport { Subscription } from '../Subscription';\nimport { observable as Symbol_observable } from '../symbol/observable';\nexport function scheduleObservable(input, scheduler) {\n return new 
Observable(function (subscriber) {\n var sub = new Subscription();\n sub.add(scheduler.schedule(function () {\n var observable = input[Symbol_observable]();\n sub.add(observable.subscribe({\n next: function (value) { sub.add(scheduler.schedule(function () { return subscriber.next(value); })); },\n error: function (err) { sub.add(scheduler.schedule(function () { return subscriber.error(err); })); },\n complete: function () { sub.add(scheduler.schedule(function () { return subscriber.complete(); })); },\n }));\n }));\n return sub;\n });\n}\n//# sourceMappingURL=scheduleObservable.js.map","import { Observable } from '../Observable';\nimport { Subscription } from '../Subscription';\nexport function schedulePromise(input, scheduler) {\n return new Observable(function (subscriber) {\n var sub = new Subscription();\n sub.add(scheduler.schedule(function () { return input.then(function (value) {\n sub.add(scheduler.schedule(function () {\n subscriber.next(value);\n sub.add(scheduler.schedule(function () { return subscriber.complete(); }));\n }));\n }, function (err) {\n sub.add(scheduler.schedule(function () { return subscriber.error(err); }));\n }); }));\n return sub;\n });\n}\n//# sourceMappingURL=schedulePromise.js.map","import { iterator as Symbol_iterator } from '../symbol/iterator';\nexport function isIterable(input) {\n return input && typeof input[Symbol_iterator] === 'function';\n}\n//# sourceMappingURL=isIterable.js.map","import { Observable } from '../Observable';\nimport { Subscription } from '../Subscription';\nimport { iterator as Symbol_iterator } from '../symbol/iterator';\nexport function scheduleIterable(input, scheduler) {\n if (!input) {\n throw new Error('Iterable cannot be null');\n }\n return new Observable(function (subscriber) {\n var sub = new Subscription();\n var iterator;\n sub.add(function () {\n if (iterator && typeof iterator.return === 'function') {\n iterator.return();\n }\n });\n sub.add(scheduler.schedule(function () {\n iterator = input[Symbol_iterator]();\n sub.add(scheduler.schedule(function () {\n if (subscriber.closed) {\n return;\n }\n var value;\n var done;\n try {\n var result = iterator.next();\n value = result.value;\n done = result.done;\n }\n catch (err) {\n subscriber.error(err);\n return;\n }\n if (done) {\n subscriber.complete();\n }\n else {\n subscriber.next(value);\n this.schedule();\n }\n }));\n }));\n return sub;\n });\n}\n//# sourceMappingURL=scheduleIterable.js.map","import { Observable } from '../Observable';\nimport { Subscription } from '../Subscription';\nexport function scheduleAsyncIterable(input, scheduler) {\n if (!input) {\n throw new Error('Iterable cannot be null');\n }\n return new Observable(function (subscriber) {\n var sub = new Subscription();\n sub.add(scheduler.schedule(function () {\n var iterator = input[Symbol.asyncIterator]();\n sub.add(scheduler.schedule(function () {\n var _this = this;\n iterator.next().then(function (result) {\n if (result.done) {\n subscriber.complete();\n }\n else {\n subscriber.next(result.value);\n _this.schedule();\n }\n });\n }));\n }));\n return sub;\n });\n}\n//# sourceMappingURL=scheduleAsyncIterable.js.map","import { Observable } from '../Observable';\nimport { subscribeTo } from '../util/subscribeTo';\nimport { scheduled } from '../scheduled/scheduled';\nexport function from(input, scheduler) {\n if (!scheduler) {\n if (input instanceof Observable) {\n return input;\n }\n return new Observable(subscribeTo(input));\n }\n else {\n return scheduled(input, scheduler);\n }\n}\n//# 
sourceMappingURL=from.js.map","var Scheduler = (function () {\n function Scheduler(SchedulerAction, now) {\n if (now === void 0) { now = Scheduler.now; }\n this.SchedulerAction = SchedulerAction;\n this.now = now;\n }\n Scheduler.prototype.schedule = function (work, delay, state) {\n if (delay === void 0) { delay = 0; }\n return new this.SchedulerAction(this, work).schedule(state, delay);\n };\n Scheduler.now = function () { return Date.now(); };\n return Scheduler;\n}());\nexport { Scheduler };\n//# sourceMappingURL=Scheduler.js.map","import { __extends } from \"tslib\";\nimport { Scheduler } from '../Scheduler';\nvar AsyncScheduler = (function (_super) {\n __extends(AsyncScheduler, _super);\n function AsyncScheduler(SchedulerAction, now) {\n if (now === void 0) { now = Scheduler.now; }\n var _this = _super.call(this, SchedulerAction, function () {\n if (AsyncScheduler.delegate && AsyncScheduler.delegate !== _this) {\n return AsyncScheduler.delegate.now();\n }\n else {\n return now();\n }\n }) || this;\n _this.actions = [];\n _this.active = false;\n _this.scheduled = undefined;\n return _this;\n }\n AsyncScheduler.prototype.schedule = function (work, delay, state) {\n if (delay === void 0) { delay = 0; }\n if (AsyncScheduler.delegate && AsyncScheduler.delegate !== this) {\n return AsyncScheduler.delegate.schedule(work, delay, state);\n }\n else {\n return _super.prototype.schedule.call(this, work, delay, state);\n }\n };\n AsyncScheduler.prototype.flush = function (action) {\n var actions = this.actions;\n if (this.active) {\n actions.push(action);\n return;\n }\n var error;\n this.active = true;\n do {\n if (error = action.execute(action.state, action.delay)) {\n break;\n }\n } while (action = actions.shift());\n this.active = false;\n if (error) {\n while (action = actions.shift()) {\n action.unsubscribe();\n }\n throw error;\n }\n };\n return AsyncScheduler;\n}(Scheduler));\nexport { AsyncScheduler };\n//# sourceMappingURL=AsyncScheduler.js.map","import { __extends } from \"tslib\";\nimport { Action } from './Action';\nvar AsyncAction = (function (_super) {\n __extends(AsyncAction, _super);\n function AsyncAction(scheduler, work) {\n var _this = _super.call(this, scheduler, work) || this;\n _this.scheduler = scheduler;\n _this.work = work;\n _this.pending = false;\n return _this;\n }\n AsyncAction.prototype.schedule = function (state, delay) {\n if (delay === void 0) { delay = 0; }\n if (this.closed) {\n return this;\n }\n this.state = state;\n var id = this.id;\n var scheduler = this.scheduler;\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, delay);\n }\n this.pending = true;\n this.delay = delay;\n this.id = this.id || this.requestAsyncId(scheduler, this.id, delay);\n return this;\n };\n AsyncAction.prototype.requestAsyncId = function (scheduler, id, delay) {\n if (delay === void 0) { delay = 0; }\n return setInterval(scheduler.flush.bind(scheduler, this), delay);\n };\n AsyncAction.prototype.recycleAsyncId = function (scheduler, id, delay) {\n if (delay === void 0) { delay = 0; }\n if (delay !== null && this.delay === delay && this.pending === false) {\n return id;\n }\n clearInterval(id);\n return undefined;\n };\n AsyncAction.prototype.execute = function (state, delay) {\n if (this.closed) {\n return new Error('executing a cancelled action');\n }\n this.pending = false;\n var error = this._execute(state, delay);\n if (error) {\n return error;\n }\n else if (this.pending === false && this.id != null) {\n this.id = this.recycleAsyncId(this.scheduler, this.id, 
null);\n }\n };\n AsyncAction.prototype._execute = function (state, delay) {\n var errored = false;\n var errorValue = undefined;\n try {\n this.work(state);\n }\n catch (e) {\n errored = true;\n errorValue = !!e && e || new Error(e);\n }\n if (errored) {\n this.unsubscribe();\n return errorValue;\n }\n };\n AsyncAction.prototype._unsubscribe = function () {\n var id = this.id;\n var scheduler = this.scheduler;\n var actions = scheduler.actions;\n var index = actions.indexOf(this);\n this.work = null;\n this.state = null;\n this.pending = false;\n this.scheduler = null;\n if (index !== -1) {\n actions.splice(index, 1);\n }\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, null);\n }\n this.delay = null;\n };\n return AsyncAction;\n}(Action));\nexport { AsyncAction };\n//# sourceMappingURL=AsyncAction.js.map","import { __extends } from \"tslib\";\nimport { Subscription } from '../Subscription';\nvar Action = (function (_super) {\n __extends(Action, _super);\n function Action(scheduler, work) {\n return _super.call(this) || this;\n }\n Action.prototype.schedule = function (state, delay) {\n if (delay === void 0) { delay = 0; }\n return this;\n };\n return Action;\n}(Subscription));\nexport { Action };\n//# sourceMappingURL=Action.js.map","import { isScheduler } from '../util/isScheduler';\nimport { fromArray } from './fromArray';\nimport { scheduleArray } from '../scheduled/scheduleArray';\nexport function of() {\n var args = [];\n for (var _i = 0; _i < arguments.length; _i++) {\n args[_i] = arguments[_i];\n }\n var scheduler = args[args.length - 1];\n if (isScheduler(scheduler)) {\n args.pop();\n return scheduleArray(args, scheduler);\n }\n else {\n return fromArray(args);\n }\n}\n//# sourceMappingURL=of.js.map","import { config } from './config';\nimport { hostReportError } from './util/hostReportError';\nexport var empty = {\n closed: true,\n next: function (value) { },\n error: function (err) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n throw err;\n }\n else {\n hostReportError(err);\n }\n },\n complete: function () { }\n};\n//# sourceMappingURL=Observer.js.map","import { EMPTY } from './observable/empty';\nimport { of } from './observable/of';\nimport { throwError } from './observable/throwError';\nexport var NotificationKind;\n(function (NotificationKind) {\n NotificationKind[\"NEXT\"] = \"N\";\n NotificationKind[\"ERROR\"] = \"E\";\n NotificationKind[\"COMPLETE\"] = \"C\";\n})(NotificationKind || (NotificationKind = {}));\nvar Notification = (function () {\n function Notification(kind, value, error) {\n this.kind = kind;\n this.value = value;\n this.error = error;\n this.hasValue = kind === 'N';\n }\n Notification.prototype.observe = function (observer) {\n switch (this.kind) {\n case 'N':\n return observer.next && observer.next(this.value);\n case 'E':\n return observer.error && observer.error(this.error);\n case 'C':\n return observer.complete && observer.complete();\n }\n };\n Notification.prototype.do = function (next, error, complete) {\n var kind = this.kind;\n switch (kind) {\n case 'N':\n return next && next(this.value);\n case 'E':\n return error && error(this.error);\n case 'C':\n return complete && complete();\n }\n };\n Notification.prototype.accept = function (nextOrObserver, error, complete) {\n if (nextOrObserver && typeof nextOrObserver.next === 'function') {\n return this.observe(nextOrObserver);\n }\n else {\n return this.do(nextOrObserver, error, complete);\n }\n };\n Notification.prototype.toObservable = function () {\n var kind = 
this.kind;\n switch (kind) {\n case 'N':\n return of(this.value);\n case 'E':\n return throwError(this.error);\n case 'C':\n return EMPTY;\n }\n throw new Error('unexpected notification kind value');\n };\n Notification.createNext = function (value) {\n if (typeof value !== 'undefined') {\n return new Notification('N', value);\n }\n return Notification.undefinedValueNotification;\n };\n Notification.createError = function (err) {\n return new Notification('E', undefined, err);\n };\n Notification.createComplete = function () {\n return Notification.completeNotification;\n };\n Notification.completeNotification = new Notification('C');\n Notification.undefinedValueNotification = new Notification('N', undefined);\n return Notification;\n}());\nexport { Notification };\n//# sourceMappingURL=Notification.js.map","import { Observable } from '../Observable';\nexport function throwError(error, scheduler) {\n if (!scheduler) {\n return new Observable(function (subscriber) { return subscriber.error(error); });\n }\n else {\n return new Observable(function (subscriber) { return scheduler.schedule(dispatch, 0, { error: error, subscriber: subscriber }); });\n }\n}\nfunction dispatch(_a) {\n var error = _a.error, subscriber = _a.subscriber;\n subscriber.error(error);\n}\n//# sourceMappingURL=throwError.js.map","import { identity } from './identity';\nexport function pipe() {\n var fns = [];\n for (var _i = 0; _i < arguments.length; _i++) {\n fns[_i] = arguments[_i];\n }\n return pipeFromArray(fns);\n}\nexport function pipeFromArray(fns) {\n if (fns.length === 0) {\n return identity;\n }\n if (fns.length === 1) {\n return fns[0];\n }\n return function piped(input) {\n return fns.reduce(function (prev, fn) { return fn(prev); }, input);\n };\n}\n//# sourceMappingURL=pipe.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nexport function distinctUntilChanged(compare, keySelector) {\n return function (source) { return source.lift(new DistinctUntilChangedOperator(compare, keySelector)); };\n}\nvar DistinctUntilChangedOperator = (function () {\n function DistinctUntilChangedOperator(compare, keySelector) {\n this.compare = compare;\n this.keySelector = keySelector;\n }\n DistinctUntilChangedOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new DistinctUntilChangedSubscriber(subscriber, this.compare, this.keySelector));\n };\n return DistinctUntilChangedOperator;\n}());\nvar DistinctUntilChangedSubscriber = (function (_super) {\n __extends(DistinctUntilChangedSubscriber, _super);\n function DistinctUntilChangedSubscriber(destination, compare, keySelector) {\n var _this = _super.call(this, destination) || this;\n _this.keySelector = keySelector;\n _this.hasKey = false;\n if (typeof compare === 'function') {\n _this.compare = compare;\n }\n return _this;\n }\n DistinctUntilChangedSubscriber.prototype.compare = function (x, y) {\n return x === y;\n };\n DistinctUntilChangedSubscriber.prototype._next = function (value) {\n var key;\n try {\n var keySelector = this.keySelector;\n key = keySelector ? 
keySelector(value) : value;\n }\n catch (err) {\n return this.destination.error(err);\n }\n var result = false;\n if (this.hasKey) {\n try {\n var compare = this.compare;\n result = compare(this.key, key);\n }\n catch (err) {\n return this.destination.error(err);\n }\n }\n else {\n this.hasKey = true;\n }\n if (!result) {\n this.key = key;\n this.destination.next(value);\n }\n };\n return DistinctUntilChangedSubscriber;\n}(Subscriber));\n//# sourceMappingURL=distinctUntilChanged.js.map","export function isObject(x) {\n return x !== null && typeof x === 'object';\n}\n//# sourceMappingURL=isObject.js.map","import { __extends } from \"tslib\";\nimport { Subscription } from './Subscription';\nvar SubjectSubscription = (function (_super) {\n __extends(SubjectSubscription, _super);\n function SubjectSubscription(subject, subscriber) {\n var _this = _super.call(this) || this;\n _this.subject = subject;\n _this.subscriber = subscriber;\n _this.closed = false;\n return _this;\n }\n SubjectSubscription.prototype.unsubscribe = function () {\n if (this.closed) {\n return;\n }\n this.closed = true;\n var subject = this.subject;\n var observers = subject.observers;\n this.subject = null;\n if (!observers || observers.length === 0 || subject.isStopped || subject.closed) {\n return;\n }\n var subscriberIndex = observers.indexOf(this.subscriber);\n if (subscriberIndex !== -1) {\n observers.splice(subscriberIndex, 1);\n }\n };\n return SubjectSubscription;\n}(Subscription));\nexport { SubjectSubscription };\n//# sourceMappingURL=SubjectSubscription.js.map","export function identity(x) {\n return x;\n}\n//# sourceMappingURL=identity.js.map","export var subscribeToArray = function (array) { return function (subscriber) {\n for (var i = 0, len = array.length; i < len && !subscriber.closed; i++) {\n subscriber.next(array[i]);\n }\n subscriber.complete();\n}; };\n//# sourceMappingURL=subscribeToArray.js.map","export var isArrayLike = (function (x) { return x && typeof x.length === 'number' && typeof x !== 'function'; });\n//# sourceMappingURL=isArrayLike.js.map","export function isPromise(value) {\n return !!value && typeof value.subscribe !== 'function' && typeof value.then === 'function';\n}\n//# sourceMappingURL=isPromise.js.map","import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\nexport var async = new AsyncScheduler(AsyncAction);\n//# sourceMappingURL=async.js.map","import { __asyncValues, __awaiter, __generator } from \"tslib\";\nexport function subscribeToAsyncIterable(asyncIterable) {\n return function (subscriber) {\n process(asyncIterable, subscriber).catch(function (err) { return subscriber.error(err); });\n };\n}\nfunction process(asyncIterable, subscriber) {\n var asyncIterable_1, asyncIterable_1_1;\n var e_1, _a;\n return __awaiter(this, void 0, void 0, function () {\n var value, e_1_1;\n return __generator(this, function (_b) {\n switch (_b.label) {\n case 0:\n _b.trys.push([0, 5, 6, 11]);\n asyncIterable_1 = __asyncValues(asyncIterable);\n _b.label = 1;\n case 1: return [4, asyncIterable_1.next()];\n case 2:\n if (!(asyncIterable_1_1 = _b.sent(), !asyncIterable_1_1.done)) return [3, 4];\n value = asyncIterable_1_1.value;\n subscriber.next(value);\n _b.label = 3;\n case 3: return [3, 1];\n case 4: return [3, 11];\n case 5:\n e_1_1 = _b.sent();\n e_1 = { error: e_1_1 };\n return [3, 11];\n case 6:\n _b.trys.push([6, , 9, 10]);\n if (!(asyncIterable_1_1 && !asyncIterable_1_1.done && (_a = asyncIterable_1.return))) return [3, 8];\n return [4, 
_a.call(asyncIterable_1)];\n case 7:\n _b.sent();\n _b.label = 8;\n case 8: return [3, 10];\n case 9:\n if (e_1) throw e_1.error;\n return [7];\n case 10: return [7];\n case 11:\n subscriber.complete();\n return [2];\n }\n });\n });\n}\n//# sourceMappingURL=subscribeToAsyncIterable.js.map","import { subscribeToArray } from './subscribeToArray';\nimport { subscribeToPromise } from './subscribeToPromise';\nimport { subscribeToIterable } from './subscribeToIterable';\nimport { subscribeToObservable } from './subscribeToObservable';\nimport { isArrayLike } from './isArrayLike';\nimport { isPromise } from './isPromise';\nimport { isObject } from './isObject';\nimport { iterator as Symbol_iterator } from '../symbol/iterator';\nimport { observable as Symbol_observable } from '../symbol/observable';\nimport { subscribeToAsyncIterable } from './subscribeToAsyncIterable';\nexport var subscribeTo = function (result) {\n if (!!result && typeof result[Symbol_observable] === 'function') {\n return subscribeToObservable(result);\n }\n else if (isArrayLike(result)) {\n return subscribeToArray(result);\n }\n else if (isPromise(result)) {\n return subscribeToPromise(result);\n }\n else if (!!result && typeof result[Symbol_iterator] === 'function') {\n return subscribeToIterable(result);\n }\n else if (Symbol && Symbol.asyncIterator &&\n !!result && typeof result[Symbol.asyncIterator] === 'function') {\n return subscribeToAsyncIterable(result);\n }\n else {\n var value = isObject(result) ? 'an invalid object' : \"'\" + result + \"'\";\n var msg = \"You provided \" + value + \" where a stream was expected.\"\n + ' You can provide an Observable, Promise, Array, or Iterable.';\n throw new TypeError(msg);\n }\n};\n//# sourceMappingURL=subscribeTo.js.map","import { observable as Symbol_observable } from '../symbol/observable';\nexport var subscribeToObservable = function (obj) { return function (subscriber) {\n var obs = obj[Symbol_observable]();\n if (typeof obs.subscribe !== 'function') {\n throw new TypeError('Provided object does not correctly implement Symbol.observable');\n }\n else {\n return obs.subscribe(subscriber);\n }\n}; };\n//# sourceMappingURL=subscribeToObservable.js.map","import { hostReportError } from './hostReportError';\nexport var subscribeToPromise = function (promise) { return function (subscriber) {\n promise.then(function (value) {\n if (!subscriber.closed) {\n subscriber.next(value);\n subscriber.complete();\n }\n }, function (err) { return subscriber.error(err); })\n .then(null, hostReportError);\n return subscriber;\n}; };\n//# sourceMappingURL=subscribeToPromise.js.map","import { iterator as Symbol_iterator } from '../symbol/iterator';\nexport var subscribeToIterable = function (iterable) { return function (subscriber) {\n var iterator = iterable[Symbol_iterator]();\n do {\n var item = iterator.next();\n if (item.done) {\n subscriber.complete();\n break;\n }\n subscriber.next(item.value);\n if (subscriber.closed) {\n break;\n }\n } while (true);\n if (typeof iterator.return === 'function') {\n subscriber.add(function () {\n if (iterator.return) {\n iterator.return();\n }\n });\n }\n return subscriber;\n}; };\n//# sourceMappingURL=subscribeToIterable.js.map","import { __extends } from \"tslib\";\nimport { subscribeToResult } from '../util/subscribeToResult';\nimport { OuterSubscriber } from '../OuterSubscriber';\nimport { InnerSubscriber } from '../InnerSubscriber';\nimport { map } from './map';\nimport { from } from '../observable/from';\nexport function mergeMap(project, 
resultSelector, concurrent) {\n if (concurrent === void 0) { concurrent = Number.POSITIVE_INFINITY; }\n if (typeof resultSelector === 'function') {\n return function (source) { return source.pipe(mergeMap(function (a, i) { return from(project(a, i)).pipe(map(function (b, ii) { return resultSelector(a, b, i, ii); })); }, concurrent)); };\n }\n else if (typeof resultSelector === 'number') {\n concurrent = resultSelector;\n }\n return function (source) { return source.lift(new MergeMapOperator(project, concurrent)); };\n}\nvar MergeMapOperator = (function () {\n function MergeMapOperator(project, concurrent) {\n if (concurrent === void 0) { concurrent = Number.POSITIVE_INFINITY; }\n this.project = project;\n this.concurrent = concurrent;\n }\n MergeMapOperator.prototype.call = function (observer, source) {\n return source.subscribe(new MergeMapSubscriber(observer, this.project, this.concurrent));\n };\n return MergeMapOperator;\n}());\nexport { MergeMapOperator };\nvar MergeMapSubscriber = (function (_super) {\n __extends(MergeMapSubscriber, _super);\n function MergeMapSubscriber(destination, project, concurrent) {\n if (concurrent === void 0) { concurrent = Number.POSITIVE_INFINITY; }\n var _this = _super.call(this, destination) || this;\n _this.project = project;\n _this.concurrent = concurrent;\n _this.hasCompleted = false;\n _this.buffer = [];\n _this.active = 0;\n _this.index = 0;\n return _this;\n }\n MergeMapSubscriber.prototype._next = function (value) {\n if (this.active < this.concurrent) {\n this._tryNext(value);\n }\n else {\n this.buffer.push(value);\n }\n };\n MergeMapSubscriber.prototype._tryNext = function (value) {\n var result;\n var index = this.index++;\n try {\n result = this.project(value, index);\n }\n catch (err) {\n this.destination.error(err);\n return;\n }\n this.active++;\n this._innerSub(result, value, index);\n };\n MergeMapSubscriber.prototype._innerSub = function (ish, value, index) {\n var innerSubscriber = new InnerSubscriber(this, value, index);\n var destination = this.destination;\n destination.add(innerSubscriber);\n var innerSubscription = subscribeToResult(this, ish, undefined, undefined, innerSubscriber);\n if (innerSubscription !== innerSubscriber) {\n destination.add(innerSubscription);\n }\n };\n MergeMapSubscriber.prototype._complete = function () {\n this.hasCompleted = true;\n if (this.active === 0 && this.buffer.length === 0) {\n this.destination.complete();\n }\n this.unsubscribe();\n };\n MergeMapSubscriber.prototype.notifyNext = function (outerValue, innerValue, outerIndex, innerIndex, innerSub) {\n this.destination.next(innerValue);\n };\n MergeMapSubscriber.prototype.notifyComplete = function (innerSub) {\n var buffer = this.buffer;\n this.remove(innerSub);\n this.active--;\n if (buffer.length > 0) {\n this._next(buffer.shift());\n }\n else if (this.active === 0 && this.hasCompleted) {\n this.destination.complete();\n }\n };\n return MergeMapSubscriber;\n}(OuterSubscriber));\nexport { MergeMapSubscriber };\n//# sourceMappingURL=mergeMap.js.map","import { mergeMap } from './mergeMap';\nimport { identity } from '../util/identity';\nexport function mergeAll(concurrent) {\n if (concurrent === void 0) { concurrent = Number.POSITIVE_INFINITY; }\n return mergeMap(identity, concurrent);\n}\n//# sourceMappingURL=mergeAll.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nimport { Notification } from '../Notification';\nexport function observeOn(scheduler, delay) {\n if (delay === void 0) { delay = 0; }\n 
function _isArguments(x) {\n return toString.call(x) === '[object Arguments]';\n } : function _isArguments(x) {\n return _has('callee', x);\n };\n}();\n\nexport default _isArguments;","import _curry1 from \"./internal/_curry1.js\";\nimport _has from \"./internal/_has.js\";\nimport _isArguments from \"./internal/_isArguments.js\"; // cover IE < 9 keys issues\n\nvar hasEnumBug = !\n/*#__PURE__*/\n{\n toString: null\n}.propertyIsEnumerable('toString');\nvar nonEnumerableProps = ['constructor', 'valueOf', 'isPrototypeOf', 'toString', 'propertyIsEnumerable', 'hasOwnProperty', 'toLocaleString']; // Safari bug\n\nvar hasArgsEnumBug =\n/*#__PURE__*/\nfunction () {\n 'use strict';\n\n return arguments.propertyIsEnumerable('length');\n}();\n\nvar contains = function contains(list, item) {\n var idx = 0;\n\n while (idx < list.length) {\n if (list[idx] === item) {\n return true;\n }\n\n idx += 1;\n }\n\n return false;\n};\n/**\n * Returns a list containing the names of all the enumerable own properties of\n * the supplied object.\n * Note that the order of the output array is not guaranteed to be consistent\n * across different JS platforms.\n *\n * @func\n * @memberOf R\n * @since v0.1.0\n * @category Object\n * @sig {k: v} -> [k]\n * @param {Object} obj The object to extract properties from\n * @return {Array} An array of the object's own properties.\n * @see R.keysIn, R.values\n * @example\n *\n * R.keys({a: 1, b: 2, c: 3}); //=> ['a', 'b', 'c']\n */\n\n\nvar keys = typeof Object.keys === 'function' && !hasArgsEnumBug ?\n/*#__PURE__*/\n_curry1(function keys(obj) {\n return Object(obj) !== obj ? [] : Object.keys(obj);\n}) :\n/*#__PURE__*/\n_curry1(function keys(obj) {\n if (Object(obj) !== obj) {\n return [];\n }\n\n var prop, nIdx;\n var ks = [];\n\n var checkArgsLength = hasArgsEnumBug && _isArguments(obj);\n\n for (prop in obj) {\n if (_has(prop, obj) && (!checkArgsLength || prop !== 'length')) {\n ks[ks.length] = prop;\n }\n }\n\n if (hasEnumBug) {\n nIdx = nonEnumerableProps.length - 1;\n\n while (nIdx >= 0) {\n prop = nonEnumerableProps[nIdx];\n\n if (_has(prop, obj) && !contains(ks, prop)) {\n ks[ks.length] = prop;\n }\n\n nIdx -= 1;\n }\n }\n\n return ks;\n});\nexport default keys;","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nimport { noop } from '../util/noop';\nimport { isFunction } from '../util/isFunction';\nexport function tap(nextOrObserver, error, complete) {\n return function tapOperatorFunction(source) {\n return source.lift(new DoOperator(nextOrObserver, error, complete));\n };\n}\nvar DoOperator = (function () {\n function DoOperator(nextOrObserver, error, complete) {\n this.nextOrObserver = nextOrObserver;\n this.error = error;\n this.complete = complete;\n }\n DoOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new TapSubscriber(subscriber, this.nextOrObserver, this.error, this.complete));\n };\n return DoOperator;\n}());\nvar TapSubscriber = (function (_super) {\n __extends(TapSubscriber, _super);\n function TapSubscriber(destination, observerOrNext, error, complete) {\n var _this = _super.call(this, destination) || this;\n _this._tapNext = noop;\n _this._tapError = noop;\n _this._tapComplete = noop;\n _this._tapError = error || noop;\n _this._tapComplete = complete || noop;\n if (isFunction(observerOrNext)) {\n _this._context = _this;\n _this._tapNext = observerOrNext;\n }\n else if (observerOrNext) {\n _this._context = observerOrNext;\n _this._tapNext = observerOrNext.next || noop;\n _this._tapError = 
observerOrNext.error || noop;\n _this._tapComplete = observerOrNext.complete || noop;\n }\n return _this;\n }\n TapSubscriber.prototype._next = function (value) {\n try {\n this._tapNext.call(this._context, value);\n }\n catch (err) {\n this.destination.error(err);\n return;\n }\n this.destination.next(value);\n };\n TapSubscriber.prototype._error = function (err) {\n try {\n this._tapError.call(this._context, err);\n }\n catch (err) {\n this.destination.error(err);\n return;\n }\n this.destination.error(err);\n };\n TapSubscriber.prototype._complete = function () {\n try {\n this._tapComplete.call(this._context);\n }\n catch (err) {\n this.destination.error(err);\n return;\n }\n return this.destination.complete();\n };\n return TapSubscriber;\n}(Subscriber));\n//# sourceMappingURL=tap.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nexport function scan(accumulator, seed) {\n var hasSeed = false;\n if (arguments.length >= 2) {\n hasSeed = true;\n }\n return function scanOperatorFunction(source) {\n return source.lift(new ScanOperator(accumulator, seed, hasSeed));\n };\n}\nvar ScanOperator = (function () {\n function ScanOperator(accumulator, seed, hasSeed) {\n if (hasSeed === void 0) { hasSeed = false; }\n this.accumulator = accumulator;\n this.seed = seed;\n this.hasSeed = hasSeed;\n }\n ScanOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new ScanSubscriber(subscriber, this.accumulator, this.seed, this.hasSeed));\n };\n return ScanOperator;\n}());\nvar ScanSubscriber = (function (_super) {\n __extends(ScanSubscriber, _super);\n function ScanSubscriber(destination, accumulator, _state, _hasState) {\n var _this = _super.call(this, destination) || this;\n _this.accumulator = accumulator;\n _this._state = _state;\n _this._hasState = _hasState;\n _this.index = 0;\n return _this;\n }\n ScanSubscriber.prototype._next = function (value) {\n var destination = this.destination;\n if (!this._hasState) {\n this._state = value;\n this._hasState = true;\n destination.next(value);\n }\n else {\n var index = this.index++;\n var result = void 0;\n try {\n result = this.accumulator(this._state, value, index);\n }\n catch (err) {\n destination.error(err);\n return;\n }\n this._state = result;\n destination.next(result);\n }\n };\n return ScanSubscriber;\n}(Subscriber));\n//# sourceMappingURL=scan.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nimport { Subscription } from '../Subscription';\nexport function finalize(callback) {\n return function (source) { return source.lift(new FinallyOperator(callback)); };\n}\nvar FinallyOperator = (function () {\n function FinallyOperator(callback) {\n this.callback = callback;\n }\n FinallyOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new FinallySubscriber(subscriber, this.callback));\n };\n return FinallyOperator;\n}());\nvar FinallySubscriber = (function (_super) {\n __extends(FinallySubscriber, _super);\n function FinallySubscriber(destination, callback) {\n var _this = _super.call(this, destination) || this;\n _this.add(new Subscription(callback));\n return _this;\n }\n return FinallySubscriber;\n}(Subscriber));\n//# sourceMappingURL=finalize.js.map","import { __extends } from \"tslib\";\nimport { AsyncAction } from './AsyncAction';\nvar AnimationFrameAction = (function (_super) {\n __extends(AnimationFrameAction, _super);\n function AnimationFrameAction(scheduler, work) {\n var _this = _super.call(this, 
scheduler, work) || this;\n _this.scheduler = scheduler;\n _this.work = work;\n return _this;\n }\n AnimationFrameAction.prototype.requestAsyncId = function (scheduler, id, delay) {\n if (delay === void 0) { delay = 0; }\n if (delay !== null && delay > 0) {\n return _super.prototype.requestAsyncId.call(this, scheduler, id, delay);\n }\n scheduler.actions.push(this);\n return scheduler.scheduled || (scheduler.scheduled = requestAnimationFrame(function () { return scheduler.flush(undefined); }));\n };\n AnimationFrameAction.prototype.recycleAsyncId = function (scheduler, id, delay) {\n if (delay === void 0) { delay = 0; }\n if ((delay !== null && delay > 0) || (delay === null && this.delay > 0)) {\n return _super.prototype.recycleAsyncId.call(this, scheduler, id, delay);\n }\n if (scheduler.actions.length === 0) {\n cancelAnimationFrame(id);\n scheduler.scheduled = undefined;\n }\n return undefined;\n };\n return AnimationFrameAction;\n}(AsyncAction));\nexport { AnimationFrameAction };\n//# sourceMappingURL=AnimationFrameAction.js.map","import { AnimationFrameAction } from './AnimationFrameAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\nexport var animationFrame = new AnimationFrameScheduler(AnimationFrameAction);\n//# sourceMappingURL=animationFrame.js.map","import { __extends } from \"tslib\";\nimport { AsyncScheduler } from './AsyncScheduler';\nvar AnimationFrameScheduler = (function (_super) {\n __extends(AnimationFrameScheduler, _super);\n function AnimationFrameScheduler() {\n return _super !== null && _super.apply(this, arguments) || this;\n }\n AnimationFrameScheduler.prototype.flush = function (action) {\n this.active = true;\n this.scheduled = undefined;\n var actions = this.actions;\n var error;\n var index = -1;\n var count = actions.length;\n action = action || actions.shift();\n do {\n if (error = action.execute(action.state, action.delay)) {\n break;\n }\n } while (++index < count && (action = actions.shift()));\n this.active = false;\n if (error) {\n while (++index < count && (action = actions.shift())) {\n action.unsubscribe();\n }\n throw error;\n }\n };\n return AnimationFrameScheduler;\n}(AsyncScheduler));\nexport { AnimationFrameScheduler };\n//# sourceMappingURL=AnimationFrameScheduler.js.map","import { ReplaySubject } from '../ReplaySubject';\nexport function shareReplay(configOrBufferSize, windowTime, scheduler) {\n var config;\n if (configOrBufferSize && typeof configOrBufferSize === 'object') {\n config = configOrBufferSize;\n }\n else {\n config = {\n bufferSize: configOrBufferSize,\n windowTime: windowTime,\n refCount: false,\n scheduler: scheduler\n };\n }\n return function (source) { return source.lift(shareReplayOperator(config)); };\n}\nfunction shareReplayOperator(_a) {\n var _b = _a.bufferSize, bufferSize = _b === void 0 ? Number.POSITIVE_INFINITY : _b, _c = _a.windowTime, windowTime = _c === void 0 ? 
Number.POSITIVE_INFINITY : _c, useRefCount = _a.refCount, scheduler = _a.scheduler;\n var subject;\n var refCount = 0;\n var subscription;\n var hasError = false;\n var isComplete = false;\n return function shareReplayOperation(source) {\n refCount++;\n if (!subject || hasError) {\n hasError = false;\n subject = new ReplaySubject(bufferSize, windowTime, scheduler);\n subscription = source.subscribe({\n next: function (value) { subject.next(value); },\n error: function (err) {\n hasError = true;\n subject.error(err);\n },\n complete: function () {\n isComplete = true;\n subscription = undefined;\n subject.complete();\n },\n });\n }\n var innerSub = subject.subscribe(this);\n this.add(function () {\n refCount--;\n innerSub.unsubscribe();\n if (subscription && !isComplete && useRefCount && refCount === 0) {\n subscription.unsubscribe();\n subscription = undefined;\n subject = undefined;\n }\n });\n };\n}\n//# sourceMappingURL=shareReplay.js.map","import { distinctUntilChanged } from './distinctUntilChanged';\nexport function distinctUntilKeyChanged(key, compare) {\n return distinctUntilChanged(function (x, y) { return compare ? compare(x[key], y[key]) : x[key] === y[key]; });\n}\n//# sourceMappingURL=distinctUntilKeyChanged.js.map","import { __extends, __spreadArrays } from \"tslib\";\nimport { OuterSubscriber } from '../OuterSubscriber';\nimport { subscribeToResult } from '../util/subscribeToResult';\nexport function withLatestFrom() {\n var args = [];\n for (var _i = 0; _i < arguments.length; _i++) {\n args[_i] = arguments[_i];\n }\n return function (source) {\n var project;\n if (typeof args[args.length - 1] === 'function') {\n project = args.pop();\n }\n var observables = args;\n return source.lift(new WithLatestFromOperator(observables, project));\n };\n}\nvar WithLatestFromOperator = (function () {\n function WithLatestFromOperator(observables, project) {\n this.observables = observables;\n this.project = project;\n }\n WithLatestFromOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new WithLatestFromSubscriber(subscriber, this.observables, this.project));\n };\n return WithLatestFromOperator;\n}());\nvar WithLatestFromSubscriber = (function (_super) {\n __extends(WithLatestFromSubscriber, _super);\n function WithLatestFromSubscriber(destination, observables, project) {\n var _this = _super.call(this, destination) || this;\n _this.observables = observables;\n _this.project = project;\n _this.toRespond = [];\n var len = observables.length;\n _this.values = new Array(len);\n for (var i = 0; i < len; i++) {\n _this.toRespond.push(i);\n }\n for (var i = 0; i < len; i++) {\n var observable = observables[i];\n _this.add(subscribeToResult(_this, observable, observable, i));\n }\n return _this;\n }\n WithLatestFromSubscriber.prototype.notifyNext = function (outerValue, innerValue, outerIndex, innerIndex, innerSub) {\n this.values[outerIndex] = innerValue;\n var toRespond = this.toRespond;\n if (toRespond.length > 0) {\n var found = toRespond.indexOf(outerIndex);\n if (found !== -1) {\n toRespond.splice(found, 1);\n }\n }\n };\n WithLatestFromSubscriber.prototype.notifyComplete = function () {\n };\n WithLatestFromSubscriber.prototype._next = function (value) {\n if (this.toRespond.length === 0) {\n var args = __spreadArrays([value], this.values);\n if (this.project) {\n this._tryProject(args);\n }\n else {\n this.destination.next(args);\n }\n }\n };\n WithLatestFromSubscriber.prototype._tryProject = function (args) {\n var result;\n try {\n result = 
this.project.apply(this, args);\n }\n catch (err) {\n this.destination.error(err);\n return;\n }\n this.destination.next(result);\n };\n return WithLatestFromSubscriber;\n}(OuterSubscriber));\n//# sourceMappingURL=withLatestFrom.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nexport function bufferCount(bufferSize, startBufferEvery) {\n if (startBufferEvery === void 0) { startBufferEvery = null; }\n return function bufferCountOperatorFunction(source) {\n return source.lift(new BufferCountOperator(bufferSize, startBufferEvery));\n };\n}\nvar BufferCountOperator = (function () {\n function BufferCountOperator(bufferSize, startBufferEvery) {\n this.bufferSize = bufferSize;\n this.startBufferEvery = startBufferEvery;\n if (!startBufferEvery || bufferSize === startBufferEvery) {\n this.subscriberClass = BufferCountSubscriber;\n }\n else {\n this.subscriberClass = BufferSkipCountSubscriber;\n }\n }\n BufferCountOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new this.subscriberClass(subscriber, this.bufferSize, this.startBufferEvery));\n };\n return BufferCountOperator;\n}());\nvar BufferCountSubscriber = (function (_super) {\n __extends(BufferCountSubscriber, _super);\n function BufferCountSubscriber(destination, bufferSize) {\n var _this = _super.call(this, destination) || this;\n _this.bufferSize = bufferSize;\n _this.buffer = [];\n return _this;\n }\n BufferCountSubscriber.prototype._next = function (value) {\n var buffer = this.buffer;\n buffer.push(value);\n if (buffer.length == this.bufferSize) {\n this.destination.next(buffer);\n this.buffer = [];\n }\n };\n BufferCountSubscriber.prototype._complete = function () {\n var buffer = this.buffer;\n if (buffer.length > 0) {\n this.destination.next(buffer);\n }\n _super.prototype._complete.call(this);\n };\n return BufferCountSubscriber;\n}(Subscriber));\nvar BufferSkipCountSubscriber = (function (_super) {\n __extends(BufferSkipCountSubscriber, _super);\n function BufferSkipCountSubscriber(destination, bufferSize, startBufferEvery) {\n var _this = _super.call(this, destination) || this;\n _this.bufferSize = bufferSize;\n _this.startBufferEvery = startBufferEvery;\n _this.buffers = [];\n _this.count = 0;\n return _this;\n }\n BufferSkipCountSubscriber.prototype._next = function (value) {\n var _a = this, bufferSize = _a.bufferSize, startBufferEvery = _a.startBufferEvery, buffers = _a.buffers, count = _a.count;\n this.count++;\n if (count % startBufferEvery === 0) {\n buffers.push([]);\n }\n for (var i = buffers.length; i--;) {\n var buffer = buffers[i];\n buffer.push(value);\n if (buffer.length === bufferSize) {\n buffers.splice(i, 1);\n this.destination.next(buffer);\n }\n }\n };\n BufferSkipCountSubscriber.prototype._complete = function () {\n var _a = this, buffers = _a.buffers, destination = _a.destination;\n while (buffers.length > 0) {\n var buffer = buffers.shift();\n if (buffer.length > 0) {\n destination.next(buffer);\n }\n }\n _super.prototype._complete.call(this);\n };\n return BufferSkipCountSubscriber;\n}(Subscriber));\n//# sourceMappingURL=bufferCount.js.map","import { mergeAll } from './mergeAll';\nexport function concatAll() {\n return mergeAll(1);\n}\n//# sourceMappingURL=concatAll.js.map","import { of } from './of';\nimport { concatAll } from '../operators/concatAll';\nexport function concat() {\n var observables = [];\n for (var _i = 0; _i < arguments.length; _i++) {\n observables[_i] = arguments[_i];\n }\n return concatAll()(of.apply(void 0, 
observables));\n}\n//# sourceMappingURL=concat.js.map","import { concat } from '../observable/concat';\nimport { isScheduler } from '../util/isScheduler';\nexport function startWith() {\n var values = [];\n for (var _i = 0; _i < arguments.length; _i++) {\n values[_i] = arguments[_i];\n }\n var scheduler = values[values.length - 1];\n if (isScheduler(scheduler)) {\n values.pop();\n return function (source) { return concat(values, source, scheduler); };\n }\n else {\n return function (source) { return concat(values, source); };\n }\n}\n//# sourceMappingURL=startWith.js.map","import _curry1 from \"./internal/_curry1.js\";\nimport _isString from \"./internal/_isString.js\";\n/**\n * Returns a new list or string with the elements or characters in reverse\n * order.\n *\n * @func\n * @memberOf R\n * @since v0.1.0\n * @category List\n * @sig [a] -> [a]\n * @sig String -> String\n * @param {Array|String} list\n * @return {Array|String}\n * @example\n *\n * R.reverse([1, 2, 3]); //=> [3, 2, 1]\n * R.reverse([1, 2]); //=> [2, 1]\n * R.reverse([1]); //=> [1]\n * R.reverse([]); //=> []\n *\n * R.reverse('abc'); //=> 'cba'\n * R.reverse('ab'); //=> 'ba'\n * R.reverse('a'); //=> 'a'\n * R.reverse(''); //=> ''\n */\n\nvar reverse =\n/*#__PURE__*/\n_curry1(function reverse(list) {\n return _isString(list) ? list.split('').reverse().join('') : Array.prototype.slice.call(list, 0).reverse();\n});\n\nexport default reverse;","export default function _isString(x) {\n return Object.prototype.toString.call(x) === '[object String]';\n}","import { Observable } from '../Observable';\nimport { isArray } from '../util/isArray';\nimport { isFunction } from '../util/isFunction';\nimport { map } from '../operators/map';\nexport function fromEvent(target, eventName, options, resultSelector) {\n if (isFunction(options)) {\n resultSelector = options;\n options = undefined;\n }\n if (resultSelector) {\n return fromEvent(target, eventName, options).pipe(map(function (args) { return isArray(args) ? 
resultSelector.apply(void 0, args) : resultSelector(args); }));\n }\n return new Observable(function (subscriber) {\n function handler(e) {\n if (arguments.length > 1) {\n subscriber.next(Array.prototype.slice.call(arguments));\n }\n else {\n subscriber.next(e);\n }\n }\n setupSubscription(target, eventName, handler, subscriber, options);\n });\n}\nfunction setupSubscription(sourceObj, eventName, handler, subscriber, options) {\n var unsubscribe;\n if (isEventTarget(sourceObj)) {\n var source_1 = sourceObj;\n sourceObj.addEventListener(eventName, handler, options);\n unsubscribe = function () { return source_1.removeEventListener(eventName, handler, options); };\n }\n else if (isJQueryStyleEventEmitter(sourceObj)) {\n var source_2 = sourceObj;\n sourceObj.on(eventName, handler);\n unsubscribe = function () { return source_2.off(eventName, handler); };\n }\n else if (isNodeStyleEventEmitter(sourceObj)) {\n var source_3 = sourceObj;\n sourceObj.addListener(eventName, handler);\n unsubscribe = function () { return source_3.removeListener(eventName, handler); };\n }\n else if (sourceObj && sourceObj.length) {\n for (var i = 0, len = sourceObj.length; i < len; i++) {\n setupSubscription(sourceObj[i], eventName, handler, subscriber, options);\n }\n }\n else {\n throw new TypeError('Invalid event target');\n }\n subscriber.add(unsubscribe);\n}\nfunction isNodeStyleEventEmitter(sourceObj) {\n return sourceObj && typeof sourceObj.addListener === 'function' && typeof sourceObj.removeListener === 'function';\n}\nfunction isJQueryStyleEventEmitter(sourceObj) {\n return sourceObj && typeof sourceObj.on === 'function' && typeof sourceObj.off === 'function';\n}\nfunction isEventTarget(sourceObj) {\n return sourceObj && typeof sourceObj.addEventListener === 'function' && typeof sourceObj.removeEventListener === 'function';\n}\n//# sourceMappingURL=fromEvent.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nexport function mapTo(value) {\n return function (source) { return source.lift(new MapToOperator(value)); };\n}\nvar MapToOperator = (function () {\n function MapToOperator(value) {\n this.value = value;\n }\n MapToOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new MapToSubscriber(subscriber, this.value));\n };\n return MapToOperator;\n}());\nvar MapToSubscriber = (function (_super) {\n __extends(MapToSubscriber, _super);\n function MapToSubscriber(destination, value) {\n var _this = _super.call(this, destination) || this;\n _this.value = value;\n return _this;\n }\n MapToSubscriber.prototype._next = function (x) {\n this.destination.next(this.value);\n };\n return MapToSubscriber;\n}(Subscriber));\n//# sourceMappingURL=mapTo.js.map","import { Observable } from '../Observable';\nimport { isScheduler } from '../util/isScheduler';\nimport { mergeAll } from '../operators/mergeAll';\nimport { fromArray } from './fromArray';\nexport function merge() {\n var observables = [];\n for (var _i = 0; _i < arguments.length; _i++) {\n observables[_i] = arguments[_i];\n }\n var concurrent = Number.POSITIVE_INFINITY;\n var scheduler = undefined;\n var last = observables[observables.length - 1];\n if (isScheduler(last)) {\n scheduler = observables.pop();\n if (observables.length > 1 && typeof observables[observables.length - 1] === 'number') {\n concurrent = observables.pop();\n }\n }\n else if (typeof last === 'number') {\n concurrent = observables.pop();\n }\n if (!scheduler && observables.length === 1 && observables[0] instanceof 
Observable) {\n return observables[0];\n }\n return mergeAll(concurrent)(fromArray(observables, scheduler));\n}\n//# sourceMappingURL=merge.js.map","import { Observable } from '../Observable';\nimport { isArray } from '../util/isArray';\nimport { isFunction } from '../util/isFunction';\nimport { map } from '../operators/map';\nexport function fromEventPattern(addHandler, removeHandler, resultSelector) {\n if (resultSelector) {\n return fromEventPattern(addHandler, removeHandler).pipe(map(function (args) { return isArray(args) ? resultSelector.apply(void 0, args) : resultSelector(args); }));\n }\n return new Observable(function (subscriber) {\n var handler = function () {\n var e = [];\n for (var _i = 0; _i < arguments.length; _i++) {\n e[_i] = arguments[_i];\n }\n return subscriber.next(e.length === 1 ? e[0] : e);\n };\n var retValue;\n try {\n retValue = addHandler(handler);\n }\n catch (err) {\n subscriber.error(err);\n return undefined;\n }\n if (!isFunction(removeHandler)) {\n return undefined;\n }\n return function () { return removeHandler(handler, retValue); };\n });\n}\n//# sourceMappingURL=fromEventPattern.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nexport function filter(predicate, thisArg) {\n return function filterOperatorFunction(source) {\n return source.lift(new FilterOperator(predicate, thisArg));\n };\n}\nvar FilterOperator = (function () {\n function FilterOperator(predicate, thisArg) {\n this.predicate = predicate;\n this.thisArg = thisArg;\n }\n FilterOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new FilterSubscriber(subscriber, this.predicate, this.thisArg));\n };\n return FilterOperator;\n}());\nvar FilterSubscriber = (function (_super) {\n __extends(FilterSubscriber, _super);\n function FilterSubscriber(destination, predicate, thisArg) {\n var _this = _super.call(this, destination) || this;\n _this.predicate = predicate;\n _this.thisArg = thisArg;\n _this.count = 0;\n return _this;\n }\n FilterSubscriber.prototype._next = function (value) {\n var result;\n try {\n result = this.predicate.call(this.thisArg, value, this.count++);\n }\n catch (err) {\n this.destination.error(err);\n return;\n }\n if (result) {\n this.destination.next(value);\n }\n };\n return FilterSubscriber;\n}(Subscriber));\n//# sourceMappingURL=filter.js.map","import { __extends } from \"tslib\";\nimport { Subject } from './Subject';\nimport { ObjectUnsubscribedError } from './util/ObjectUnsubscribedError';\nvar BehaviorSubject = (function (_super) {\n __extends(BehaviorSubject, _super);\n function BehaviorSubject(_value) {\n var _this = _super.call(this) || this;\n _this._value = _value;\n return _this;\n }\n Object.defineProperty(BehaviorSubject.prototype, \"value\", {\n get: function () {\n return this.getValue();\n },\n enumerable: true,\n configurable: true\n });\n BehaviorSubject.prototype._subscribe = function (subscriber) {\n var subscription = _super.prototype._subscribe.call(this, subscriber);\n if (subscription && !subscription.closed) {\n subscriber.next(this._value);\n }\n return subscription;\n };\n BehaviorSubject.prototype.getValue = function () {\n if (this.hasError) {\n throw this.thrownError;\n }\n else if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n else {\n return this._value;\n }\n };\n BehaviorSubject.prototype.next = function (value) {\n _super.prototype.next.call(this, this._value = value);\n };\n return BehaviorSubject;\n}(Subject));\nexport { BehaviorSubject };\n//# 
sourceMappingURL=BehaviorSubject.js.map","import { map } from './map';\nexport function pluck() {\n var properties = [];\n for (var _i = 0; _i < arguments.length; _i++) {\n properties[_i] = arguments[_i];\n }\n var length = properties.length;\n if (length === 0) {\n throw new Error('list of properties cannot be empty.');\n }\n return map(function (x) {\n var currentProp = x;\n for (var i = 0; i < length; i++) {\n var p = currentProp[properties[i]];\n if (typeof p !== 'undefined') {\n currentProp = p;\n }\n else {\n return undefined;\n }\n }\n return currentProp;\n });\n}\n//# sourceMappingURL=pluck.js.map","import { __extends } from \"tslib\";\nimport { OuterSubscriber } from '../OuterSubscriber';\nimport { subscribeToResult } from '../util/subscribeToResult';\nexport var defaultThrottleConfig = {\n leading: true,\n trailing: false\n};\nexport function throttle(durationSelector, config) {\n if (config === void 0) { config = defaultThrottleConfig; }\n return function (source) { return source.lift(new ThrottleOperator(durationSelector, !!config.leading, !!config.trailing)); };\n}\nvar ThrottleOperator = (function () {\n function ThrottleOperator(durationSelector, leading, trailing) {\n this.durationSelector = durationSelector;\n this.leading = leading;\n this.trailing = trailing;\n }\n ThrottleOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new ThrottleSubscriber(subscriber, this.durationSelector, this.leading, this.trailing));\n };\n return ThrottleOperator;\n}());\nvar ThrottleSubscriber = (function (_super) {\n __extends(ThrottleSubscriber, _super);\n function ThrottleSubscriber(destination, durationSelector, _leading, _trailing) {\n var _this = _super.call(this, destination) || this;\n _this.destination = destination;\n _this.durationSelector = durationSelector;\n _this._leading = _leading;\n _this._trailing = _trailing;\n _this._sendValue = null;\n _this._hasValue = false;\n return _this;\n }\n ThrottleSubscriber.prototype._next = function (value) {\n this._hasValue = true;\n this._sendValue = value;\n if (!this._throttled) {\n if (this._leading) {\n this.send();\n }\n else {\n this.throttle(value);\n }\n }\n };\n ThrottleSubscriber.prototype.send = function () {\n var _a = this, _hasValue = _a._hasValue, _sendValue = _a._sendValue;\n if (_hasValue) {\n this.destination.next(_sendValue);\n this.throttle(_sendValue);\n }\n this._hasValue = false;\n this._sendValue = null;\n };\n ThrottleSubscriber.prototype.throttle = function (value) {\n var duration = this.tryDurationSelector(value);\n if (!!duration) {\n this.add(this._throttled = subscribeToResult(this, duration));\n }\n };\n ThrottleSubscriber.prototype.tryDurationSelector = function (value) {\n try {\n return this.durationSelector(value);\n }\n catch (err) {\n this.destination.error(err);\n return null;\n }\n };\n ThrottleSubscriber.prototype.throttlingDone = function () {\n var _a = this, _throttled = _a._throttled, _trailing = _a._trailing;\n if (_throttled) {\n _throttled.unsubscribe();\n }\n this._throttled = null;\n if (_trailing) {\n this.send();\n }\n };\n ThrottleSubscriber.prototype.notifyNext = function (outerValue, innerValue, outerIndex, innerIndex, innerSub) {\n this.throttlingDone();\n };\n ThrottleSubscriber.prototype.notifyComplete = function () {\n this.throttlingDone();\n };\n return ThrottleSubscriber;\n}(OuterSubscriber));\n//# sourceMappingURL=throttle.js.map","import { switchMap } from './switchMap';\nexport function switchMapTo(innerObservable, resultSelector) {\n return 
resultSelector ? switchMap(function () { return innerObservable; }, resultSelector) : switchMap(function () { return innerObservable; });\n}\n//# sourceMappingURL=switchMapTo.js.map","import { __extends } from \"tslib\";\nimport { OuterSubscriber } from '../OuterSubscriber';\nimport { subscribeToResult } from '../util/subscribeToResult';\nexport function sample(notifier) {\n return function (source) { return source.lift(new SampleOperator(notifier)); };\n}\nvar SampleOperator = (function () {\n function SampleOperator(notifier) {\n this.notifier = notifier;\n }\n SampleOperator.prototype.call = function (subscriber, source) {\n var sampleSubscriber = new SampleSubscriber(subscriber);\n var subscription = source.subscribe(sampleSubscriber);\n subscription.add(subscribeToResult(sampleSubscriber, this.notifier));\n return subscription;\n };\n return SampleOperator;\n}());\nvar SampleSubscriber = (function (_super) {\n __extends(SampleSubscriber, _super);\n function SampleSubscriber() {\n var _this = _super !== null && _super.apply(this, arguments) || this;\n _this.hasValue = false;\n return _this;\n }\n SampleSubscriber.prototype._next = function (value) {\n this.value = value;\n this.hasValue = true;\n };\n SampleSubscriber.prototype.notifyNext = function (outerValue, innerValue, outerIndex, innerIndex, innerSub) {\n this.emitValue();\n };\n SampleSubscriber.prototype.notifyComplete = function () {\n this.emitValue();\n };\n SampleSubscriber.prototype.emitValue = function () {\n if (this.hasValue) {\n this.hasValue = false;\n this.destination.next(this.value);\n }\n };\n return SampleSubscriber;\n}(OuterSubscriber));\n//# sourceMappingURL=sample.js.map","import { Observable } from '../Observable';\nimport { noop } from '../util/noop';\nexport var NEVER = new Observable(noop);\nexport function never() {\n return NEVER;\n}\n//# sourceMappingURL=never.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nexport function skip(count) {\n return function (source) { return source.lift(new SkipOperator(count)); };\n}\nvar SkipOperator = (function () {\n function SkipOperator(total) {\n this.total = total;\n }\n SkipOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new SkipSubscriber(subscriber, this.total));\n };\n return SkipOperator;\n}());\nvar SkipSubscriber = (function (_super) {\n __extends(SkipSubscriber, _super);\n function SkipSubscriber(destination, total) {\n var _this = _super.call(this, destination) || this;\n _this.total = total;\n _this.count = 0;\n return _this;\n }\n SkipSubscriber.prototype._next = function (x) {\n if (++this.count > this.total) {\n this.destination.next(x);\n }\n };\n return SkipSubscriber;\n}(Subscriber));\n//# sourceMappingURL=skip.js.map","import { __extends } from \"tslib\";\nimport { OuterSubscriber } from '../OuterSubscriber';\nimport { InnerSubscriber } from '../InnerSubscriber';\nimport { subscribeToResult } from '../util/subscribeToResult';\nexport function catchError(selector) {\n return function catchErrorOperatorFunction(source) {\n var operator = new CatchOperator(selector);\n var caught = source.lift(operator);\n return (operator.caught = caught);\n };\n}\nvar CatchOperator = (function () {\n function CatchOperator(selector) {\n this.selector = selector;\n }\n CatchOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new CatchSubscriber(subscriber, this.selector, this.caught));\n };\n return CatchOperator;\n}());\nvar CatchSubscriber = (function 
(_super) {\n __extends(CatchSubscriber, _super);\n function CatchSubscriber(destination, selector, caught) {\n var _this = _super.call(this, destination) || this;\n _this.selector = selector;\n _this.caught = caught;\n return _this;\n }\n CatchSubscriber.prototype.error = function (err) {\n if (!this.isStopped) {\n var result = void 0;\n try {\n result = this.selector(err, this.caught);\n }\n catch (err2) {\n _super.prototype.error.call(this, err2);\n return;\n }\n this._unsubscribeAndRecycle();\n var innerSubscriber = new InnerSubscriber(this, undefined, undefined);\n this.add(innerSubscriber);\n var innerSubscription = subscribeToResult(this, result, undefined, undefined, innerSubscriber);\n if (innerSubscription !== innerSubscriber) {\n this.add(innerSubscription);\n }\n }\n };\n return CatchSubscriber;\n}(OuterSubscriber));\n//# sourceMappingURL=catchError.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nimport { async } from '../scheduler/async';\nexport function debounceTime(dueTime, scheduler) {\n if (scheduler === void 0) { scheduler = async; }\n return function (source) { return source.lift(new DebounceTimeOperator(dueTime, scheduler)); };\n}\nvar DebounceTimeOperator = (function () {\n function DebounceTimeOperator(dueTime, scheduler) {\n this.dueTime = dueTime;\n this.scheduler = scheduler;\n }\n DebounceTimeOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new DebounceTimeSubscriber(subscriber, this.dueTime, this.scheduler));\n };\n return DebounceTimeOperator;\n}());\nvar DebounceTimeSubscriber = (function (_super) {\n __extends(DebounceTimeSubscriber, _super);\n function DebounceTimeSubscriber(destination, dueTime, scheduler) {\n var _this = _super.call(this, destination) || this;\n _this.dueTime = dueTime;\n _this.scheduler = scheduler;\n _this.debouncedSubscription = null;\n _this.lastValue = null;\n _this.hasValue = false;\n return _this;\n }\n DebounceTimeSubscriber.prototype._next = function (value) {\n this.clearDebounce();\n this.lastValue = value;\n this.hasValue = true;\n this.add(this.debouncedSubscription = this.scheduler.schedule(dispatchNext, this.dueTime, this));\n };\n DebounceTimeSubscriber.prototype._complete = function () {\n this.debouncedNext();\n this.destination.complete();\n };\n DebounceTimeSubscriber.prototype.debouncedNext = function () {\n this.clearDebounce();\n if (this.hasValue) {\n var lastValue = this.lastValue;\n this.lastValue = null;\n this.hasValue = false;\n this.destination.next(lastValue);\n }\n };\n DebounceTimeSubscriber.prototype.clearDebounce = function () {\n var debouncedSubscription = this.debouncedSubscription;\n if (debouncedSubscription !== null) {\n this.remove(debouncedSubscription);\n debouncedSubscription.unsubscribe();\n this.debouncedSubscription = null;\n }\n };\n return DebounceTimeSubscriber;\n}(Subscriber));\nfunction dispatchNext(subscriber) {\n subscriber.debouncedNext();\n}\n//# sourceMappingURL=debounceTime.js.map","import { defer } from './defer';\nimport { EMPTY } from './empty';\nexport function iif(condition, trueResult, falseResult) {\n if (trueResult === void 0) { trueResult = EMPTY; }\n if (falseResult === void 0) { falseResult = EMPTY; }\n return defer(function () { return condition() ? 
trueResult : falseResult; });\n}\n//# sourceMappingURL=iif.js.map","import _curry1 from \"./internal/_curry1.js\";\nimport keys from \"./keys.js\";\n/**\n * Returns a list of all the enumerable own properties of the supplied object.\n * Note that the order of the output array is not guaranteed across different\n * JS platforms.\n *\n * @func\n * @memberOf R\n * @since v0.1.0\n * @category Object\n * @sig {k: v} -> [v]\n * @param {Object} obj The object to extract values from\n * @return {Array} An array of the values of the object's own properties.\n * @see R.valuesIn, R.keys\n * @example\n *\n * R.values({a: 1, b: 2, c: 3}); //=> [1, 2, 3]\n */\n\nvar values =\n/*#__PURE__*/\n_curry1(function values(obj) {\n var props = keys(obj);\n var len = props.length;\n var vals = [];\n var idx = 0;\n\n while (idx < len) {\n vals[idx] = obj[props[idx]];\n idx += 1;\n }\n\n return vals;\n});\n\nexport default values;","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nexport function refCount() {\n return function refCountOperatorFunction(source) {\n return source.lift(new RefCountOperator(source));\n };\n}\nvar RefCountOperator = (function () {\n function RefCountOperator(connectable) {\n this.connectable = connectable;\n }\n RefCountOperator.prototype.call = function (subscriber, source) {\n var connectable = this.connectable;\n connectable._refCount++;\n var refCounter = new RefCountSubscriber(subscriber, connectable);\n var subscription = source.subscribe(refCounter);\n if (!refCounter.closed) {\n refCounter.connection = connectable.connect();\n }\n return subscription;\n };\n return RefCountOperator;\n}());\nvar RefCountSubscriber = (function (_super) {\n __extends(RefCountSubscriber, _super);\n function RefCountSubscriber(destination, connectable) {\n var _this = _super.call(this, destination) || this;\n _this.connectable = connectable;\n _this.connection = null;\n return _this;\n }\n RefCountSubscriber.prototype._unsubscribe = function () {\n var connectable = this.connectable;\n if (!connectable) {\n this.connection = null;\n return;\n }\n this.connectable = null;\n var refCount = connectable._refCount;\n if (refCount <= 0) {\n this.connection = null;\n return;\n }\n connectable._refCount = refCount - 1;\n if (refCount > 1) {\n this.connection = null;\n return;\n }\n var connection = this.connection;\n var sharedConnection = connectable._connection;\n this.connection = null;\n if (sharedConnection && (!connection || sharedConnection === connection)) {\n sharedConnection.unsubscribe();\n }\n };\n return RefCountSubscriber;\n}(Subscriber));\n//# sourceMappingURL=refCount.js.map","import { __extends } from \"tslib\";\nimport { SubjectSubscriber } from '../Subject';\nimport { Observable } from '../Observable';\nimport { Subscriber } from '../Subscriber';\nimport { Subscription } from '../Subscription';\nimport { refCount as higherOrderRefCount } from '../operators/refCount';\nvar ConnectableObservable = (function (_super) {\n __extends(ConnectableObservable, _super);\n function ConnectableObservable(source, subjectFactory) {\n var _this = _super.call(this) || this;\n _this.source = source;\n _this.subjectFactory = subjectFactory;\n _this._refCount = 0;\n _this._isComplete = false;\n return _this;\n }\n ConnectableObservable.prototype._subscribe = function (subscriber) {\n return this.getSubject().subscribe(subscriber);\n };\n ConnectableObservable.prototype.getSubject = function () {\n var subject = this._subject;\n if (!subject || subject.isStopped) {\n 
this._subject = this.subjectFactory();\n }\n return this._subject;\n };\n ConnectableObservable.prototype.connect = function () {\n var connection = this._connection;\n if (!connection) {\n this._isComplete = false;\n connection = this._connection = new Subscription();\n connection.add(this.source\n .subscribe(new ConnectableSubscriber(this.getSubject(), this)));\n if (connection.closed) {\n this._connection = null;\n connection = Subscription.EMPTY;\n }\n }\n return connection;\n };\n ConnectableObservable.prototype.refCount = function () {\n return higherOrderRefCount()(this);\n };\n return ConnectableObservable;\n}(Observable));\nexport { ConnectableObservable };\nexport var connectableObservableDescriptor = (function () {\n var connectableProto = ConnectableObservable.prototype;\n return {\n operator: { value: null },\n _refCount: { value: 0, writable: true },\n _subject: { value: null, writable: true },\n _connection: { value: null, writable: true },\n _subscribe: { value: connectableProto._subscribe },\n _isComplete: { value: connectableProto._isComplete, writable: true },\n getSubject: { value: connectableProto.getSubject },\n connect: { value: connectableProto.connect },\n refCount: { value: connectableProto.refCount }\n };\n})();\nvar ConnectableSubscriber = (function (_super) {\n __extends(ConnectableSubscriber, _super);\n function ConnectableSubscriber(destination, connectable) {\n var _this = _super.call(this, destination) || this;\n _this.connectable = connectable;\n return _this;\n }\n ConnectableSubscriber.prototype._error = function (err) {\n this._unsubscribe();\n _super.prototype._error.call(this, err);\n };\n ConnectableSubscriber.prototype._complete = function () {\n this.connectable._isComplete = true;\n this._unsubscribe();\n _super.prototype._complete.call(this);\n };\n ConnectableSubscriber.prototype._unsubscribe = function () {\n var connectable = this.connectable;\n if (connectable) {\n this.connectable = null;\n var connection = connectable._connection;\n connectable._refCount = 0;\n connectable._subject = null;\n connectable._connection = null;\n if (connection) {\n connection.unsubscribe();\n }\n }\n };\n return ConnectableSubscriber;\n}(SubjectSubscriber));\nvar RefCountOperator = (function () {\n function RefCountOperator(connectable) {\n this.connectable = connectable;\n }\n RefCountOperator.prototype.call = function (subscriber, source) {\n var connectable = this.connectable;\n connectable._refCount++;\n var refCounter = new RefCountSubscriber(subscriber, connectable);\n var subscription = source.subscribe(refCounter);\n if (!refCounter.closed) {\n refCounter.connection = connectable.connect();\n }\n return subscription;\n };\n return RefCountOperator;\n}());\nvar RefCountSubscriber = (function (_super) {\n __extends(RefCountSubscriber, _super);\n function RefCountSubscriber(destination, connectable) {\n var _this = _super.call(this, destination) || this;\n _this.connectable = connectable;\n return _this;\n }\n RefCountSubscriber.prototype._unsubscribe = function () {\n var connectable = this.connectable;\n if (!connectable) {\n this.connection = null;\n return;\n }\n this.connectable = null;\n var refCount = connectable._refCount;\n if (refCount <= 0) {\n this.connection = null;\n return;\n }\n connectable._refCount = refCount - 1;\n if (refCount > 1) {\n this.connection = null;\n return;\n }\n var connection = this.connection;\n var sharedConnection = connectable._connection;\n this.connection = null;\n if (sharedConnection && (!connection || 
sharedConnection === connection)) {\n sharedConnection.unsubscribe();\n }\n };\n return RefCountSubscriber;\n}(Subscriber));\n//# sourceMappingURL=ConnectableObservable.js.map","import { connectableObservableDescriptor } from '../observable/ConnectableObservable';\nexport function multicast(subjectOrSubjectFactory, selector) {\n return function multicastOperatorFunction(source) {\n var subjectFactory;\n if (typeof subjectOrSubjectFactory === 'function') {\n subjectFactory = subjectOrSubjectFactory;\n }\n else {\n subjectFactory = function subjectFactory() {\n return subjectOrSubjectFactory;\n };\n }\n if (typeof selector === 'function') {\n return source.lift(new MulticastOperator(subjectFactory, selector));\n }\n var connectable = Object.create(source, connectableObservableDescriptor);\n connectable.source = source;\n connectable.subjectFactory = subjectFactory;\n return connectable;\n };\n}\nvar MulticastOperator = (function () {\n function MulticastOperator(subjectFactory, selector) {\n this.subjectFactory = subjectFactory;\n this.selector = selector;\n }\n MulticastOperator.prototype.call = function (subscriber, source) {\n var selector = this.selector;\n var subject = this.subjectFactory();\n var subscription = selector(subject).subscribe(subscriber);\n subscription.add(source.subscribe(subject));\n return subscription;\n };\n return MulticastOperator;\n}());\nexport { MulticastOperator };\n//# sourceMappingURL=multicast.js.map","import { multicast } from './multicast';\nimport { refCount } from './refCount';\nimport { Subject } from '../Subject';\nfunction shareSubjectFactory() {\n return new Subject();\n}\nexport function share() {\n return function (source) { return refCount()(multicast(shareSubjectFactory)(source)); };\n}\n//# sourceMappingURL=share.js.map","export default function _identity(x) {\n return x;\n}","import _curry1 from \"./internal/_curry1.js\";\nimport _identity from \"./internal/_identity.js\";\n/**\n * A function that does nothing but return the parameter supplied to it. 
Good\n * as a default or placeholder function.\n *\n * @func\n * @memberOf R\n * @since v0.1.0\n * @category Function\n * @sig a -> a\n * @param {*} x The value to return.\n * @return {*} The input value, `x`.\n * @example\n *\n * R.identity(1); //=> 1\n *\n * const obj = {};\n * R.identity(obj) === obj; //=> true\n * @symb R.identity(a) = a\n */\n\nvar identity =\n/*#__PURE__*/\n_curry1(_identity);\n\nexport default identity;","import { __extends } from \"tslib\";\nimport { root } from '../../util/root';\nimport { Observable } from '../../Observable';\nimport { Subscriber } from '../../Subscriber';\nimport { map } from '../../operators/map';\nfunction getCORSRequest() {\n if (root.XMLHttpRequest) {\n return new root.XMLHttpRequest();\n }\n else if (!!root.XDomainRequest) {\n return new root.XDomainRequest();\n }\n else {\n throw new Error('CORS is not supported by your browser');\n }\n}\nfunction getXMLHttpRequest() {\n if (root.XMLHttpRequest) {\n return new root.XMLHttpRequest();\n }\n else {\n var progId = void 0;\n try {\n var progIds = ['Msxml2.XMLHTTP', 'Microsoft.XMLHTTP', 'Msxml2.XMLHTTP.4.0'];\n for (var i = 0; i < 3; i++) {\n try {\n progId = progIds[i];\n if (new root.ActiveXObject(progId)) {\n break;\n }\n }\n catch (e) {\n }\n }\n return new root.ActiveXObject(progId);\n }\n catch (e) {\n throw new Error('XMLHttpRequest is not supported by your browser');\n }\n }\n}\nexport function ajaxGet(url, headers) {\n return new AjaxObservable({ method: 'GET', url: url, headers: headers });\n}\nexport function ajaxPost(url, body, headers) {\n return new AjaxObservable({ method: 'POST', url: url, body: body, headers: headers });\n}\nexport function ajaxDelete(url, headers) {\n return new AjaxObservable({ method: 'DELETE', url: url, headers: headers });\n}\nexport function ajaxPut(url, body, headers) {\n return new AjaxObservable({ method: 'PUT', url: url, body: body, headers: headers });\n}\nexport function ajaxPatch(url, body, headers) {\n return new AjaxObservable({ method: 'PATCH', url: url, body: body, headers: headers });\n}\nvar mapResponse = map(function (x, index) { return x.response; });\nexport function ajaxGetJSON(url, headers) {\n return mapResponse(new AjaxObservable({\n method: 'GET',\n url: url,\n responseType: 'json',\n headers: headers\n }));\n}\nvar AjaxObservable = (function (_super) {\n __extends(AjaxObservable, _super);\n function AjaxObservable(urlOrRequest) {\n var _this = _super.call(this) || this;\n var request = {\n async: true,\n createXHR: function () {\n return this.crossDomain ? 
getCORSRequest() : getXMLHttpRequest();\n },\n crossDomain: true,\n withCredentials: false,\n headers: {},\n method: 'GET',\n responseType: 'json',\n timeout: 0\n };\n if (typeof urlOrRequest === 'string') {\n request.url = urlOrRequest;\n }\n else {\n for (var prop in urlOrRequest) {\n if (urlOrRequest.hasOwnProperty(prop)) {\n request[prop] = urlOrRequest[prop];\n }\n }\n }\n _this.request = request;\n return _this;\n }\n AjaxObservable.prototype._subscribe = function (subscriber) {\n return new AjaxSubscriber(subscriber, this.request);\n };\n AjaxObservable.create = (function () {\n var create = function (urlOrRequest) {\n return new AjaxObservable(urlOrRequest);\n };\n create.get = ajaxGet;\n create.post = ajaxPost;\n create.delete = ajaxDelete;\n create.put = ajaxPut;\n create.patch = ajaxPatch;\n create.getJSON = ajaxGetJSON;\n return create;\n })();\n return AjaxObservable;\n}(Observable));\nexport { AjaxObservable };\nvar AjaxSubscriber = (function (_super) {\n __extends(AjaxSubscriber, _super);\n function AjaxSubscriber(destination, request) {\n var _this = _super.call(this, destination) || this;\n _this.request = request;\n _this.done = false;\n var headers = request.headers = request.headers || {};\n if (!request.crossDomain && !_this.getHeader(headers, 'X-Requested-With')) {\n headers['X-Requested-With'] = 'XMLHttpRequest';\n }\n var contentTypeHeader = _this.getHeader(headers, 'Content-Type');\n if (!contentTypeHeader && !(root.FormData && request.body instanceof root.FormData) && typeof request.body !== 'undefined') {\n headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8';\n }\n request.body = _this.serializeBody(request.body, _this.getHeader(request.headers, 'Content-Type'));\n _this.send();\n return _this;\n }\n AjaxSubscriber.prototype.next = function (e) {\n this.done = true;\n var _a = this, xhr = _a.xhr, request = _a.request, destination = _a.destination;\n var result;\n try {\n result = new AjaxResponse(e, xhr, request);\n }\n catch (err) {\n return destination.error(err);\n }\n destination.next(result);\n };\n AjaxSubscriber.prototype.send = function () {\n var _a = this, request = _a.request, _b = _a.request, user = _b.user, method = _b.method, url = _b.url, async = _b.async, password = _b.password, headers = _b.headers, body = _b.body;\n try {\n var xhr = this.xhr = request.createXHR();\n this.setupEvents(xhr, request);\n if (user) {\n xhr.open(method, url, async, user, password);\n }\n else {\n xhr.open(method, url, async);\n }\n if (async) {\n xhr.timeout = request.timeout;\n xhr.responseType = request.responseType;\n }\n if ('withCredentials' in xhr) {\n xhr.withCredentials = !!request.withCredentials;\n }\n this.setHeaders(xhr, headers);\n if (body) {\n xhr.send(body);\n }\n else {\n xhr.send();\n }\n }\n catch (err) {\n this.error(err);\n }\n };\n AjaxSubscriber.prototype.serializeBody = function (body, contentType) {\n if (!body || typeof body === 'string') {\n return body;\n }\n else if (root.FormData && body instanceof root.FormData) {\n return body;\n }\n if (contentType) {\n var splitIndex = contentType.indexOf(';');\n if (splitIndex !== -1) {\n contentType = contentType.substring(0, splitIndex);\n }\n }\n switch (contentType) {\n case 'application/x-www-form-urlencoded':\n return Object.keys(body).map(function (key) { return encodeURIComponent(key) + \"=\" + encodeURIComponent(body[key]); }).join('&');\n case 'application/json':\n return JSON.stringify(body);\n default:\n return body;\n }\n };\n AjaxSubscriber.prototype.setHeaders 
= function (xhr, headers) {\n for (var key in headers) {\n if (headers.hasOwnProperty(key)) {\n xhr.setRequestHeader(key, headers[key]);\n }\n }\n };\n AjaxSubscriber.prototype.getHeader = function (headers, headerName) {\n for (var key in headers) {\n if (key.toLowerCase() === headerName.toLowerCase()) {\n return headers[key];\n }\n }\n return undefined;\n };\n AjaxSubscriber.prototype.setupEvents = function (xhr, request) {\n var progressSubscriber = request.progressSubscriber;\n function xhrTimeout(e) {\n var _a = xhrTimeout, subscriber = _a.subscriber, progressSubscriber = _a.progressSubscriber, request = _a.request;\n if (progressSubscriber) {\n progressSubscriber.error(e);\n }\n var error;\n try {\n error = new AjaxTimeoutError(this, request);\n }\n catch (err) {\n error = err;\n }\n subscriber.error(error);\n }\n xhr.ontimeout = xhrTimeout;\n xhrTimeout.request = request;\n xhrTimeout.subscriber = this;\n xhrTimeout.progressSubscriber = progressSubscriber;\n if (xhr.upload && 'withCredentials' in xhr) {\n if (progressSubscriber) {\n var xhrProgress_1;\n xhrProgress_1 = function (e) {\n var progressSubscriber = xhrProgress_1.progressSubscriber;\n progressSubscriber.next(e);\n };\n if (root.XDomainRequest) {\n xhr.onprogress = xhrProgress_1;\n }\n else {\n xhr.upload.onprogress = xhrProgress_1;\n }\n xhrProgress_1.progressSubscriber = progressSubscriber;\n }\n var xhrError_1;\n xhrError_1 = function (e) {\n var _a = xhrError_1, progressSubscriber = _a.progressSubscriber, subscriber = _a.subscriber, request = _a.request;\n if (progressSubscriber) {\n progressSubscriber.error(e);\n }\n var error;\n try {\n error = new AjaxError('ajax error', this, request);\n }\n catch (err) {\n error = err;\n }\n subscriber.error(error);\n };\n xhr.onerror = xhrError_1;\n xhrError_1.request = request;\n xhrError_1.subscriber = this;\n xhrError_1.progressSubscriber = progressSubscriber;\n }\n function xhrReadyStateChange(e) {\n return;\n }\n xhr.onreadystatechange = xhrReadyStateChange;\n xhrReadyStateChange.subscriber = this;\n xhrReadyStateChange.progressSubscriber = progressSubscriber;\n xhrReadyStateChange.request = request;\n function xhrLoad(e) {\n var _a = xhrLoad, subscriber = _a.subscriber, progressSubscriber = _a.progressSubscriber, request = _a.request;\n if (this.readyState === 4) {\n var status_1 = this.status === 1223 ? 204 : this.status;\n var response = (this.responseType === 'text' ? (this.response || this.responseText) : this.response);\n if (status_1 === 0) {\n status_1 = response ? 
200 : 0;\n }\n if (status_1 < 400) {\n if (progressSubscriber) {\n progressSubscriber.complete();\n }\n subscriber.next(e);\n subscriber.complete();\n }\n else {\n if (progressSubscriber) {\n progressSubscriber.error(e);\n }\n var error = void 0;\n try {\n error = new AjaxError('ajax error ' + status_1, this, request);\n }\n catch (err) {\n error = err;\n }\n subscriber.error(error);\n }\n }\n }\n xhr.onload = xhrLoad;\n xhrLoad.subscriber = this;\n xhrLoad.progressSubscriber = progressSubscriber;\n xhrLoad.request = request;\n };\n AjaxSubscriber.prototype.unsubscribe = function () {\n var _a = this, done = _a.done, xhr = _a.xhr;\n if (!done && xhr && xhr.readyState !== 4 && typeof xhr.abort === 'function') {\n xhr.abort();\n }\n _super.prototype.unsubscribe.call(this);\n };\n return AjaxSubscriber;\n}(Subscriber));\nexport { AjaxSubscriber };\nvar AjaxResponse = (function () {\n function AjaxResponse(originalEvent, xhr, request) {\n this.originalEvent = originalEvent;\n this.xhr = xhr;\n this.request = request;\n this.status = xhr.status;\n this.responseType = xhr.responseType || request.responseType;\n this.response = parseXhrResponse(this.responseType, xhr);\n }\n return AjaxResponse;\n}());\nexport { AjaxResponse };\nvar AjaxErrorImpl = (function () {\n function AjaxErrorImpl(message, xhr, request) {\n Error.call(this);\n this.message = message;\n this.name = 'AjaxError';\n this.xhr = xhr;\n this.request = request;\n this.status = xhr.status;\n this.responseType = xhr.responseType || request.responseType;\n this.response = parseXhrResponse(this.responseType, xhr);\n return this;\n }\n AjaxErrorImpl.prototype = Object.create(Error.prototype);\n return AjaxErrorImpl;\n})();\nexport var AjaxError = AjaxErrorImpl;\nfunction parseJson(xhr) {\n if ('response' in xhr) {\n return xhr.responseType ? xhr.response : JSON.parse(xhr.response || xhr.responseText || 'null');\n }\n else {\n return JSON.parse(xhr.responseText || 'null');\n }\n}\nfunction parseXhrResponse(responseType, xhr) {\n switch (responseType) {\n case 'json':\n return parseJson(xhr);\n case 'xml':\n return xhr.responseXML;\n case 'text':\n default:\n return ('response' in xhr) ? 
xhr.response : xhr.responseText;\n }\n}\nvar AjaxTimeoutErrorImpl = (function () {\n function AjaxTimeoutErrorImpl(xhr, request) {\n AjaxError.call(this, 'ajax timeout', xhr, request);\n this.name = 'AjaxTimeoutError';\n return this;\n }\n AjaxTimeoutErrorImpl.prototype = Object.create(AjaxError.prototype);\n return AjaxTimeoutErrorImpl;\n})();\nexport var AjaxTimeoutError = AjaxTimeoutErrorImpl;\n//# sourceMappingURL=AjaxObservable.js.map","import { AjaxObservable } from './AjaxObservable';\nexport var ajax = (function () { return AjaxObservable.create; })();\n//# sourceMappingURL=ajax.js.map","var ArgumentOutOfRangeErrorImpl = (function () {\n function ArgumentOutOfRangeErrorImpl() {\n Error.call(this);\n this.message = 'argument out of range';\n this.name = 'ArgumentOutOfRangeError';\n return this;\n }\n ArgumentOutOfRangeErrorImpl.prototype = Object.create(Error.prototype);\n return ArgumentOutOfRangeErrorImpl;\n})();\nexport var ArgumentOutOfRangeError = ArgumentOutOfRangeErrorImpl;\n//# sourceMappingURL=ArgumentOutOfRangeError.js.map","import { __extends } from \"tslib\";\nimport { Subscriber } from '../Subscriber';\nimport { ArgumentOutOfRangeError } from '../util/ArgumentOutOfRangeError';\nimport { EMPTY } from '../observable/empty';\nexport function take(count) {\n return function (source) {\n if (count === 0) {\n return EMPTY;\n }\n else {\n return source.lift(new TakeOperator(count));\n }\n };\n}\nvar TakeOperator = (function () {\n function TakeOperator(total) {\n this.total = total;\n if (this.total < 0) {\n throw new ArgumentOutOfRangeError;\n }\n }\n TakeOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new TakeSubscriber(subscriber, this.total));\n };\n return TakeOperator;\n}());\nvar TakeSubscriber = (function (_super) {\n __extends(TakeSubscriber, _super);\n function TakeSubscriber(destination, total) {\n var _this = _super.call(this, destination) || this;\n _this.total = total;\n _this.count = 0;\n return _this;\n }\n TakeSubscriber.prototype._next = function (value) {\n var total = this.total;\n var count = ++this.count;\n if (count <= total) {\n this.destination.next(value);\n if (count === total) {\n this.destination.complete();\n this.unsubscribe();\n }\n }\n };\n return TakeSubscriber;\n}(Subscriber));\n//# sourceMappingURL=take.js.map","import { __extends } from \"tslib\";\nimport { async } from '../scheduler/async';\nimport { isDate } from '../util/isDate';\nimport { Subscriber } from '../Subscriber';\nimport { Notification } from '../Notification';\nexport function delay(delay, scheduler) {\n if (scheduler === void 0) { scheduler = async; }\n var absoluteDelay = isDate(delay);\n var delayFor = absoluteDelay ? 
(+delay - scheduler.now()) : Math.abs(delay);\n return function (source) { return source.lift(new DelayOperator(delayFor, scheduler)); };\n}\nvar DelayOperator = (function () {\n function DelayOperator(delay, scheduler) {\n this.delay = delay;\n this.scheduler = scheduler;\n }\n DelayOperator.prototype.call = function (subscriber, source) {\n return source.subscribe(new DelaySubscriber(subscriber, this.delay, this.scheduler));\n };\n return DelayOperator;\n}());\nvar DelaySubscriber = (function (_super) {\n __extends(DelaySubscriber, _super);\n function DelaySubscriber(destination, delay, scheduler) {\n var _this = _super.call(this, destination) || this;\n _this.delay = delay;\n _this.scheduler = scheduler;\n _this.queue = [];\n _this.active = false;\n _this.errored = false;\n return _this;\n }\n DelaySubscriber.dispatch = function (state) {\n var source = state.source;\n var queue = source.queue;\n var scheduler = state.scheduler;\n var destination = state.destination;\n while (queue.length > 0 && (queue[0].time - scheduler.now()) <= 0) {\n queue.shift().notification.observe(destination);\n }\n if (queue.length > 0) {\n var delay_1 = Math.max(0, queue[0].time - scheduler.now());\n this.schedule(state, delay_1);\n }\n else if (source.isStopped) {\n source.destination.complete();\n source.active = false;\n }\n else {\n this.unsubscribe();\n source.active = false;\n }\n };\n DelaySubscriber.prototype._schedule = function (scheduler) {\n this.active = true;\n var destination = this.destination;\n destination.add(scheduler.schedule(DelaySubscriber.dispatch, this.delay, {\n source: this, destination: this.destination, scheduler: scheduler\n }));\n };\n DelaySubscriber.prototype.scheduleNotification = function (notification) {\n if (this.errored === true) {\n return;\n }\n var scheduler = this.scheduler;\n var message = new DelayMessage(scheduler.now() + this.delay, notification);\n this.queue.push(message);\n if (this.active === false) {\n this._schedule(scheduler);\n }\n };\n DelaySubscriber.prototype._next = function (value) {\n this.scheduleNotification(Notification.createNext(value));\n };\n DelaySubscriber.prototype._error = function (err) {\n this.errored = true;\n this.queue = [];\n this.destination.error(err);\n this.unsubscribe();\n };\n DelaySubscriber.prototype._complete = function () {\n if (this.queue.length === 0) {\n this.destination.complete();\n }\n this.unsubscribe();\n };\n return DelaySubscriber;\n}(Subscriber));\nvar DelayMessage = (function () {\n function DelayMessage(time, notification) {\n this.time = time;\n this.notification = notification;\n }\n return DelayMessage;\n}());\n//# sourceMappingURL=delay.js.map","export function isDate(value) {\n return value instanceof Date && !isNaN(+value);\n}\n//# sourceMappingURL=isDate.js.map"],"sourceRoot":""} \ No newline at end of file diff --git a/assets/javascripts/worker/search.58d22e8e.min.js b/assets/javascripts/worker/search.58d22e8e.min.js new file mode 100644 index 00000000..1418ab0e --- /dev/null +++ b/assets/javascripts/worker/search.58d22e8e.min.js @@ -0,0 +1,59 @@ +!function(e){var t={};function r(n){if(t[n])return t[n].exports;var i=t[n]={i:n,l:!1,exports:{}};return e[n].call(i.exports,i,i.exports,r),i.l=!0,i.exports}r.m=e,r.c=t,r.d=function(e,t,n){r.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:n})},r.r=function(e){"undefined"!=typeof 
Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},r.t=function(e,t){if(1&t&&(e=r(e)),8&t)return e;if(4&t&&"object"==typeof e&&e&&e.__esModule)return e;var n=Object.create(null);if(r.r(n),Object.defineProperty(n,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var i in e)r.d(n,i,function(t){return e[t]}.bind(null,i));return n},r.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return r.d(t,"a",t),t},r.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},r.p="",r(r.s=4)}([function(e,t,r){"use strict"; +/*! + * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var n=/["'&<>]/;e.exports=function(e){var t,r=""+e,i=n.exec(r);if(!i)return r;var s="",o=0,a=0;for(o=i.index;o0){var u=I.utils.clone(t)||{};u.position=[o,a],u.index=i.length,i.push(new I.Token(r.slice(o,s),u))}o=s+1}}return i},I.tokenizer.separator=/[\s\-]+/ +/*! + * lunr.Pipeline + * Copyright (C) 2019 Oliver Nightingale + */,I.Pipeline=function(){this._stack=[]},I.Pipeline.registeredFunctions=Object.create(null),I.Pipeline.registerFunction=function(e,t){t in this.registeredFunctions&&I.utils.warn("Overwriting existing registered function: "+t),e.label=t,I.Pipeline.registeredFunctions[e.label]=e},I.Pipeline.warnIfFunctionNotRegistered=function(e){e.label&&e.label in this.registeredFunctions||I.utils.warn("Function is not registered with pipeline. This may cause problems when serialising the index.\n",e)},I.Pipeline.load=function(e){var t=new I.Pipeline;return e.forEach((function(e){var r=I.Pipeline.registeredFunctions[e];if(!r)throw new Error("Cannot load unregistered function: "+e);t.add(r)})),t},I.Pipeline.prototype.add=function(){var e=Array.prototype.slice.call(arguments);e.forEach((function(e){I.Pipeline.warnIfFunctionNotRegistered(e),this._stack.push(e)}),this)},I.Pipeline.prototype.after=function(e,t){I.Pipeline.warnIfFunctionNotRegistered(t);var r=this._stack.indexOf(e);if(-1==r)throw new Error("Cannot find existingFn");r+=1,this._stack.splice(r,0,t)},I.Pipeline.prototype.before=function(e,t){I.Pipeline.warnIfFunctionNotRegistered(t);var r=this._stack.indexOf(e);if(-1==r)throw new Error("Cannot find existingFn");this._stack.splice(r,0,t)},I.Pipeline.prototype.remove=function(e){var t=this._stack.indexOf(e);-1!=t&&this._stack.splice(t,1)},I.Pipeline.prototype.run=function(e){for(var t=this._stack.length,r=0;r1&&(se&&(r=i),s!=e);)n=r-t,i=t+Math.floor(n/2),s=this.elements[2*i];return s==e||s>e?2*i:sa?l+=2:o==a&&(t+=r[u+1]*n[l+1],u+=2,l+=2);return t},I.Vector.prototype.similarity=function(e){return this.dot(e)/this.magnitude()||0},I.Vector.prototype.toArray=function(){for(var e=new Array(this.elements.length/2),t=1,r=0;t0){var s,o=i.str.charAt(0);o in i.node.edges?s=i.node.edges[o]:(s=new I.TokenSet,i.node.edges[o]=s),1==i.str.length&&(s.final=!0),n.push({node:s,editsRemaining:i.editsRemaining,str:i.str.slice(1)})}if(0!=i.editsRemaining){if("*"in i.node.edges)var a=i.node.edges["*"];else{a=new I.TokenSet;i.node.edges["*"]=a}if(0==i.str.length&&(a.final=!0),n.push({node:a,editsRemaining:i.editsRemaining-1,str:i.str}),i.str.length>1&&n.push({node:i.node,editsRemaining:i.editsRemaining-1,str:i.str.slice(1)}),1==i.str.length&&(i.node.final=!0),i.str.length>=1){if("*"in i.node.edges)var u=i.node.edges["*"];else{u=new 
I.TokenSet;i.node.edges["*"]=u}1==i.str.length&&(u.final=!0),n.push({node:u,editsRemaining:i.editsRemaining-1,str:i.str.slice(1)})}if(i.str.length>1){var l,c=i.str.charAt(0),h=i.str.charAt(1);h in i.node.edges?l=i.node.edges[h]:(l=new I.TokenSet,i.node.edges[h]=l),1==i.str.length&&(l.final=!0),n.push({node:l,editsRemaining:i.editsRemaining-1,str:c+i.str.slice(2)})}}}return r},I.TokenSet.fromString=function(e){for(var t=new I.TokenSet,r=t,n=0,i=e.length;n=e;t--){var r=this.uncheckedNodes[t],n=r.child.toString();n in this.minimizedNodes?r.parent.edges[r.char]=this.minimizedNodes[n]:(r.child._str=n,this.minimizedNodes[n]=r.child),this.uncheckedNodes.pop()}} +/*! + * lunr.Index + * Copyright (C) 2019 Oliver Nightingale + */,I.Index=function(e){this.invertedIndex=e.invertedIndex,this.fieldVectors=e.fieldVectors,this.tokenSet=e.tokenSet,this.fields=e.fields,this.pipeline=e.pipeline},I.Index.prototype.search=function(e){return this.query((function(t){new I.QueryParser(e,t).parse()}))},I.Index.prototype.query=function(e){for(var t=new I.Query(this.fields),r=Object.create(null),n=Object.create(null),i=Object.create(null),s=Object.create(null),o=Object.create(null),a=0;a1?1:e},I.Builder.prototype.k1=function(e){this._k1=e},I.Builder.prototype.add=function(e,t){var r=e[this._ref],n=Object.keys(this._fields);this._documents[r]=t||{},this.documentCount+=1;for(var i=0;i=this.length)return I.QueryLexer.EOS;var e=this.str.charAt(this.pos);return this.pos+=1,e},I.QueryLexer.prototype.width=function(){return this.pos-this.start},I.QueryLexer.prototype.ignore=function(){this.start==this.pos&&(this.pos+=1),this.start=this.pos},I.QueryLexer.prototype.backup=function(){this.pos-=1},I.QueryLexer.prototype.acceptDigitRun=function(){var e,t;do{t=(e=this.next()).charCodeAt(0)}while(t>47&&t<58);e!=I.QueryLexer.EOS&&this.backup()},I.QueryLexer.prototype.more=function(){return this.pos1&&(e.backup(),e.emit(I.QueryLexer.TERM)),e.ignore(),e.more())return I.QueryLexer.lexText},I.QueryLexer.lexEditDistance=function(e){return e.ignore(),e.acceptDigitRun(),e.emit(I.QueryLexer.EDIT_DISTANCE),I.QueryLexer.lexText},I.QueryLexer.lexBoost=function(e){return e.ignore(),e.acceptDigitRun(),e.emit(I.QueryLexer.BOOST),I.QueryLexer.lexText},I.QueryLexer.lexEOS=function(e){e.width()>0&&e.emit(I.QueryLexer.TERM)},I.QueryLexer.termSeparator=I.tokenizer.separator,I.QueryLexer.lexText=function(e){for(;;){var t=e.next();if(t==I.QueryLexer.EOS)return I.QueryLexer.lexEOS;if(92!=t.charCodeAt(0)){if(":"==t)return I.QueryLexer.lexField;if("~"==t)return e.backup(),e.width()>0&&e.emit(I.QueryLexer.TERM),I.QueryLexer.lexEditDistance;if("^"==t)return e.backup(),e.width()>0&&e.emit(I.QueryLexer.TERM),I.QueryLexer.lexBoost;if("+"==t&&1===e.width())return e.emit(I.QueryLexer.PRESENCE),I.QueryLexer.lexText;if("-"==t&&1===e.width())return e.emit(I.QueryLexer.PRESENCE),I.QueryLexer.lexText;if(t.match(I.QueryLexer.termSeparator))return I.QueryLexer.lexTerm}else e.escapeCharacter()}},I.QueryParser=function(e,t){this.lexer=new I.QueryLexer(e),this.query=t,this.currentClause={},this.lexemeIdx=0},I.QueryParser.prototype.parse=function(){this.lexer.run(),this.lexemes=this.lexer.lexemes;for(var e=I.QueryParser.parseClause;e;)e=e(this);return this.query},I.QueryParser.prototype.peekLexeme=function(){return this.lexemes[this.lexemeIdx]},I.QueryParser.prototype.consumeLexeme=function(){var e=this.peekLexeme();return this.lexemeIdx+=1,e},I.QueryParser.prototype.nextClause=function(){var 
e=this.currentClause;this.query.clause(e),this.currentClause={}},I.QueryParser.parseClause=function(e){var t=e.peekLexeme();if(null!=t)switch(t.type){case I.QueryLexer.PRESENCE:return I.QueryParser.parsePresence;case I.QueryLexer.FIELD:return I.QueryParser.parseField;case I.QueryLexer.TERM:return I.QueryParser.parseTerm;default:var r="expected either a field or a term, found "+t.type;throw t.str.length>=1&&(r+=" with value '"+t.str+"'"),new I.QueryParseError(r,t.start,t.end)}},I.QueryParser.parsePresence=function(e){var t=e.consumeLexeme();if(null!=t){switch(t.str){case"-":e.currentClause.presence=I.Query.presence.PROHIBITED;break;case"+":e.currentClause.presence=I.Query.presence.REQUIRED;break;default:var r="unrecognised presence operator'"+t.str+"'";throw new I.QueryParseError(r,t.start,t.end)}var n=e.peekLexeme();if(null==n){r="expecting term or field, found nothing";throw new I.QueryParseError(r,t.start,t.end)}switch(n.type){case I.QueryLexer.FIELD:return I.QueryParser.parseField;case I.QueryLexer.TERM:return I.QueryParser.parseTerm;default:r="expecting term or field, found '"+n.type+"'";throw new I.QueryParseError(r,n.start,n.end)}}},I.QueryParser.parseField=function(e){var t=e.consumeLexeme();if(null!=t){if(-1==e.query.allFields.indexOf(t.str)){var r=e.query.allFields.map((function(e){return"'"+e+"'"})).join(", "),n="unrecognised field '"+t.str+"', possible fields: "+r;throw new I.QueryParseError(n,t.start,t.end)}e.currentClause.fields=[t.str];var i=e.peekLexeme();if(null==i){n="expecting term, found nothing";throw new I.QueryParseError(n,t.start,t.end)}switch(i.type){case I.QueryLexer.TERM:return I.QueryParser.parseTerm;default:n="expecting term, found '"+i.type+"'";throw new I.QueryParseError(n,i.start,i.end)}}},I.QueryParser.parseTerm=function(e){var t=e.consumeLexeme();if(null!=t){e.currentClause.term=t.str.toLowerCase(),-1!=t.str.indexOf("*")&&(e.currentClause.usePipeline=!1);var r=e.peekLexeme();if(null!=r)switch(r.type){case I.QueryLexer.TERM:return e.nextClause(),I.QueryParser.parseTerm;case I.QueryLexer.FIELD:return e.nextClause(),I.QueryParser.parseField;case I.QueryLexer.EDIT_DISTANCE:return I.QueryParser.parseEditDistance;case I.QueryLexer.BOOST:return I.QueryParser.parseBoost;case I.QueryLexer.PRESENCE:return e.nextClause(),I.QueryParser.parsePresence;default:var n="Unexpected lexeme type '"+r.type+"'";throw new I.QueryParseError(n,r.start,r.end)}else e.nextClause()}},I.QueryParser.parseEditDistance=function(e){var t=e.consumeLexeme();if(null!=t){var r=parseInt(t.str,10);if(isNaN(r)){var n="edit distance must be numeric";throw new I.QueryParseError(n,t.start,t.end)}e.currentClause.editDistance=r;var i=e.peekLexeme();if(null!=i)switch(i.type){case I.QueryLexer.TERM:return e.nextClause(),I.QueryParser.parseTerm;case I.QueryLexer.FIELD:return e.nextClause(),I.QueryParser.parseField;case I.QueryLexer.EDIT_DISTANCE:return I.QueryParser.parseEditDistance;case I.QueryLexer.BOOST:return I.QueryParser.parseBoost;case I.QueryLexer.PRESENCE:return e.nextClause(),I.QueryParser.parsePresence;default:n="Unexpected lexeme type '"+i.type+"'";throw new I.QueryParseError(n,i.start,i.end)}else e.nextClause()}},I.QueryParser.parseBoost=function(e){var t=e.consumeLexeme();if(null!=t){var r=parseInt(t.str,10);if(isNaN(r)){var n="boost must be numeric";throw new I.QueryParseError(n,t.start,t.end)}e.currentClause.boost=r;var i=e.peekLexeme();if(null!=i)switch(i.type){case I.QueryLexer.TERM:return e.nextClause(),I.QueryParser.parseTerm;case I.QueryLexer.FIELD:return 
e.nextClause(),I.QueryParser.parseField;case I.QueryLexer.EDIT_DISTANCE:return I.QueryParser.parseEditDistance;case I.QueryLexer.BOOST:return I.QueryParser.parseBoost;case I.QueryLexer.PRESENCE:return e.nextClause(),I.QueryParser.parsePresence;default:n="Unexpected lexeme type '"+i.type+"'";throw new I.QueryParseError(n,i.start,i.end)}else e.nextClause()}},void 0===(i="function"==typeof(n=function(){return I})?n.call(t,r,t,e):n)||(e.exports=i)}()},function(e,t,r){"use strict";r.r(t),r.d(t,"handler",(function(){return h}));var n=function(){return(n=Object.assign||function(e){for(var t,r=1,n=arguments.length;r=e.length&&(e=void 0),{value:e&&e[n++],done:!e}}};throw new TypeError(t?"Object is not iterable.":"Symbol.iterator is not defined.")}function s(e,t){var r="function"==typeof Symbol&&e[Symbol.iterator];if(!r)return e;var n,i,s=r.call(e),o=[];try{for(;(void 0===t||t-- >0)&&!(n=s.next()).done;)o.push(n.value)}catch(e){i={error:e}}finally{try{n&&!n.done&&(r=s.return)&&r.call(s)}finally{if(i)throw i.error}}return o}function o(){for(var e=[],t=0;t"+r+""};return function(i){i=i.replace(/[\s*+-:~^]+/g," ").trim();var s=new RegExp("(^|"+e.separator+")("+i.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(t,"|")+")","img");return function(e){return n(n({},e),{title:e.title.replace(s,r),text:e.text.replace(s,r)})}}}(t),this.index=void 0===l?lunr((function(){var e,n,s,a,l;u=u||["trimmer","stopWordFilter"],this.pipeline.reset();try{for(var c=i(u),h=c.next();!h.done;h=c.next()){var d=h.value;this.pipeline.add(lunr[d])}}catch(t){e={error:t}}finally{try{h&&!h.done&&(n=c.return)&&n.call(c)}finally{if(e)throw e.error}}1===t.lang.length&&"en"!==t.lang[0]?this.use(lunr[t.lang[0]]):t.lang.length>1&&this.use((s=lunr).multiLanguage.apply(s,o(t.lang))),this.field("title",{boost:1e3}),this.field("text"),this.ref("location");try{for(var f=i(r),p=f.next();!p.done;p=f.next()){var y=p.value;this.add(y)}}catch(e){a={error:e}}finally{try{p&&!p.done&&(l=f.return)&&l.call(f)}finally{if(a)throw a.error}}})):lunr.Index.load("string"==typeof l?JSON.parse(l):l)}return e.prototype.query=function(e){var t=this;if(e)try{var r=this.index.search(e).reduce((function(e,r){var n=t.documents.get(r.ref);if(void 0!==n)if("parent"in n){var i=n.parent.location;e.set(i,o(e.get(i)||[],[r]))}else{i=n.location;e.set(i,e.get(i)||[])}return e}),new Map),n=this.highlight(e);return o(r).map((function(e){var r=s(e,2),i=r[0],o=r[1];return{article:n(t.documents.get(i)),sections:o.map((function(e){return n(t.documents.get(e.ref))}))}}))}catch(t){console.warn("Invalid query: "+e+" – see https://bit.ly/2s3ChXG")}return[]},e}();function h(e){switch(e.type){case u.SETUP:return function(e){var t,r,n="../lunr",s=[];try{for(var a=i(e.lang),u=a.next();!u.done;u=a.next()){var l=u.value;"ja"===l&&s.push(n+"/tinyseg.min.js"),"en"!==l&&s.push(n+"/min/lunr."+l+".min.js")}}catch(e){t={error:e}}finally{try{u&&!u.done&&(r=a.return)&&r.call(a)}finally{if(t)throw t.error}}e.lang.length>1&&s.push(n+"/min/lunr.multi.min.js"),s.length&&importScripts.apply(void 0,o([n+"/min/lunr.stemmer.support.min.js"],s))}(e.data.config),l=new c(e.data),{type:u.READY};case u.QUERY:return{type:u.RESULT,data:l?l.query(e.data):[]};default:throw new TypeError("Invalid message type")}}!function(e){e[e.SETUP=0]="SETUP",e[e.READY=1]="READY",e[e.QUERY=2]="QUERY",e[e.RESULT=3]="RESULT"}(u||(u={})),addEventListener("message",(function(e){postMessage(h(e.data))}))}]); +//# sourceMappingURL=search.58d22e8e.min.js.map \ No newline at end of file diff --git 
a/assets/javascripts/worker/search.58d22e8e.min.js.map b/assets/javascripts/worker/search.58d22e8e.min.js.map new file mode 100644 index 00000000..177b4c3e --- /dev/null +++ b/assets/javascripts/worker/search.58d22e8e.min.js.map @@ -0,0 +1 @@ +{"version":3,"sources":["webpack:///webpack/bootstrap","webpack:///./node_modules/escape-html/index.js","webpack:///./node_modules/lunr/lunr.js-exposed","webpack:///(webpack)/buildin/global.js","webpack:///./node_modules/lunr/lunr.js","webpack:///./node_modules/tslib/tslib.es6.js","webpack:///./src/assets/javascripts/integrations/search/_/index.ts","webpack:///./src/assets/javascripts/integrations/search/worker/message/index.ts","webpack:///./src/assets/javascripts/integrations/search/worker/main/index.ts","webpack:///./src/assets/javascripts/integrations/search/document/index.ts","webpack:///./src/assets/javascripts/integrations/search/highlighter/index.ts"],"names":["installedModules","__webpack_require__","moduleId","exports","module","i","l","modules","call","m","c","d","name","getter","o","Object","defineProperty","enumerable","get","r","Symbol","toStringTag","value","t","mode","__esModule","ns","create","key","bind","n","object","property","prototype","hasOwnProperty","p","s","matchHtmlRegExp","string","escape","str","match","exec","html","index","lastIndex","length","charCodeAt","substring","g","this","Function","e","window","global","step2list","step3list","v","C","re_mgr0","re_mgr1","re_meq1","re_s_v","re_1a","re2_1a","re_1b","re2_1b","re_1b_2","re2_1b_2","re3_1b_2","re4_1b_2","re_1c","re_2","re_3","re_4","re2_4","re_5","re_5_1","re3_5","porterStemmer","lunr","config","builder","Builder","pipeline","add","trimmer","stopWordFilter","stemmer","searchPipeline","build","version","utils","warn","message","console","asString","obj","toString","clone","keys","val","Array","isArray","slice","TypeError","FieldRef","docRef","fieldName","stringValue","_stringValue","joiner","fromString","indexOf","fieldRef","undefined","Set","elements","complete","intersect","other","union","contains","empty","a","b","intersection","element","push","concat","idf","posting","documentCount","documentsWithTerm","x","Math","log","abs","Token","metadata","update","fn","tokenizer","map","toLowerCase","len","tokens","sliceEnd","sliceStart","sliceLength","charAt","separator","tokenMetadata","Pipeline","_stack","registeredFunctions","registerFunction","label","warnIfFunctionNotRegistered","load","serialised","forEach","fnName","Error","fns","arguments","after","existingFn","newFn","pos","splice","before","remove","run","stackLength","memo","j","result","k","runString","token","reset","toJSON","Vector","_magnitude","positionForIndex","start","end","pivotPoint","floor","pivotIndex","insert","insertIdx","upsert","position","magnitude","sumOfSquares","elementsLength","sqrt","dot","otherVector","dotProduct","aLen","bLen","aVal","bVal","similarity","toArray","output","RegExp","w","stem","suffix","firstch","re","re2","re3","re4","substr","toUpperCase","test","replace","fp","generateStopWordFilter","stopWords","words","reduce","stopWord","TokenSet","final","edges","id","_nextId","fromArray","arr","finish","root","fromClause","clause","fromFuzzyString","term","editDistance","stack","node","editsRemaining","frame","pop","noEditNode","char","insertionNode","substitutionNode","transposeNode","charA","charB","next","prefix","edge","_str","labels","sort","qNode","qEdges","qLen","nEdges","nLen","q","qEdge","nEdge","previousWord","uncheckedNodes","minimizedNodes","word","commonPrefix","minimize
","child","nextNode","parent","downTo","childKey","Index","attrs","invertedIndex","fieldVectors","tokenSet","fields","search","queryString","query","QueryParser","parse","Query","matchingFields","queryVectors","termFieldCache","requiredMatches","prohibitedMatches","clauses","terms","clauseMatches","usePipeline","termTokenSet","expandedTerms","presence","REQUIRED","field","expandedTerm","termIndex","_index","fieldPosting","matchingDocumentRefs","termField","matchingDocumentsSet","PROHIBITED","boost","fieldMatch","matchingDocumentRef","matchingFieldRef","MatchData","allRequiredMatches","allProhibitedMatches","matchingFieldRefs","results","matches","isNegated","docMatch","fieldVector","score","matchData","combine","ref","serializedIndex","serializedVectors","serializedInvertedIndex","tokenSetBuilder","tuple","_ref","_fields","_documents","fieldTermFrequencies","fieldLengths","_b","_k1","metadataWhitelist","attributes","RangeError","number","k1","doc","extractor","fieldTerms","metadataKey","calculateAverageFieldLengths","fieldRefs","numberOfFields","accumulator","documentsWithField","averageFieldLength","createFieldVectors","fieldRefsLength","termIdfCache","fieldLength","termFrequencies","termsLength","fieldBoost","docBoost","scoreWithPrecision","tf","round","createTokenSet","use","args","unshift","apply","clonedMetadata","metadataKeys","otherMatchData","allFields","wildcard","String","NONE","LEADING","TRAILING","OPTIONAL","options","QueryParseError","QueryLexer","lexemes","escapeCharPositions","state","lexText","sliceString","subSlices","join","emit","type","escapeCharacter","EOS","width","ignore","backup","acceptDigitRun","charCode","more","FIELD","TERM","EDIT_DISTANCE","BOOST","PRESENCE","lexField","lexer","lexTerm","lexEditDistance","lexBoost","lexEOS","termSeparator","currentClause","lexemeIdx","parseClause","peekLexeme","consumeLexeme","lexeme","nextClause","completedClause","parser","parsePresence","parseField","parseTerm","errorMessage","nextLexeme","possibleFields","f","parseEditDistance","parseBoost","parseInt","isNaN","__assign","assign","__values","iterator","done","__read","ar","error","__spread","SearchMessageType","docs","documents","Map","path","hash","location","title","text","linked","set","setupSearchDocumentMap","highlight","_","data","trim","document","setupSearchHighlighter","lang","multiLanguage","JSON","groups","sections","article","section","err","handler","SETUP","base","scripts","importScripts","setupLunrLanguages","READY","QUERY","RESULT","addEventListener","ev","postMessage"],"mappings":"aACE,IAAIA,EAAmB,GAGvB,SAASC,EAAoBC,GAG5B,GAAGF,EAAiBE,GACnB,OAAOF,EAAiBE,GAAUC,QAGnC,IAAIC,EAASJ,EAAiBE,GAAY,CACzCG,EAAGH,EACHI,GAAG,EACHH,QAAS,IAUV,OANAI,EAAQL,GAAUM,KAAKJ,EAAOD,QAASC,EAAQA,EAAOD,QAASF,GAG/DG,EAAOE,GAAI,EAGJF,EAAOD,QAKfF,EAAoBQ,EAAIF,EAGxBN,EAAoBS,EAAIV,EAGxBC,EAAoBU,EAAI,SAASR,EAASS,EAAMC,GAC3CZ,EAAoBa,EAAEX,EAASS,IAClCG,OAAOC,eAAeb,EAASS,EAAM,CAAEK,YAAY,EAAMC,IAAKL,KAKhEZ,EAAoBkB,EAAI,SAAShB,GACX,oBAAXiB,QAA0BA,OAAOC,aAC1CN,OAAOC,eAAeb,EAASiB,OAAOC,YAAa,CAAEC,MAAO,WAE7DP,OAAOC,eAAeb,EAAS,aAAc,CAAEmB,OAAO,KAQvDrB,EAAoBsB,EAAI,SAASD,EAAOE,GAEvC,GADU,EAAPA,IAAUF,EAAQrB,EAAoBqB,IAC/B,EAAPE,EAAU,OAAOF,EACpB,GAAW,EAAPE,GAA8B,iBAAVF,GAAsBA,GAASA,EAAMG,WAAY,OAAOH,EAChF,IAAII,EAAKX,OAAOY,OAAO,MAGvB,GAFA1B,EAAoBkB,EAAEO,GACtBX,OAAOC,eAAeU,EAAI,UAAW,CAAET,YAAY,EAAMK,MAAOA,IACtD,EAAPE,GAA4B,iBAATF,EAAmB,IAAI,IAAIM,KAAON,EAAOrB,EAAoBU,EAAEe,EAAIE,EAAK,SAASA,GAAO,OAAON,EAAMM,IAAQC,KAAK,KAAMD,IAC9I,OAAOF,GAIRzB,EAAoB6B,EAAI,SAAS1B,GAChC,IAAIS,EAAST,GAAUA,EAAOqB,WAC7B,WAAwB,OAA
OrB,EAAgB,SAC/C,WAA8B,OAAOA,GAEtC,OADAH,EAAoBU,EAAEE,EAAQ,IAAKA,GAC5BA,GAIRZ,EAAoBa,EAAI,SAASiB,EAAQC,GAAY,OAAOjB,OAAOkB,UAAUC,eAAe1B,KAAKuB,EAAQC,IAGzG/B,EAAoBkC,EAAI,GAIjBlC,EAAoBA,EAAoBmC,EAAI,G;;;;;;;GCnErD,IAAIC,EAAkB,UAOtBjC,EAAOD,QAUP,SAAoBmC,GAClB,IAOIC,EAPAC,EAAM,GAAKF,EACXG,EAAQJ,EAAgBK,KAAKF,GAEjC,IAAKC,EACH,OAAOD,EAIT,IAAIG,EAAO,GACPC,EAAQ,EACRC,EAAY,EAEhB,IAAKD,EAAQH,EAAMG,MAAOA,EAAQJ,EAAIM,OAAQF,IAAS,CACrD,OAAQJ,EAAIO,WAAWH,IACrB,KAAK,GACHL,EAAS,SACT,MACF,KAAK,GACHA,EAAS,QACT,MACF,KAAK,GACHA,EAAS,QACT,MACF,KAAK,GACHA,EAAS,OACT,MACF,KAAK,GACHA,EAAS,OACT,MACF,QACE,SAGAM,IAAcD,IAChBD,GAAQH,EAAIQ,UAAUH,EAAWD,IAGnCC,EAAYD,EAAQ,EACpBD,GAAQJ,EAGV,OAAOM,IAAcD,EACjBD,EAAOH,EAAIQ,UAAUH,EAAWD,GAChCD,I,iBC5EN,YAAAvC,EAAA,eAAkC,EAAQ,K,+BCA1C,IAAI6C,EAGJA,EAAI,WACH,OAAOC,KADJ,GAIJ,IAECD,EAAIA,GAAK,IAAIE,SAAS,cAAb,GACR,MAAOC,GAEc,iBAAXC,SAAqBJ,EAAII,QAOrCjD,EAAOD,QAAU8C,G,gBCnBjB;;;;;IAMC,WAiCD,IAoC6BK,EAw2BvBC,EAwBFC,EAWAC,EACAC,EAQEC,EACAC,EACAC,EACAC,EAEAC,EACAC,EACAC,EACAC,EACAC,EACAC,EACAC,EACAC,EAEAC,EACAC,EAEAC,EAEAC,EACAC,EAEAC,EACAC,EACAC,EAEAC,EAl9BFC,EAAO,SAAUC,GACnB,IAAIC,EAAU,IAAIF,EAAKG,QAavB,OAXAD,EAAQE,SAASC,IACfL,EAAKM,QACLN,EAAKO,eACLP,EAAKQ,SAGPN,EAAQO,eAAeJ,IACrBL,EAAKQ,SAGPP,EAAOzE,KAAK0E,EAASA,GACdA,EAAQQ,SAGjBV,EAAKW,QAAU;;;;IAUfX,EAAKY,MAAQ,GASbZ,EAAKY,MAAMC,MAAkBvC,EAQ1BJ,KANM,SAAU4C,GACXxC,EAAOyC,SAAWA,QAAQF,MAC5BE,QAAQF,KAAKC,KAiBnBd,EAAKY,MAAMI,SAAW,SAAUC,GAC9B,OAAIA,QACK,GAEAA,EAAIC,YAoBflB,EAAKY,MAAMO,MAAQ,SAAUF,GAC3B,GAAIA,QACF,OAAOA,EAMT,IAHA,IAAIE,EAAQpF,OAAOY,OAAO,MACtByE,EAAOrF,OAAOqF,KAAKH,GAEd5F,EAAI,EAAGA,EAAI+F,EAAKtD,OAAQzC,IAAK,CACpC,IAAIuB,EAAMwE,EAAK/F,GACXgG,EAAMJ,EAAIrE,GAEd,GAAI0E,MAAMC,QAAQF,GAChBF,EAAMvE,GAAOyE,EAAIG,YADnB,CAKA,GAAmB,iBAARH,GACQ,iBAARA,GACQ,kBAARA,EAKX,MAAM,IAAII,UAAU,yDAJlBN,EAAMvE,GAAOyE,GAOjB,OAAOF,GAETnB,EAAK0B,SAAW,SAAUC,EAAQC,EAAWC,GAC3C3D,KAAKyD,OAASA,EACdzD,KAAK0D,UAAYA,EACjB1D,KAAK4D,aAAeD,GAGtB7B,EAAK0B,SAASK,OAAS,IAEvB/B,EAAK0B,SAASM,WAAa,SAAU5E,GACnC,IAAIN,EAAIM,EAAE6E,QAAQjC,EAAK0B,SAASK,QAEhC,IAAW,IAAPjF,EACF,KAAM,6BAGR,IAAIoF,EAAW9E,EAAEoE,MAAM,EAAG1E,GACtB6E,EAASvE,EAAEoE,MAAM1E,EAAI,GAEzB,OAAO,IAAIkD,EAAK0B,SAAUC,EAAQO,EAAU9E,IAG9C4C,EAAK0B,SAASzE,UAAUiE,SAAW,WAKjC,OAJyBiB,MAArBjE,KAAK4D,eACP5D,KAAK4D,aAAe5D,KAAK0D,UAAY5B,EAAK0B,SAASK,OAAS7D,KAAKyD,QAG5DzD,KAAK4D;;;;IAYd9B,EAAKoC,IAAM,SAAUC,GAGnB,GAFAnE,KAAKmE,SAAWtG,OAAOY,OAAO,MAE1B0F,EAAU,CACZnE,KAAKJ,OAASuE,EAASvE,OAEvB,IAAK,IAAIzC,EAAI,EAAGA,EAAI6C,KAAKJ,OAAQzC,IAC/B6C,KAAKmE,SAASA,EAAShH,KAAM,OAG/B6C,KAAKJ,OAAS,GAWlBkC,EAAKoC,IAAIE,SAAW,CAClBC,UAAW,SAAUC,GACnB,OAAOA,GAGTC,MAAO,SAAUD,GACf,OAAOA,GAGTE,SAAU,WACR,OAAO,IAWX1C,EAAKoC,IAAIO,MAAQ,CACfJ,UAAW,WACT,OAAOrE,MAGTuE,MAAO,SAAUD,GACf,OAAOA,GAGTE,SAAU,WACR,OAAO,IAUX1C,EAAKoC,IAAInF,UAAUyF,SAAW,SAAU3F,GACtC,QAASmB,KAAKmE,SAAStF,IAWzBiD,EAAKoC,IAAInF,UAAUsF,UAAY,SAAUC,GACvC,IAAII,EAAGC,EAAGR,EAAUS,EAAe,GAEnC,GAAIN,IAAUxC,EAAKoC,IAAIE,SACrB,OAAOpE,KAGT,GAAIsE,IAAUxC,EAAKoC,IAAIO,MACrB,OAAOH,EAGLtE,KAAKJ,OAAS0E,EAAM1E,QACtB8E,EAAI1E,KACJ2E,EAAIL,IAEJI,EAAIJ,EACJK,EAAI3E,MAGNmE,EAAWtG,OAAOqF,KAAKwB,EAAEP,UAEzB,IAAK,IAAIhH,EAAI,EAAGA,EAAIgH,EAASvE,OAAQzC,IAAK,CACxC,IAAI0H,EAAUV,EAAShH,GACnB0H,KAAWF,EAAER,UACfS,EAAaE,KAAKD,GAItB,OAAO,IAAI/C,EAAKoC,IAAKU,IAUvB9C,EAAKoC,IAAInF,UAAUwF,MAAQ,SAAUD,GACnC,OAAIA,IAAUxC,EAAKoC,IAAIE,SACdtC,EAAKoC,IAAIE,SAGdE,IAAUxC,EAAKoC,IAAIO,MACdzE,KAGF,IAAI8B,EAAKoC,IAAIrG,OAAOqF,KAAKlD,KAAKmE,UAAUY,OAAOlH,OAAOqF,KAAKoB,EAAMH,aAU1ErC,EAAKkD,IAAM,SAAUC,EAASC,GAC5B,IAAIC,EAAoB,EAExB,IAAK,IAAIzB,KAAauB,EACH,UAAbvB,IACJyB,GAAqBtH,OAAOqF,KAAK+B,EAAQvB,IAAY9D,QAGvD,IAAIwF,GAAKF,EAAgBC,EAAoB,
KAAQA,EAAoB,IAEzE,OAAOE,KAAKC,IAAI,EAAID,KAAKE,IAAIH,KAW/BtD,EAAK0D,MAAQ,SAAUlG,EAAKmG,GAC1BzF,KAAKV,IAAMA,GAAO,GAClBU,KAAKyF,SAAWA,GAAY,IAQ9B3D,EAAK0D,MAAMzG,UAAUiE,SAAW,WAC9B,OAAOhD,KAAKV,KAuBdwC,EAAK0D,MAAMzG,UAAU2G,OAAS,SAAUC,GAEtC,OADA3F,KAAKV,IAAMqG,EAAG3F,KAAKV,IAAKU,KAAKyF,UACtBzF,MAUT8B,EAAK0D,MAAMzG,UAAUkE,MAAQ,SAAU0C,GAErC,OADAA,EAAKA,GAAM,SAAUzG,GAAK,OAAOA,GAC1B,IAAI4C,EAAK0D,MAAOG,EAAG3F,KAAKV,IAAKU,KAAKyF,UAAWzF,KAAKyF;;;;IAyB3D3D,EAAK8D,UAAY,SAAU7C,EAAK0C,GAC9B,GAAW,MAAP1C,GAAsBkB,MAAPlB,EACjB,MAAO,GAGT,GAAIK,MAAMC,QAAQN,GAChB,OAAOA,EAAI8C,KAAI,SAAUxH,GACvB,OAAO,IAAIyD,EAAK0D,MACd1D,EAAKY,MAAMI,SAASzE,GAAGyH,cACvBhE,EAAKY,MAAMO,MAAMwC,OASvB,IAJA,IAAInG,EAAMyD,EAAIC,WAAW8C,cACrBC,EAAMzG,EAAIM,OACVoG,EAAS,GAEJC,EAAW,EAAGC,EAAa,EAAGD,GAAYF,EAAKE,IAAY,CAClE,IACIE,EAAcF,EAAWC,EAE7B,GAHW5G,EAAI8G,OAAOH,GAGZ1G,MAAMuC,EAAK8D,UAAUS,YAAcJ,GAAYF,EAAM,CAE7D,GAAII,EAAc,EAAG,CACnB,IAAIG,EAAgBxE,EAAKY,MAAMO,MAAMwC,IAAa,GAClDa,EAAwB,SAAI,CAACJ,EAAYC,GACzCG,EAAqB,MAAIN,EAAOpG,OAEhCoG,EAAOlB,KACL,IAAIhD,EAAK0D,MACPlG,EAAIgE,MAAM4C,EAAYD,GACtBK,IAKNJ,EAAaD,EAAW,GAK5B,OAAOD,GAUTlE,EAAK8D,UAAUS,UAAY;;;;IAmC3BvE,EAAKyE,SAAW,WACdvG,KAAKwG,OAAS,IAGhB1E,EAAKyE,SAASE,oBAAsB5I,OAAOY,OAAO,MAmClDqD,EAAKyE,SAASG,iBAAmB,SAAUf,EAAIgB,GACzCA,KAAS3G,KAAKyG,qBAChB3E,EAAKY,MAAMC,KAAK,6CAA+CgE,GAGjEhB,EAAGgB,MAAQA,EACX7E,EAAKyE,SAASE,oBAAoBd,EAAGgB,OAAShB,GAShD7D,EAAKyE,SAASK,4BAA8B,SAAUjB,GACjCA,EAAGgB,OAAUhB,EAAGgB,SAAS3G,KAAKyG,qBAG/C3E,EAAKY,MAAMC,KAAK,kGAAmGgD,IAcvH7D,EAAKyE,SAASM,KAAO,SAAUC,GAC7B,IAAI5E,EAAW,IAAIJ,EAAKyE,SAYxB,OAVAO,EAAWC,SAAQ,SAAUC,GAC3B,IAAIrB,EAAK7D,EAAKyE,SAASE,oBAAoBO,GAE3C,IAAIrB,EAGF,MAAM,IAAIsB,MAAM,sCAAwCD,GAFxD9E,EAASC,IAAIwD,MAMVzD,GAUTJ,EAAKyE,SAASxH,UAAUoD,IAAM,WAC5B,IAAI+E,EAAM9D,MAAMrE,UAAUuE,MAAMhG,KAAK6J,WAErCD,EAAIH,SAAQ,SAAUpB,GACpB7D,EAAKyE,SAASK,4BAA4BjB,GAC1C3F,KAAKwG,OAAO1B,KAAKa,KAChB3F,OAYL8B,EAAKyE,SAASxH,UAAUqI,MAAQ,SAAUC,EAAYC,GACpDxF,EAAKyE,SAASK,4BAA4BU,GAE1C,IAAIC,EAAMvH,KAAKwG,OAAOzC,QAAQsD,GAC9B,IAAY,GAARE,EACF,MAAM,IAAIN,MAAM,0BAGlBM,GAAY,EACZvH,KAAKwG,OAAOgB,OAAOD,EAAK,EAAGD,IAY7BxF,EAAKyE,SAASxH,UAAU0I,OAAS,SAAUJ,EAAYC,GACrDxF,EAAKyE,SAASK,4BAA4BU,GAE1C,IAAIC,EAAMvH,KAAKwG,OAAOzC,QAAQsD,GAC9B,IAAY,GAARE,EACF,MAAM,IAAIN,MAAM,0BAGlBjH,KAAKwG,OAAOgB,OAAOD,EAAK,EAAGD,IAQ7BxF,EAAKyE,SAASxH,UAAU2I,OAAS,SAAU/B,GACzC,IAAI4B,EAAMvH,KAAKwG,OAAOzC,QAAQ4B,IAClB,GAAR4B,GAIJvH,KAAKwG,OAAOgB,OAAOD,EAAK,IAU1BzF,EAAKyE,SAASxH,UAAU4I,IAAM,SAAU3B,GAGtC,IAFA,IAAI4B,EAAc5H,KAAKwG,OAAO5G,OAErBzC,EAAI,EAAGA,EAAIyK,EAAazK,IAAK,CAIpC,IAHA,IAAIwI,EAAK3F,KAAKwG,OAAOrJ,GACjB0K,EAAO,GAEFC,EAAI,EAAGA,EAAI9B,EAAOpG,OAAQkI,IAAK,CACtC,IAAIC,EAASpC,EAAGK,EAAO8B,GAAIA,EAAG9B,GAE9B,GAAI+B,SAAmD,KAAXA,EAE5C,GAAI3E,MAAMC,QAAQ0E,GAChB,IAAK,IAAIC,EAAI,EAAGA,EAAID,EAAOnI,OAAQoI,IACjCH,EAAK/C,KAAKiD,EAAOC,SAGnBH,EAAK/C,KAAKiD,GAId/B,EAAS6B,EAGX,OAAO7B,GAaTlE,EAAKyE,SAASxH,UAAUkJ,UAAY,SAAU3I,EAAKmG,GACjD,IAAIyC,EAAQ,IAAIpG,EAAK0D,MAAOlG,EAAKmG,GAEjC,OAAOzF,KAAK2H,IAAI,CAACO,IAAQrC,KAAI,SAAUxH,GACrC,OAAOA,EAAE2E,eAQblB,EAAKyE,SAASxH,UAAUoJ,MAAQ,WAC9BnI,KAAKwG,OAAS,IAUhB1E,EAAKyE,SAASxH,UAAUqJ,OAAS,WAC/B,OAAOpI,KAAKwG,OAAOX,KAAI,SAAUF,GAG/B,OAFA7D,EAAKyE,SAASK,4BAA4BjB,GAEnCA,EAAGgB;;;;IAwBd7E,EAAKuG,OAAS,SAAUlE,GACtBnE,KAAKsI,WAAa,EAClBtI,KAAKmE,SAAWA,GAAY,IAc9BrC,EAAKuG,OAAOtJ,UAAUwJ,iBAAmB,SAAU7I,GAEjD,GAA4B,GAAxBM,KAAKmE,SAASvE,OAChB,OAAO,EAST,IANA,IAAI4I,EAAQ,EACRC,EAAMzI,KAAKmE,SAASvE,OAAS,EAC7BuG,EAAcsC,EAAMD,EACpBE,EAAarD,KAAKsD,MAAMxC,EAAc,GACtCyC,EAAa5I,KAAKmE,SAAsB,EAAbuE,GAExBvC,EAAc,IACfyC,EAAalJ,IACf8I,EAAQE,GAGNE,EAAalJ,IACf+I,EAAMC,GAGJE,GAAclJ,IAIlByG,EAAcsC,EAAMD,EACpBE,EAAaF,EAAQ
nD,KAAKsD,MAAMxC,EAAc,GAC9CyC,EAAa5I,KAAKmE,SAAsB,EAAbuE,GAG7B,OAAIE,GAAclJ,GAIdkJ,EAAalJ,EAHK,EAAbgJ,EAOLE,EAAalJ,EACW,GAAlBgJ,EAAa,QADvB,GAcF5G,EAAKuG,OAAOtJ,UAAU8J,OAAS,SAAUC,EAAW3F,GAClDnD,KAAK+I,OAAOD,EAAW3F,GAAK,WAC1B,KAAM,sBAYVrB,EAAKuG,OAAOtJ,UAAUgK,OAAS,SAAUD,EAAW3F,EAAKwC,GACvD3F,KAAKsI,WAAa,EAClB,IAAIU,EAAWhJ,KAAKuI,iBAAiBO,GAEjC9I,KAAKmE,SAAS6E,IAAaF,EAC7B9I,KAAKmE,SAAS6E,EAAW,GAAKrD,EAAG3F,KAAKmE,SAAS6E,EAAW,GAAI7F,GAE9DnD,KAAKmE,SAASqD,OAAOwB,EAAU,EAAGF,EAAW3F,IASjDrB,EAAKuG,OAAOtJ,UAAUkK,UAAY,WAChC,GAAIjJ,KAAKsI,WAAY,OAAOtI,KAAKsI,WAKjC,IAHA,IAAIY,EAAe,EACfC,EAAiBnJ,KAAKmE,SAASvE,OAE1BzC,EAAI,EAAGA,EAAIgM,EAAgBhM,GAAK,EAAG,CAC1C,IAAIgG,EAAMnD,KAAKmE,SAAShH,GACxB+L,GAAgB/F,EAAMA,EAGxB,OAAOnD,KAAKsI,WAAajD,KAAK+D,KAAKF,IASrCpH,EAAKuG,OAAOtJ,UAAUsK,IAAM,SAAUC,GAOpC,IANA,IAAIC,EAAa,EACb7E,EAAI1E,KAAKmE,SAAUQ,EAAI2E,EAAYnF,SACnCqF,EAAO9E,EAAE9E,OAAQ6J,EAAO9E,EAAE/E,OAC1B8J,EAAO,EAAGC,EAAO,EACjBxM,EAAI,EAAG2K,EAAI,EAER3K,EAAIqM,GAAQ1B,EAAI2B,IACrBC,EAAOhF,EAAEvH,KAAIwM,EAAOhF,EAAEmD,IAEpB3K,GAAK,EACIuM,EAAOC,EAChB7B,GAAK,EACI4B,GAAQC,IACjBJ,GAAc7E,EAAEvH,EAAI,GAAKwH,EAAEmD,EAAI,GAC/B3K,GAAK,EACL2K,GAAK,GAIT,OAAOyB,GAUTzH,EAAKuG,OAAOtJ,UAAU6K,WAAa,SAAUN,GAC3C,OAAOtJ,KAAKqJ,IAAIC,GAAetJ,KAAKiJ,aAAe,GAQrDnH,EAAKuG,OAAOtJ,UAAU8K,QAAU,WAG9B,IAFA,IAAIC,EAAS,IAAI1G,MAAOpD,KAAKmE,SAASvE,OAAS,GAEtCzC,EAAI,EAAG2K,EAAI,EAAG3K,EAAI6C,KAAKmE,SAASvE,OAAQzC,GAAK,EAAG2K,IACvDgC,EAAOhC,GAAK9H,KAAKmE,SAAShH,GAG5B,OAAO2M,GAQThI,EAAKuG,OAAOtJ,UAAUqJ,OAAS,WAC7B,OAAOpI,KAAKmE;;;;;IAoBdrC,EAAKQ,SACCjC,EAAY,CACZ,QAAY,MACZ,OAAW,OACX,KAAS,OACT,KAAS,OACT,KAAS,MACT,IAAQ,MACR,KAAS,KACT,MAAU,MACV,IAAQ,IACR,MAAU,MACV,QAAY,MACZ,MAAU,MACV,KAAS,MACT,MAAU,KACV,QAAY,MACZ,QAAY,MACZ,QAAY,MACZ,MAAU,KACV,MAAU,MACV,OAAW,MACX,KAAS,OAGXC,EAAY,CACV,MAAU,KACV,MAAU,GACV,MAAU,KACV,MAAU,KACV,KAAS,KACT,IAAQ,GACR,KAAS,IAIXC,EAAI,WACJC,EAAIhD,qBAQFiD,EAAU,IAAIsJ,OALT,4DAMLrJ,EAAU,IAAIqJ,OAJT,8FAKLpJ,EAAU,IAAIoJ,OANT,gFAOLnJ,EAAS,IAAImJ,OALT,kCAOJlJ,EAAQ,kBACRC,EAAS,iBACTC,EAAQ,aACRC,EAAS,kBACTC,EAAU,KACVC,EAAW,cACXC,EAAW,IAAI4I,OAAO,sBACtB3I,EAAW,IAAI2I,OAAO,IAAMvJ,EAAID,EAAI,gBAEpCc,EAAQ,mBACRC,EAAO,2IAEPC,EAAO,iDAEPC,EAAO,sFACPC,EAAQ,oBAERC,EAAO,WACPC,EAAS,MACTC,EAAQ,IAAImI,OAAO,IAAMvJ,EAAID,EAAI,gBAEjCsB,EAAgB,SAAuBmI,GACzC,IAAIC,EACFC,EACAC,EACAC,EACAC,EACAC,EACAC,EAEF,GAAIP,EAAEpK,OAAS,EAAK,OAAOoK,EAiB3B,GAde,MADfG,EAAUH,EAAEQ,OAAO,EAAE,MAEnBR,EAAIG,EAAQM,cAAgBT,EAAEQ,OAAO,IAKvCH,EAAMvJ,GADNsJ,EAAKvJ,GAGE6J,KAAKV,GAAMA,EAAIA,EAAEW,QAAQP,EAAG,QAC1BC,EAAIK,KAAKV,KAAMA,EAAIA,EAAEW,QAAQN,EAAI,SAI1CA,EAAMrJ,GADNoJ,EAAKrJ,GAEE2J,KAAKV,GAAI,CACd,IAAIY,EAAKR,EAAG5K,KAAKwK,IACjBI,EAAK3J,GACEiK,KAAKE,EAAG,MACbR,EAAKnJ,EACL+I,EAAIA,EAAEW,QAAQP,EAAG,UAEVC,EAAIK,KAAKV,KAElBC,GADIW,EAAKP,EAAI7K,KAAKwK,IACR,IACVK,EAAMzJ,GACE8J,KAAKT,KAGXK,EAAMnJ,EACNoJ,EAAMnJ,GAFNiJ,EAAMnJ,GAGEwJ,KAJRV,EAAIC,GAIeD,GAAQ,IAClBM,EAAII,KAAKV,IAAMI,EAAKnJ,EAAS+I,EAAIA,EAAEW,QAAQP,EAAG,KAC9CG,EAAIG,KAAKV,KAAMA,GAAQ,OAiFpC,OA5EAI,EAAK/I,GACEqJ,KAAKV,KAGVA,GADAC,GADIW,EAAKR,EAAG5K,KAAKwK,IACP,IACC,MAIbI,EAAK9I,GACEoJ,KAAKV,KAEVC,GADIW,EAAKR,EAAG5K,KAAKwK,IACP,GACVE,EAASU,EAAG,IACZR,EAAK3J,GACEiK,KAAKT,KACVD,EAAIC,EAAO5J,EAAU6J,MAKzBE,EAAK7I,GACEmJ,KAAKV,KAEVC,GADIW,EAAKR,EAAG5K,KAAKwK,IACP,GACVE,EAASU,EAAG,IACZR,EAAK3J,GACEiK,KAAKT,KACVD,EAAIC,EAAO3J,EAAU4J,KAMzBG,EAAM5I,GADN2I,EAAK5I,GAEEkJ,KAAKV,IAEVC,GADIW,EAAKR,EAAG5K,KAAKwK,IACP,IACVI,EAAK1J,GACEgK,KAAKT,KACVD,EAAIC,IAEGI,EAAIK,KAAKV,KAElBC,GADIW,EAAKP,EAAI7K,KAAKwK,IACR,GAAKY,EAAG,IAClBP,EAAM3J,GACEgK,KAAKT,KACXD,EAAIC,KAKRG,EAAK1I,GACEgJ,KAAKV,KAEVC,GADIW,EAAKR,EAAG5K,KAAKwK,IACP,GAEVK,EAAM1J,EACN2J,EAAM1I,IA
FNwI,EAAK1J,GAGEgK,KAAKT,IAAUI,EAAIK,KAAKT,KAAWK,EAAII,KAAKT,MACjDD,EAAIC,IAKRI,EAAM3J,GADN0J,EAAKzI,GAEE+I,KAAKV,IAAMK,EAAIK,KAAKV,KACzBI,EAAKnJ,EACL+I,EAAIA,EAAEW,QAAQP,EAAG,KAKJ,KAAXD,IACFH,EAAIG,EAAQrE,cAAgBkE,EAAEQ,OAAO,IAGhCR,GAGF,SAAU9B,GACf,OAAOA,EAAMxC,OAAO7D,KAIxBC,EAAKyE,SAASG,iBAAiB5E,EAAKQ,QAAS;;;;IAmB7CR,EAAK+I,uBAAyB,SAAUC,GACtC,IAAIC,EAAQD,EAAUE,QAAO,SAAUnD,EAAMoD,GAE3C,OADApD,EAAKoD,GAAYA,EACVpD,IACN,IAEH,OAAO,SAAUK,GACf,GAAIA,GAAS6C,EAAM7C,EAAMlF,cAAgBkF,EAAMlF,WAAY,OAAOkF,IAiBtEpG,EAAKO,eAAiBP,EAAK+I,uBAAuB,CAChD,IACA,OACA,QACA,SACA,QACA,MACA,SACA,OACA,KACA,QACA,KACA,MACA,MACA,MACA,KACA,KACA,KACA,UACA,OACA,MACA,KACA,MACA,SACA,QACA,OACA,MACA,KACA,OACA,SACA,OACA,OACA,QACA,MACA,OACA,MACA,MACA,MACA,MACA,OACA,KACA,MACA,OACA,MACA,MACA,MACA,UACA,IACA,KACA,KACA,OACA,KACA,KACA,MACA,OACA,QACA,MACA,OACA,SACA,MACA,KACA,QACA,OACA,OACA,KACA,UACA,KACA,MACA,MACA,KACA,MACA,QACA,KACA,OACA,KACA,QACA,MACA,MACA,SACA,OACA,MACA,OACA,MACA,SACA,QACA,KACA,OACA,OACA,OACA,MACA,QACA,OACA,OACA,QACA,QACA,OACA,OACA,MACA,KACA,MACA,OACA,KACA,QACA,MACA,KACA,OACA,OACA,OACA,QACA,QACA,QACA,MACA,OACA,MACA,OACA,OACA,QACA,MACA,MACA,SAGF/I,EAAKyE,SAASG,iBAAiB5E,EAAKO,eAAgB;;;;IAqBpDP,EAAKM,QAAU,SAAU8F,GACvB,OAAOA,EAAMxC,QAAO,SAAUxG,GAC5B,OAAOA,EAAEyL,QAAQ,OAAQ,IAAIA,QAAQ,OAAQ,QAIjD7I,EAAKyE,SAASG,iBAAiB5E,EAAKM,QAAS;;;;IA2B7CN,EAAKoJ,SAAW,WACdlL,KAAKmL,OAAQ,EACbnL,KAAKoL,MAAQ,GACbpL,KAAKqL,GAAKvJ,EAAKoJ,SAASI,QACxBxJ,EAAKoJ,SAASI,SAAW,GAW3BxJ,EAAKoJ,SAASI,QAAU,EASxBxJ,EAAKoJ,SAASK,UAAY,SAAUC,GAGlC,IAFA,IAAIxJ,EAAU,IAAIF,EAAKoJ,SAASjJ,QAEvB9E,EAAI,EAAG4I,EAAMyF,EAAI5L,OAAQzC,EAAI4I,EAAK5I,IACzC6E,EAAQ6G,OAAO2C,EAAIrO,IAIrB,OADA6E,EAAQyJ,SACDzJ,EAAQ0J,MAYjB5J,EAAKoJ,SAASS,WAAa,SAAUC,GACnC,MAAI,iBAAkBA,EACb9J,EAAKoJ,SAASW,gBAAgBD,EAAOE,KAAMF,EAAOG,cAElDjK,EAAKoJ,SAASpH,WAAW8H,EAAOE,OAmB3ChK,EAAKoJ,SAASW,gBAAkB,SAAUvM,EAAKyM,GAS7C,IARA,IAAIL,EAAO,IAAI5J,EAAKoJ,SAEhBc,EAAQ,CAAC,CACXC,KAAMP,EACNQ,eAAgBH,EAChBzM,IAAKA,IAGA0M,EAAMpM,QAAQ,CACnB,IAAIuM,EAAQH,EAAMI,MAGlB,GAAID,EAAM7M,IAAIM,OAAS,EAAG,CACxB,IACIyM,EADAC,EAAOH,EAAM7M,IAAI8G,OAAO,GAGxBkG,KAAQH,EAAMF,KAAKb,MACrBiB,EAAaF,EAAMF,KAAKb,MAAMkB,IAE9BD,EAAa,IAAIvK,EAAKoJ,SACtBiB,EAAMF,KAAKb,MAAMkB,GAAQD,GAGH,GAApBF,EAAM7M,IAAIM,SACZyM,EAAWlB,OAAQ,GAGrBa,EAAMlH,KAAK,CACTmH,KAAMI,EACNH,eAAgBC,EAAMD,eACtB5M,IAAK6M,EAAM7M,IAAIgE,MAAM,KAIzB,GAA4B,GAAxB6I,EAAMD,eAAV,CAKA,GAAI,MAAOC,EAAMF,KAAKb,MACpB,IAAImB,EAAgBJ,EAAMF,KAAKb,MAAM,SAChC,CACDmB,EAAgB,IAAIzK,EAAKoJ,SAC7BiB,EAAMF,KAAKb,MAAM,KAAOmB,EAiC1B,GA9BwB,GAApBJ,EAAM7M,IAAIM,SACZ2M,EAAcpB,OAAQ,GAGxBa,EAAMlH,KAAK,CACTmH,KAAMM,EACNL,eAAgBC,EAAMD,eAAiB,EACvC5M,IAAK6M,EAAM7M,MAMT6M,EAAM7M,IAAIM,OAAS,GACrBoM,EAAMlH,KAAK,CACTmH,KAAME,EAAMF,KACZC,eAAgBC,EAAMD,eAAiB,EACvC5M,IAAK6M,EAAM7M,IAAIgE,MAAM,KAMD,GAApB6I,EAAM7M,IAAIM,SACZuM,EAAMF,KAAKd,OAAQ,GAMjBgB,EAAM7M,IAAIM,QAAU,EAAG,CACzB,GAAI,MAAOuM,EAAMF,KAAKb,MACpB,IAAIoB,EAAmBL,EAAMF,KAAKb,MAAM,SACnC,CACDoB,EAAmB,IAAI1K,EAAKoJ,SAChCiB,EAAMF,KAAKb,MAAM,KAAOoB,EAGF,GAApBL,EAAM7M,IAAIM,SACZ4M,EAAiBrB,OAAQ,GAG3Ba,EAAMlH,KAAK,CACTmH,KAAMO,EACNN,eAAgBC,EAAMD,eAAiB,EACvC5M,IAAK6M,EAAM7M,IAAIgE,MAAM,KAOzB,GAAI6I,EAAM7M,IAAIM,OAAS,EAAG,CACxB,IAEI6M,EAFAC,EAAQP,EAAM7M,IAAI8G,OAAO,GACzBuG,EAAQR,EAAM7M,IAAI8G,OAAO,GAGzBuG,KAASR,EAAMF,KAAKb,MACtBqB,EAAgBN,EAAMF,KAAKb,MAAMuB,IAEjCF,EAAgB,IAAI3K,EAAKoJ,SACzBiB,EAAMF,KAAKb,MAAMuB,GAASF,GAGJ,GAApBN,EAAM7M,IAAIM,SACZ6M,EAActB,OAAQ,GAGxBa,EAAMlH,KAAK,CACTmH,KAAMQ,EACNP,eAAgBC,EAAMD,eAAiB,EACvC5M,IAAKoN,EAAQP,EAAM7M,IAAIgE,MAAM,OAKnC,OAAOoI,GAaT5J,EAAKoJ,SAASpH,WAAa,SAAUxE,GAYnC,IAXA,IAAI2M,EAAO,IAAInK,EAAKoJ,SAChBQ,EAAOO,EAUF9O,EAAI,EAAG4I,EAAMzG,EA
AIM,OAAQzC,EAAI4I,EAAK5I,IAAK,CAC9C,IAAImP,EAAOhN,EAAInC,GACXgO,EAAShO,GAAK4I,EAAM,EAExB,GAAY,KAARuG,EACFL,EAAKb,MAAMkB,GAAQL,EACnBA,EAAKd,MAAQA,MAER,CACL,IAAIyB,EAAO,IAAI9K,EAAKoJ,SACpB0B,EAAKzB,MAAQA,EAEbc,EAAKb,MAAMkB,GAAQM,EACnBX,EAAOW,GAIX,OAAOlB,GAaT5J,EAAKoJ,SAASnM,UAAU8K,QAAU,WAQhC,IAPA,IAAIkB,EAAQ,GAERiB,EAAQ,CAAC,CACXa,OAAQ,GACRZ,KAAMjM,OAGDgM,EAAMpM,QAAQ,CACnB,IAAIuM,EAAQH,EAAMI,MACdhB,EAAQvN,OAAOqF,KAAKiJ,EAAMF,KAAKb,OAC/BrF,EAAMqF,EAAMxL,OAEZuM,EAAMF,KAAKd,QAKbgB,EAAMU,OAAOzG,OAAO,GACpB2E,EAAMjG,KAAKqH,EAAMU,SAGnB,IAAK,IAAI1P,EAAI,EAAGA,EAAI4I,EAAK5I,IAAK,CAC5B,IAAI2P,EAAO1B,EAAMjO,GAEjB6O,EAAMlH,KAAK,CACT+H,OAAQV,EAAMU,OAAO9H,OAAO+H,GAC5Bb,KAAME,EAAMF,KAAKb,MAAM0B,MAK7B,OAAO/B,GAaTjJ,EAAKoJ,SAASnM,UAAUiE,SAAW,WASjC,GAAIhD,KAAK+M,KACP,OAAO/M,KAAK+M,KAOd,IAJA,IAAIzN,EAAMU,KAAKmL,MAAQ,IAAM,IACzB6B,EAASnP,OAAOqF,KAAKlD,KAAKoL,OAAO6B,OACjClH,EAAMiH,EAAOpN,OAERzC,EAAI,EAAGA,EAAI4I,EAAK5I,IAAK,CAC5B,IAAIwJ,EAAQqG,EAAO7P,GAGnBmC,EAAMA,EAAMqH,EAFD3G,KAAKoL,MAAMzE,GAEG0E,GAG3B,OAAO/L,GAaTwC,EAAKoJ,SAASnM,UAAUsF,UAAY,SAAUM,GAU5C,IATA,IAAImF,EAAS,IAAIhI,EAAKoJ,SAClBiB,OAAQlI,EAER+H,EAAQ,CAAC,CACXkB,MAAOvI,EACPmF,OAAQA,EACRmC,KAAMjM,OAGDgM,EAAMpM,QAAQ,CACnBuM,EAAQH,EAAMI,MAWd,IALA,IAAIe,EAAStP,OAAOqF,KAAKiJ,EAAMe,MAAM9B,OACjCgC,EAAOD,EAAOvN,OACdyN,EAASxP,OAAOqF,KAAKiJ,EAAMF,KAAKb,OAChCkC,EAAOD,EAAOzN,OAET2N,EAAI,EAAGA,EAAIH,EAAMG,IAGxB,IAFA,IAAIC,EAAQL,EAAOI,GAEV3O,EAAI,EAAGA,EAAI0O,EAAM1O,IAAK,CAC7B,IAAI6O,EAAQJ,EAAOzO,GAEnB,GAAI6O,GAASD,GAAkB,KAATA,EAAc,CAClC,IAAIvB,EAAOE,EAAMF,KAAKb,MAAMqC,GACxBP,EAAQf,EAAMe,MAAM9B,MAAMoC,GAC1BrC,EAAQc,EAAKd,OAAS+B,EAAM/B,MAC5ByB,OAAO3I,EAEPwJ,KAAStB,EAAMrC,OAAOsB,OAIxBwB,EAAOT,EAAMrC,OAAOsB,MAAMqC,IACrBtC,MAAQyB,EAAKzB,OAASA,IAM3ByB,EAAO,IAAI9K,EAAKoJ,UACXC,MAAQA,EACbgB,EAAMrC,OAAOsB,MAAMqC,GAASb,GAG9BZ,EAAMlH,KAAK,CACToI,MAAOA,EACPpD,OAAQ8C,EACRX,KAAMA,MAOhB,OAAOnC,GAEThI,EAAKoJ,SAASjJ,QAAU,WACtBjC,KAAK0N,aAAe,GACpB1N,KAAK0L,KAAO,IAAI5J,EAAKoJ,SACrBlL,KAAK2N,eAAiB,GACtB3N,KAAK4N,eAAiB,IAGxB9L,EAAKoJ,SAASjJ,QAAQlD,UAAU8J,OAAS,SAAUgF,GACjD,IAAI5B,EACA6B,EAAe,EAEnB,GAAID,EAAO7N,KAAK0N,aACd,MAAM,IAAIzG,MAAO,+BAGnB,IAAK,IAAI9J,EAAI,EAAGA,EAAI0Q,EAAKjO,QAAUzC,EAAI6C,KAAK0N,aAAa9N,QACnDiO,EAAK1Q,IAAM6C,KAAK0N,aAAavQ,GAD8BA,IAE/D2Q,IAGF9N,KAAK+N,SAASD,GAGZ7B,EADgC,GAA9BjM,KAAK2N,eAAe/N,OACfI,KAAK0L,KAEL1L,KAAK2N,eAAe3N,KAAK2N,eAAe/N,OAAS,GAAGoO,MAG7D,IAAS7Q,EAAI2Q,EAAc3Q,EAAI0Q,EAAKjO,OAAQzC,IAAK,CAC/C,IAAI8Q,EAAW,IAAInM,EAAKoJ,SACpBoB,EAAOuB,EAAK1Q,GAEhB8O,EAAKb,MAAMkB,GAAQ2B,EAEnBjO,KAAK2N,eAAe7I,KAAK,CACvBoJ,OAAQjC,EACRK,KAAMA,EACN0B,MAAOC,IAGThC,EAAOgC,EAGThC,EAAKd,OAAQ,EACbnL,KAAK0N,aAAeG,GAGtB/L,EAAKoJ,SAASjJ,QAAQlD,UAAU0M,OAAS,WACvCzL,KAAK+N,SAAS,IAGhBjM,EAAKoJ,SAASjJ,QAAQlD,UAAUgP,SAAW,SAAUI,GACnD,IAAK,IAAIhR,EAAI6C,KAAK2N,eAAe/N,OAAS,EAAGzC,GAAKgR,EAAQhR,IAAK,CAC7D,IAAI8O,EAAOjM,KAAK2N,eAAexQ,GAC3BiR,EAAWnC,EAAK+B,MAAMhL,WAEtBoL,KAAYpO,KAAK4N,eACnB3B,EAAKiC,OAAO9C,MAAMa,EAAKK,MAAQtM,KAAK4N,eAAeQ,IAInDnC,EAAK+B,MAAMjB,KAAOqB,EAElBpO,KAAK4N,eAAeQ,GAAYnC,EAAK+B,OAGvChO,KAAK2N,eAAevB;;;;IAwBxBtK,EAAKuM,MAAQ,SAAUC,GACrBtO,KAAKuO,cAAgBD,EAAMC,cAC3BvO,KAAKwO,aAAeF,EAAME,aAC1BxO,KAAKyO,SAAWH,EAAMG,SACtBzO,KAAK0O,OAASJ,EAAMI,OACpB1O,KAAKkC,SAAWoM,EAAMpM,UA0ExBJ,EAAKuM,MAAMtP,UAAU4P,OAAS,SAAUC,GACtC,OAAO5O,KAAK6O,OAAM,SAAUA,GACb,IAAI/M,EAAKgN,YAAYF,EAAaC,GACxCE,YA6BXjN,EAAKuM,MAAMtP,UAAU8P,MAAQ,SAAUlJ,GAoBrC,IAZA,IAAIkJ,EAAQ,IAAI/M,EAAKkN,MAAMhP,KAAK0O,QAC5BO,EAAiBpR,OAAOY,OAAO,MAC/ByQ,EAAerR,OAAOY,OAAO,MAC7B0Q,EAAiBtR,OAAOY,OAAO,MAC/B2Q,EAAkBvR,OAAOY,OAAO,MAChC4Q,EAAoBxR,OAAOY,OAAO,MAO7BtB,EAAI,EAAGA,EAAI6C,KAAK0O,OAAO9O,OAAQzC,IACtC+R,EAAalP,KAAK0O,OAAOvR,IA
AM,IAAI2E,EAAKuG,OAG1C1C,EAAGrI,KAAKuR,EAAOA,GAEf,IAAS1R,EAAI,EAAGA,EAAI0R,EAAMS,QAAQ1P,OAAQzC,IAAK,CAS7C,IAAIyO,EAASiD,EAAMS,QAAQnS,GACvBoS,EAAQ,KACRC,EAAgB1N,EAAKoC,IAAIE,SAG3BmL,EADE3D,EAAO6D,YACDzP,KAAKkC,SAAS+F,UAAU2D,EAAOE,KAAM,CAC3C4C,OAAQ9C,EAAO8C,SAGT,CAAC9C,EAAOE,MAGlB,IAAK,IAAIvO,EAAI,EAAGA,EAAIgS,EAAM3P,OAAQrC,IAAK,CACrC,IAAIuO,EAAOyD,EAAMhS,GAQjBqO,EAAOE,KAAOA,EAOd,IAAI4D,EAAe5N,EAAKoJ,SAASS,WAAWC,GACxC+D,EAAgB3P,KAAKyO,SAASpK,UAAUqL,GAAc7F,UAQ1D,GAA6B,IAAzB8F,EAAc/P,QAAgBgM,EAAOgE,WAAa9N,EAAKkN,MAAMY,SAASC,SAAU,CAClF,IAAK,IAAI7H,EAAI,EAAGA,EAAI4D,EAAO8C,OAAO9O,OAAQoI,IAAK,CAE7CoH,EADIU,EAAQlE,EAAO8C,OAAO1G,IACDlG,EAAKoC,IAAIO,MAGpC,MAGF,IAAK,IAAIqD,EAAI,EAAGA,EAAI6H,EAAc/P,OAAQkI,IAKxC,KAAIiI,EAAeJ,EAAc7H,GAC7B7C,EAAUjF,KAAKuO,cAAcwB,GAC7BC,EAAY/K,EAAQgL,OAExB,IAASjI,EAAI,EAAGA,EAAI4D,EAAO8C,OAAO9O,OAAQoI,IAAK,CAS7C,IACIkI,EAAejL,EADf6K,EAAQlE,EAAO8C,OAAO1G,IAEtBmI,EAAuBtS,OAAOqF,KAAKgN,GACnCE,EAAYL,EAAe,IAAMD,EACjCO,EAAuB,IAAIvO,EAAKoC,IAAIiM,GAoBxC,GAbIvE,EAAOgE,UAAY9N,EAAKkN,MAAMY,SAASC,WACzCL,EAAgBA,EAAcjL,MAAM8L,QAELpM,IAA3BmL,EAAgBU,KAClBV,EAAgBU,GAAShO,EAAKoC,IAAIE,WASlCwH,EAAOgE,UAAY9N,EAAKkN,MAAMY,SAASU,YA4B3C,GANApB,EAAaY,GAAO/G,OAAOiH,EAAWpE,EAAO2E,OAAO,SAAU7L,EAAGC,GAAK,OAAOD,EAAIC,MAM7EwK,EAAeiB,GAAnB,CAIA,IAAK,IAAIhT,EAAI,EAAGA,EAAI+S,EAAqBvQ,OAAQxC,IAAK,CAOpD,IAGIoT,EAHAC,EAAsBN,EAAqB/S,GAC3CsT,EAAmB,IAAI5O,EAAK0B,SAAUiN,EAAqBX,GAC3DrK,EAAWyK,EAAaO,QAG4BxM,KAAnDuM,EAAavB,EAAeyB,IAC/BzB,EAAeyB,GAAoB,IAAI5O,EAAK6O,UAAWZ,EAAcD,EAAOrK,GAE5E+K,EAAWrO,IAAI4N,EAAcD,EAAOrK,GAKxC0J,EAAeiB,IAAa,aAnDOnM,IAA7BoL,EAAkBS,KACpBT,EAAkBS,GAAShO,EAAKoC,IAAIO,OAGtC4K,EAAkBS,GAAST,EAAkBS,GAAOvL,MAAM8L,KA0DlE,GAAIzE,EAAOgE,WAAa9N,EAAKkN,MAAMY,SAASC,SAC1C,IAAS7H,EAAI,EAAGA,EAAI4D,EAAO8C,OAAO9O,OAAQoI,IAAK,CAE7CoH,EADIU,EAAQlE,EAAO8C,OAAO1G,IACDoH,EAAgBU,GAAOzL,UAAUmL,IAUhE,IAAIoB,EAAqB9O,EAAKoC,IAAIE,SAC9ByM,EAAuB/O,EAAKoC,IAAIO,MAEpC,IAAStH,EAAI,EAAGA,EAAI6C,KAAK0O,OAAO9O,OAAQzC,IAAK,CAC3C,IAAI2S,EAEAV,EAFAU,EAAQ9P,KAAK0O,OAAOvR,MAGtByT,EAAqBA,EAAmBvM,UAAU+K,EAAgBU,KAGhET,EAAkBS,KACpBe,EAAuBA,EAAqBtM,MAAM8K,EAAkBS,KAIxE,IAAIgB,EAAoBjT,OAAOqF,KAAK+L,GAChC8B,EAAU,GACVC,EAAUnT,OAAOY,OAAO,MAY5B,GAAIoQ,EAAMoC,YAAa,CACrBH,EAAoBjT,OAAOqF,KAAKlD,KAAKwO,cAErC,IAASrR,EAAI,EAAGA,EAAI2T,EAAkBlR,OAAQzC,IAAK,CAC7CuT,EAAmBI,EAAkB3T,GAAzC,IACI6G,EAAWlC,EAAK0B,SAASM,WAAW4M,GACxCzB,EAAeyB,GAAoB,IAAI5O,EAAK6O,WAIhD,IAASxT,EAAI,EAAGA,EAAI2T,EAAkBlR,OAAQzC,IAAK,CASjD,IACIsG,GADAO,EAAWlC,EAAK0B,SAASM,WAAWgN,EAAkB3T,KACpCsG,OAEtB,GAAKmN,EAAmBpM,SAASf,KAI7BoN,EAAqBrM,SAASf,GAAlC,CAIA,IAEIyN,EAFAC,EAAcnR,KAAKwO,aAAaxK,GAChCoN,EAAQlC,EAAalL,EAASN,WAAWkG,WAAWuH,GAGxD,QAAqClN,KAAhCiN,EAAWF,EAAQvN,IACtByN,EAASE,OAASA,EAClBF,EAASG,UAAUC,QAAQrC,EAAejL,QACrC,CACL,IAAIzE,EAAQ,CACVgS,IAAK9N,EACL2N,MAAOA,EACPC,UAAWpC,EAAejL,IAE5BgN,EAAQvN,GAAUlE,EAClBwR,EAAQjM,KAAKvF,KAOjB,OAAOwR,EAAQ9D,MAAK,SAAUvI,EAAGC,GAC/B,OAAOA,EAAEyM,MAAQ1M,EAAE0M,UAYvBtP,EAAKuM,MAAMtP,UAAUqJ,OAAS,WAC5B,IAAImG,EAAgB1Q,OAAOqF,KAAKlD,KAAKuO,eAClCtB,OACApH,KAAI,SAAUiG,GACb,MAAO,CAACA,EAAM9L,KAAKuO,cAAczC,MAChC9L,MAEDwO,EAAe3Q,OAAOqF,KAAKlD,KAAKwO,cACjC3I,KAAI,SAAU0L,GACb,MAAO,CAACA,EAAKvR,KAAKwO,aAAa+C,GAAKnJ,YACnCpI,MAEL,MAAO,CACLyC,QAASX,EAAKW,QACdiM,OAAQ1O,KAAK0O,OACbF,aAAcA,EACdD,cAAeA,EACfrM,SAAUlC,KAAKkC,SAASkG,WAU5BtG,EAAKuM,MAAMxH,KAAO,SAAU2K,GAC1B,IAAIlD,EAAQ,GACRE,EAAe,GACfiD,EAAoBD,EAAgBhD,aACpCD,EAAgB1Q,OAAOY,OAAO,MAC9BiT,EAA0BF,EAAgBjD,cAC1CoD,EAAkB,IAAI7P,EAAKoJ,SAASjJ,QACpCC,EAAWJ,EAAKyE,SAASM,KAAK2K,EAAgBtP,UAE9CsP,EAAgB/O,SAAWX,EAAKW,SAClCX,EAAKY,MAAMC,KAAK,4EAA8Eb,EAAKW,QAAU,sCAAwC+O,EAAgB/O,QAAU,KAGjL,IAAK,IAAItF,EAAI,EAAGA,EAAI
sU,EAAkB7R,OAAQzC,IAAK,CACjD,IACIoU,GADAK,EAAQH,EAAkBtU,IACd,GACZgH,EAAWyN,EAAM,GAErBpD,EAAa+C,GAAO,IAAIzP,EAAKuG,OAAOlE,GAGtC,IAAShH,EAAI,EAAGA,EAAIuU,EAAwB9R,OAAQzC,IAAK,CACvD,IAAIyU,EACA9F,GADA8F,EAAQF,EAAwBvU,IACnB,GACb8H,EAAU2M,EAAM,GAEpBD,EAAgB9I,OAAOiD,GACvByC,EAAczC,GAAQ7G,EAYxB,OATA0M,EAAgBlG,SAEhB6C,EAAMI,OAAS8C,EAAgB9C,OAE/BJ,EAAME,aAAeA,EACrBF,EAAMC,cAAgBA,EACtBD,EAAMG,SAAWkD,EAAgBjG,KACjC4C,EAAMpM,SAAWA,EAEV,IAAIJ,EAAKuM,MAAMC;;;;IA+BxBxM,EAAKG,QAAU,WACbjC,KAAK6R,KAAO,KACZ7R,KAAK8R,QAAUjU,OAAOY,OAAO,MAC7BuB,KAAK+R,WAAalU,OAAOY,OAAO,MAChCuB,KAAKuO,cAAgB1Q,OAAOY,OAAO,MACnCuB,KAAKgS,qBAAuB,GAC5BhS,KAAKiS,aAAe,GACpBjS,KAAK4F,UAAY9D,EAAK8D,UACtB5F,KAAKkC,SAAW,IAAIJ,EAAKyE,SACzBvG,KAAKuC,eAAiB,IAAIT,EAAKyE,SAC/BvG,KAAKkF,cAAgB,EACrBlF,KAAKkS,GAAK,IACVlS,KAAKmS,IAAM,IACXnS,KAAKgQ,UAAY,EACjBhQ,KAAKoS,kBAAoB,IAe3BtQ,EAAKG,QAAQlD,UAAUwS,IAAM,SAAUA,GACrCvR,KAAK6R,KAAON,GAmCdzP,EAAKG,QAAQlD,UAAU+Q,MAAQ,SAAUpM,EAAW2O,GAClD,GAAI,KAAK3H,KAAKhH,GACZ,MAAM,IAAI4O,WAAY,UAAY5O,EAAY,oCAGhD1D,KAAK8R,QAAQpO,GAAa2O,GAAc,IAW1CvQ,EAAKG,QAAQlD,UAAU4F,EAAI,SAAU4N,GAEjCvS,KAAKkS,GADHK,EAAS,EACD,EACDA,EAAS,EACR,EAEAA,GAWdzQ,EAAKG,QAAQlD,UAAUyT,GAAK,SAAUD,GACpCvS,KAAKmS,IAAMI,GAoBbzQ,EAAKG,QAAQlD,UAAUoD,IAAM,SAAUsQ,EAAKJ,GAC1C,IAAI5O,EAASgP,EAAIzS,KAAK6R,MAClBnD,EAAS7Q,OAAOqF,KAAKlD,KAAK8R,SAE9B9R,KAAK+R,WAAWtO,GAAU4O,GAAc,GACxCrS,KAAKkF,eAAiB,EAEtB,IAAK,IAAI/H,EAAI,EAAGA,EAAIuR,EAAO9O,OAAQzC,IAAK,CACtC,IAAIuG,EAAYgL,EAAOvR,GACnBuV,EAAY1S,KAAK8R,QAAQpO,GAAWgP,UACpC5C,EAAQ4C,EAAYA,EAAUD,GAAOA,EAAI/O,GACzCsC,EAAShG,KAAK4F,UAAUkK,EAAO,CAC7BpB,OAAQ,CAAChL,KAEX6L,EAAQvP,KAAKkC,SAASyF,IAAI3B,GAC1BhC,EAAW,IAAIlC,EAAK0B,SAAUC,EAAQC,GACtCiP,EAAa9U,OAAOY,OAAO,MAE/BuB,KAAKgS,qBAAqBhO,GAAY2O,EACtC3S,KAAKiS,aAAajO,GAAY,EAG9BhE,KAAKiS,aAAajO,IAAauL,EAAM3P,OAGrC,IAAK,IAAIkI,EAAI,EAAGA,EAAIyH,EAAM3P,OAAQkI,IAAK,CACrC,IAAIgE,EAAOyD,EAAMzH,GAUjB,GARwB7D,MAApB0O,EAAW7G,KACb6G,EAAW7G,GAAQ,GAGrB6G,EAAW7G,IAAS,EAIY7H,MAA5BjE,KAAKuO,cAAczC,GAAoB,CACzC,IAAI7G,EAAUpH,OAAOY,OAAO,MAC5BwG,EAAgB,OAAIjF,KAAKgQ,UACzBhQ,KAAKgQ,WAAa,EAElB,IAAK,IAAIhI,EAAI,EAAGA,EAAI0G,EAAO9O,OAAQoI,IACjC/C,EAAQyJ,EAAO1G,IAAMnK,OAAOY,OAAO,MAGrCuB,KAAKuO,cAAczC,GAAQ7G,EAIsBhB,MAA/CjE,KAAKuO,cAAczC,GAAMpI,GAAWD,KACtCzD,KAAKuO,cAAczC,GAAMpI,GAAWD,GAAU5F,OAAOY,OAAO,OAK9D,IAAK,IAAIrB,EAAI,EAAGA,EAAI4C,KAAKoS,kBAAkBxS,OAAQxC,IAAK,CACtD,IAAIwV,EAAc5S,KAAKoS,kBAAkBhV,GACrCqI,EAAWqG,EAAKrG,SAASmN,GAEmC3O,MAA5DjE,KAAKuO,cAAczC,GAAMpI,GAAWD,GAAQmP,KAC9C5S,KAAKuO,cAAczC,GAAMpI,GAAWD,GAAQmP,GAAe,IAG7D5S,KAAKuO,cAAczC,GAAMpI,GAAWD,GAAQmP,GAAa9N,KAAKW,OAYtE3D,EAAKG,QAAQlD,UAAU8T,6BAA+B,WAOpD,IALA,IAAIC,EAAYjV,OAAOqF,KAAKlD,KAAKiS,cAC7Bc,EAAiBD,EAAUlT,OAC3BoT,EAAc,GACdC,EAAqB,GAEhB9V,EAAI,EAAGA,EAAI4V,EAAgB5V,IAAK,CACvC,IAAI6G,EAAWlC,EAAK0B,SAASM,WAAWgP,EAAU3V,IAC9C2S,EAAQ9L,EAASN,UAErBuP,EAAmBnD,KAAWmD,EAAmBnD,GAAS,GAC1DmD,EAAmBnD,IAAU,EAE7BkD,EAAYlD,KAAWkD,EAAYlD,GAAS,GAC5CkD,EAAYlD,IAAU9P,KAAKiS,aAAajO,GAG1C,IAAI0K,EAAS7Q,OAAOqF,KAAKlD,KAAK8R,SAE9B,IAAS3U,EAAI,EAAGA,EAAIuR,EAAO9O,OAAQzC,IAAK,CACtC,IAAIuG,EAAYgL,EAAOvR,GACvB6V,EAAYtP,GAAasP,EAAYtP,GAAauP,EAAmBvP,GAGvE1D,KAAKkT,mBAAqBF,GAQ5BlR,EAAKG,QAAQlD,UAAUoU,mBAAqB,WAM1C,IALA,IAAI3E,EAAe,GACfsE,EAAYjV,OAAOqF,KAAKlD,KAAKgS,sBAC7BoB,EAAkBN,EAAUlT,OAC5ByT,EAAexV,OAAOY,OAAO,MAExBtB,EAAI,EAAGA,EAAIiW,EAAiBjW,IAAK,CAaxC,IAZA,IAAI6G,EAAWlC,EAAK0B,SAASM,WAAWgP,EAAU3V,IAC9CuG,EAAYM,EAASN,UACrB4P,EAActT,KAAKiS,aAAajO,GAChCmN,EAAc,IAAIrP,EAAKuG,OACvBkL,EAAkBvT,KAAKgS,qBAAqBhO,GAC5CuL,EAAQ1R,OAAOqF,KAAKqQ,GACpBC,EAAcjE,EAAM3P,OAGpB6T,EAAazT,KAAK8R,QAAQpO,GAAW6M,OAAS,EAC9CmD,EAAW1T,KAAK+R,WAAW/N,EAASP,QAAQ8M,OAAS,EAEhDzI,EAAI,EAAG
A,EAAI0L,EAAa1L,IAAK,CACpC,IAGI9C,EAAKoM,EAAOuC,EAHZ7H,EAAOyD,EAAMzH,GACb8L,EAAKL,EAAgBzH,GACrBkE,EAAYhQ,KAAKuO,cAAczC,GAAMmE,YAGdhM,IAAvBoP,EAAavH,IACf9G,EAAMlD,EAAKkD,IAAIhF,KAAKuO,cAAczC,GAAO9L,KAAKkF,eAC9CmO,EAAavH,GAAQ9G,GAErBA,EAAMqO,EAAavH,GAGrBsF,EAAQpM,IAAQhF,KAAKmS,IAAM,GAAKyB,IAAO5T,KAAKmS,KAAO,EAAInS,KAAKkS,GAAKlS,KAAKkS,IAAMoB,EAActT,KAAKkT,mBAAmBxP,KAAekQ,GACjIxC,GAASqC,EACTrC,GAASsC,EACTC,EAAqBtO,KAAKwO,MAAc,IAARzC,GAAgB,IAQhDD,EAAYtI,OAAOmH,EAAW2D,GAGhCnF,EAAaxK,GAAYmN,EAG3BnR,KAAKwO,aAAeA,GAQtB1M,EAAKG,QAAQlD,UAAU+U,eAAiB,WACtC9T,KAAKyO,SAAW3M,EAAKoJ,SAASK,UAC5B1N,OAAOqF,KAAKlD,KAAKuO,eAAetB,SAYpCnL,EAAKG,QAAQlD,UAAUyD,MAAQ,WAK7B,OAJAxC,KAAK6S,+BACL7S,KAAKmT,qBACLnT,KAAK8T,iBAEE,IAAIhS,EAAKuM,MAAM,CACpBE,cAAevO,KAAKuO,cACpBC,aAAcxO,KAAKwO,aACnBC,SAAUzO,KAAKyO,SACfC,OAAQ7Q,OAAOqF,KAAKlD,KAAK8R,SACzB5P,SAAUlC,KAAKuC,kBAkBnBT,EAAKG,QAAQlD,UAAUgV,IAAM,SAAUpO,GACrC,IAAIqO,EAAO5Q,MAAMrE,UAAUuE,MAAMhG,KAAK6J,UAAW,GACjD6M,EAAKC,QAAQjU,MACb2F,EAAGuO,MAAMlU,KAAMgU,IAcjBlS,EAAK6O,UAAY,SAAU7E,EAAMgE,EAAOrK,GAStC,IARA,IAAI0O,EAAiBtW,OAAOY,OAAO,MAC/B2V,EAAevW,OAAOqF,KAAKuC,GAAY,IAOlCtI,EAAI,EAAGA,EAAIiX,EAAaxU,OAAQzC,IAAK,CAC5C,IAAIuB,EAAM0V,EAAajX,GACvBgX,EAAezV,GAAO+G,EAAS/G,GAAK4E,QAGtCtD,KAAKyF,SAAW5H,OAAOY,OAAO,WAEjBwF,IAAT6H,IACF9L,KAAKyF,SAASqG,GAAQjO,OAAOY,OAAO,MACpCuB,KAAKyF,SAASqG,GAAMgE,GAASqE,IAajCrS,EAAK6O,UAAU5R,UAAUuS,QAAU,SAAU+C,GAG3C,IAFA,IAAI9E,EAAQ1R,OAAOqF,KAAKmR,EAAe5O,UAE9BtI,EAAI,EAAGA,EAAIoS,EAAM3P,OAAQzC,IAAK,CACrC,IAAI2O,EAAOyD,EAAMpS,GACbuR,EAAS7Q,OAAOqF,KAAKmR,EAAe5O,SAASqG,IAEtB7H,MAAvBjE,KAAKyF,SAASqG,KAChB9L,KAAKyF,SAASqG,GAAQjO,OAAOY,OAAO,OAGtC,IAAK,IAAIqJ,EAAI,EAAGA,EAAI4G,EAAO9O,OAAQkI,IAAK,CACtC,IAAIgI,EAAQpB,EAAO5G,GACf5E,EAAOrF,OAAOqF,KAAKmR,EAAe5O,SAASqG,GAAMgE,IAEnB7L,MAA9BjE,KAAKyF,SAASqG,GAAMgE,KACtB9P,KAAKyF,SAASqG,GAAMgE,GAASjS,OAAOY,OAAO,OAG7C,IAAK,IAAIuJ,EAAI,EAAGA,EAAI9E,EAAKtD,OAAQoI,IAAK,CACpC,IAAItJ,EAAMwE,EAAK8E,GAEwB/D,MAAnCjE,KAAKyF,SAASqG,GAAMgE,GAAOpR,GAC7BsB,KAAKyF,SAASqG,GAAMgE,GAAOpR,GAAO2V,EAAe5O,SAASqG,GAAMgE,GAAOpR,GAEvEsB,KAAKyF,SAASqG,GAAMgE,GAAOpR,GAAOsB,KAAKyF,SAASqG,GAAMgE,GAAOpR,GAAKqG,OAAOsP,EAAe5O,SAASqG,GAAMgE,GAAOpR,QAexHoD,EAAK6O,UAAU5R,UAAUoD,IAAM,SAAU2J,EAAMgE,EAAOrK,GACpD,KAAMqG,KAAQ9L,KAAKyF,UAGjB,OAFAzF,KAAKyF,SAASqG,GAAQjO,OAAOY,OAAO,WACpCuB,KAAKyF,SAASqG,GAAMgE,GAASrK,GAI/B,GAAMqK,KAAS9P,KAAKyF,SAASqG,GAO7B,IAFA,IAAIsI,EAAevW,OAAOqF,KAAKuC,GAEtBtI,EAAI,EAAGA,EAAIiX,EAAaxU,OAAQzC,IAAK,CAC5C,IAAIuB,EAAM0V,EAAajX,GAEnBuB,KAAOsB,KAAKyF,SAASqG,GAAMgE,GAC7B9P,KAAKyF,SAASqG,GAAMgE,GAAOpR,GAAOsB,KAAKyF,SAASqG,GAAMgE,GAAOpR,GAAKqG,OAAOU,EAAS/G,IAElFsB,KAAKyF,SAASqG,GAAMgE,GAAOpR,GAAO+G,EAAS/G,QAZ7CsB,KAAKyF,SAASqG,GAAMgE,GAASrK,GA2BjC3D,EAAKkN,MAAQ,SAAUsF,GACrBtU,KAAKsP,QAAU,GACftP,KAAKsU,UAAYA,GA2BnBxS,EAAKkN,MAAMuF,SAAW,IAAIC,OAAQ,KAClC1S,EAAKkN,MAAMuF,SAASE,KAAO,EAC3B3S,EAAKkN,MAAMuF,SAASG,QAAU,EAC9B5S,EAAKkN,MAAMuF,SAASI,SAAW,EAa/B7S,EAAKkN,MAAMY,SAAW,CAIpBgF,SAAU,EAMV/E,SAAU,EAMVS,WAAY,GA0BdxO,EAAKkN,MAAMjQ,UAAU6M,OAAS,SAAUA,GA+BtC,MA9BM,WAAYA,IAChBA,EAAO8C,OAAS1O,KAAKsU,WAGjB,UAAW1I,IACfA,EAAO2E,MAAQ,GAGX,gBAAiB3E,IACrBA,EAAO6D,aAAc,GAGjB,aAAc7D,IAClBA,EAAO2I,SAAWzS,EAAKkN,MAAMuF,SAASE,MAGnC7I,EAAO2I,SAAWzS,EAAKkN,MAAMuF,SAASG,SAAa9I,EAAOE,KAAK1F,OAAO,IAAMtE,EAAKkN,MAAMuF,WAC1F3I,EAAOE,KAAO,IAAMF,EAAOE,MAGxBF,EAAO2I,SAAWzS,EAAKkN,MAAMuF,SAASI,UAAc/I,EAAOE,KAAKxI,OAAO,IAAMxB,EAAKkN,MAAMuF,WAC3F3I,EAAOE,KAAYF,EAAOE,KAAO,KAG7B,aAAcF,IAClBA,EAAOgE,SAAW9N,EAAKkN,MAAMY,SAASgF,UAGxC5U,KAAKsP,QAAQxK,KAAK8G,GAEX5L,MAUT8B,EAAKkN,MAAMjQ,UAAUkS,UAAY,WAC/B,IAAK,IAAI9T,EAAI,EAAGA,EAAI6C,KAAKsP,QAAQ1P,OAAQzC,IACvC,GAAI6C,KAAKsP,
QAAQnS,GAAGyS,UAAY9N,EAAKkN,MAAMY,SAASU,WAClD,OAAO,EAIX,OAAO,GA6BTxO,EAAKkN,MAAMjQ,UAAU+M,KAAO,SAAUA,EAAM+I,GAC1C,GAAIzR,MAAMC,QAAQyI,GAEhB,OADAA,EAAK/E,SAAQ,SAAU1I,GAAK2B,KAAK8L,KAAKzN,EAAGyD,EAAKY,MAAMO,MAAM4R,MAAa7U,MAChEA,KAGT,IAAI4L,EAASiJ,GAAW,GAKxB,OAJAjJ,EAAOE,KAAOA,EAAK9I,WAEnBhD,KAAK4L,OAAOA,GAEL5L,MAET8B,EAAKgT,gBAAkB,SAAUlS,EAAS4F,EAAOC,GAC/CzI,KAAKtC,KAAO,kBACZsC,KAAK4C,QAAUA,EACf5C,KAAKwI,MAAQA,EACbxI,KAAKyI,IAAMA,GAGb3G,EAAKgT,gBAAgB/V,UAAY,IAAIkI,MACrCnF,EAAKiT,WAAa,SAAUzV,GAC1BU,KAAKgV,QAAU,GACfhV,KAAKV,IAAMA,EACXU,KAAKJ,OAASN,EAAIM,OAClBI,KAAKuH,IAAM,EACXvH,KAAKwI,MAAQ,EACbxI,KAAKiV,oBAAsB,IAG7BnT,EAAKiT,WAAWhW,UAAU4I,IAAM,WAG9B,IAFA,IAAIuN,EAAQpT,EAAKiT,WAAWI,QAErBD,GACLA,EAAQA,EAAMlV,OAIlB8B,EAAKiT,WAAWhW,UAAUqW,YAAc,WAKtC,IAJA,IAAIC,EAAY,GACZnP,EAAalG,KAAKwI,MAClBvC,EAAWjG,KAAKuH,IAEXpK,EAAI,EAAGA,EAAI6C,KAAKiV,oBAAoBrV,OAAQzC,IACnD8I,EAAWjG,KAAKiV,oBAAoB9X,GACpCkY,EAAUvQ,KAAK9E,KAAKV,IAAIgE,MAAM4C,EAAYD,IAC1CC,EAAaD,EAAW,EAM1B,OAHAoP,EAAUvQ,KAAK9E,KAAKV,IAAIgE,MAAM4C,EAAYlG,KAAKuH,MAC/CvH,KAAKiV,oBAAoBrV,OAAS,EAE3ByV,EAAUC,KAAK,KAGxBxT,EAAKiT,WAAWhW,UAAUwW,KAAO,SAAUC,GACzCxV,KAAKgV,QAAQlQ,KAAK,CAChB0Q,KAAMA,EACNlW,IAAKU,KAAKoV,cACV5M,MAAOxI,KAAKwI,MACZC,IAAKzI,KAAKuH,MAGZvH,KAAKwI,MAAQxI,KAAKuH,KAGpBzF,EAAKiT,WAAWhW,UAAU0W,gBAAkB,WAC1CzV,KAAKiV,oBAAoBnQ,KAAK9E,KAAKuH,IAAM,GACzCvH,KAAKuH,KAAO,GAGdzF,EAAKiT,WAAWhW,UAAU6N,KAAO,WAC/B,GAAI5M,KAAKuH,KAAOvH,KAAKJ,OACnB,OAAOkC,EAAKiT,WAAWW,IAGzB,IAAIpJ,EAAOtM,KAAKV,IAAI8G,OAAOpG,KAAKuH,KAEhC,OADAvH,KAAKuH,KAAO,EACL+E,GAGTxK,EAAKiT,WAAWhW,UAAU4W,MAAQ,WAChC,OAAO3V,KAAKuH,IAAMvH,KAAKwI,OAGzB1G,EAAKiT,WAAWhW,UAAU6W,OAAS,WAC7B5V,KAAKwI,OAASxI,KAAKuH,MACrBvH,KAAKuH,KAAO,GAGdvH,KAAKwI,MAAQxI,KAAKuH,KAGpBzF,EAAKiT,WAAWhW,UAAU8W,OAAS,WACjC7V,KAAKuH,KAAO,GAGdzF,EAAKiT,WAAWhW,UAAU+W,eAAiB,WACzC,IAAIxJ,EAAMyJ,EAEV,GAEEA,GADAzJ,EAAOtM,KAAK4M,QACI/M,WAAW,SACpBkW,EAAW,IAAMA,EAAW,IAEjCzJ,GAAQxK,EAAKiT,WAAWW,KAC1B1V,KAAK6V,UAIT/T,EAAKiT,WAAWhW,UAAUiX,KAAO,WAC/B,OAAOhW,KAAKuH,IAAMvH,KAAKJ,QAGzBkC,EAAKiT,WAAWW,IAAM,MACtB5T,EAAKiT,WAAWkB,MAAQ,QACxBnU,EAAKiT,WAAWmB,KAAO,OACvBpU,EAAKiT,WAAWoB,cAAgB,gBAChCrU,EAAKiT,WAAWqB,MAAQ,QACxBtU,EAAKiT,WAAWsB,SAAW,WAE3BvU,EAAKiT,WAAWuB,SAAW,SAAUC,GAInC,OAHAA,EAAMV,SACNU,EAAMhB,KAAKzT,EAAKiT,WAAWkB,OAC3BM,EAAMX,SACC9T,EAAKiT,WAAWI,SAGzBrT,EAAKiT,WAAWyB,QAAU,SAAUD,GAQlC,GAPIA,EAAMZ,QAAU,IAClBY,EAAMV,SACNU,EAAMhB,KAAKzT,EAAKiT,WAAWmB,OAG7BK,EAAMX,SAEFW,EAAMP,OACR,OAAOlU,EAAKiT,WAAWI,SAI3BrT,EAAKiT,WAAW0B,gBAAkB,SAAUF,GAI1C,OAHAA,EAAMX,SACNW,EAAMT,iBACNS,EAAMhB,KAAKzT,EAAKiT,WAAWoB,eACpBrU,EAAKiT,WAAWI,SAGzBrT,EAAKiT,WAAW2B,SAAW,SAAUH,GAInC,OAHAA,EAAMX,SACNW,EAAMT,iBACNS,EAAMhB,KAAKzT,EAAKiT,WAAWqB,OACpBtU,EAAKiT,WAAWI,SAGzBrT,EAAKiT,WAAW4B,OAAS,SAAUJ,GAC7BA,EAAMZ,QAAU,GAClBY,EAAMhB,KAAKzT,EAAKiT,WAAWmB,OAe/BpU,EAAKiT,WAAW6B,cAAgB9U,EAAK8D,UAAUS,UAE/CvE,EAAKiT,WAAWI,QAAU,SAAUoB,GAClC,OAAa,CACX,IAAIjK,EAAOiK,EAAM3J,OAEjB,GAAIN,GAAQxK,EAAKiT,WAAWW,IAC1B,OAAO5T,EAAKiT,WAAW4B,OAIzB,GAA0B,IAAtBrK,EAAKzM,WAAW,GAApB,CAKA,GAAY,KAARyM,EACF,OAAOxK,EAAKiT,WAAWuB,SAGzB,GAAY,KAARhK,EAKF,OAJAiK,EAAMV,SACFU,EAAMZ,QAAU,GAClBY,EAAMhB,KAAKzT,EAAKiT,WAAWmB,MAEtBpU,EAAKiT,WAAW0B,gBAGzB,GAAY,KAARnK,EAKF,OAJAiK,EAAMV,SACFU,EAAMZ,QAAU,GAClBY,EAAMhB,KAAKzT,EAAKiT,WAAWmB,MAEtBpU,EAAKiT,WAAW2B,SAMzB,GAAY,KAARpK,GAAiC,IAAlBiK,EAAMZ,QAEvB,OADAY,EAAMhB,KAAKzT,EAAKiT,WAAWsB,UACpBvU,EAAKiT,WAAWI,QAMzB,GAAY,KAAR7I,GAAiC,IAAlBiK,EAAMZ,QAEvB,OADAY,EAAMhB,KAAKzT,EAAKiT,WAAWsB,UACpBvU,EAAKiT,WAAWI,QAGzB,GAAI7I,EAAK/M,MAAMuC,EAAKiT,WAAW6B,eAC7B,OAAO9U,EAAKiT,WAAWyB,aAzCvBD,EAAMd,oBA8CZ3T,EAAKgN,YAAc,SAAUxP,EAAKuP,GAChC7O,KAAKuW,MAAQ,IAAIzU,EAAKiT,WAA
YzV,GAClCU,KAAK6O,MAAQA,EACb7O,KAAK6W,cAAgB,GACrB7W,KAAK8W,UAAY,GAGnBhV,EAAKgN,YAAY/P,UAAUgQ,MAAQ,WACjC/O,KAAKuW,MAAM5O,MACX3H,KAAKgV,QAAUhV,KAAKuW,MAAMvB,QAI1B,IAFA,IAAIE,EAAQpT,EAAKgN,YAAYiI,YAEtB7B,GACLA,EAAQA,EAAMlV,MAGhB,OAAOA,KAAK6O,OAGd/M,EAAKgN,YAAY/P,UAAUiY,WAAa,WACtC,OAAOhX,KAAKgV,QAAQhV,KAAK8W,YAG3BhV,EAAKgN,YAAY/P,UAAUkY,cAAgB,WACzC,IAAIC,EAASlX,KAAKgX,aAElB,OADAhX,KAAK8W,WAAa,EACXI,GAGTpV,EAAKgN,YAAY/P,UAAUoY,WAAa,WACtC,IAAIC,EAAkBpX,KAAK6W,cAC3B7W,KAAK6O,MAAMjD,OAAOwL,GAClBpX,KAAK6W,cAAgB,IAGvB/U,EAAKgN,YAAYiI,YAAc,SAAUM,GACvC,IAAIH,EAASG,EAAOL,aAEpB,GAAc/S,MAAViT,EAIJ,OAAQA,EAAO1B,MACb,KAAK1T,EAAKiT,WAAWsB,SACnB,OAAOvU,EAAKgN,YAAYwI,cAC1B,KAAKxV,EAAKiT,WAAWkB,MACnB,OAAOnU,EAAKgN,YAAYyI,WAC1B,KAAKzV,EAAKiT,WAAWmB,KACnB,OAAOpU,EAAKgN,YAAY0I,UAC1B,QACE,IAAIC,EAAe,4CAA8CP,EAAO1B,KAMxE,MAJI0B,EAAO5X,IAAIM,QAAU,IACvB6X,GAAgB,gBAAkBP,EAAO5X,IAAM,KAG3C,IAAIwC,EAAKgT,gBAAiB2C,EAAcP,EAAO1O,MAAO0O,EAAOzO,OAIzE3G,EAAKgN,YAAYwI,cAAgB,SAAUD,GACzC,IAAIH,EAASG,EAAOJ,gBAEpB,GAAchT,MAAViT,EAAJ,CAIA,OAAQA,EAAO5X,KACb,IAAK,IACH+X,EAAOR,cAAcjH,SAAW9N,EAAKkN,MAAMY,SAASU,WACpD,MACF,IAAK,IACH+G,EAAOR,cAAcjH,SAAW9N,EAAKkN,MAAMY,SAASC,SACpD,MACF,QACE,IAAI4H,EAAe,kCAAoCP,EAAO5X,IAAM,IACpE,MAAM,IAAIwC,EAAKgT,gBAAiB2C,EAAcP,EAAO1O,MAAO0O,EAAOzO,KAGvE,IAAIiP,EAAaL,EAAOL,aAExB,GAAkB/S,MAAdyT,EAAyB,CACvBD,EAAe,yCACnB,MAAM,IAAI3V,EAAKgT,gBAAiB2C,EAAcP,EAAO1O,MAAO0O,EAAOzO,KAGrE,OAAQiP,EAAWlC,MACjB,KAAK1T,EAAKiT,WAAWkB,MACnB,OAAOnU,EAAKgN,YAAYyI,WAC1B,KAAKzV,EAAKiT,WAAWmB,KACnB,OAAOpU,EAAKgN,YAAY0I,UAC1B,QACMC,EAAe,mCAAqCC,EAAWlC,KAAO,IAC1E,MAAM,IAAI1T,EAAKgT,gBAAiB2C,EAAcC,EAAWlP,MAAOkP,EAAWjP,QAIjF3G,EAAKgN,YAAYyI,WAAa,SAAUF,GACtC,IAAIH,EAASG,EAAOJ,gBAEpB,GAAchT,MAAViT,EAAJ,CAIA,IAAmD,GAA/CG,EAAOxI,MAAMyF,UAAUvQ,QAAQmT,EAAO5X,KAAY,CACpD,IAAIqY,EAAiBN,EAAOxI,MAAMyF,UAAUzO,KAAI,SAAU+R,GAAK,MAAO,IAAMA,EAAI,OAAOtC,KAAK,MACxFmC,EAAe,uBAAyBP,EAAO5X,IAAM,uBAAyBqY,EAElF,MAAM,IAAI7V,EAAKgT,gBAAiB2C,EAAcP,EAAO1O,MAAO0O,EAAOzO,KAGrE4O,EAAOR,cAAcnI,OAAS,CAACwI,EAAO5X,KAEtC,IAAIoY,EAAaL,EAAOL,aAExB,GAAkB/S,MAAdyT,EAAyB,CACvBD,EAAe,gCACnB,MAAM,IAAI3V,EAAKgT,gBAAiB2C,EAAcP,EAAO1O,MAAO0O,EAAOzO,KAGrE,OAAQiP,EAAWlC,MACjB,KAAK1T,EAAKiT,WAAWmB,KACnB,OAAOpU,EAAKgN,YAAY0I,UAC1B,QACMC,EAAe,0BAA4BC,EAAWlC,KAAO,IACjE,MAAM,IAAI1T,EAAKgT,gBAAiB2C,EAAcC,EAAWlP,MAAOkP,EAAWjP,QAIjF3G,EAAKgN,YAAY0I,UAAY,SAAUH,GACrC,IAAIH,EAASG,EAAOJ,gBAEpB,GAAchT,MAAViT,EAAJ,CAIAG,EAAOR,cAAc/K,KAAOoL,EAAO5X,IAAIwG,eAEP,GAA5BoR,EAAO5X,IAAIyE,QAAQ,OACrBsT,EAAOR,cAAcpH,aAAc,GAGrC,IAAIiI,EAAaL,EAAOL,aAExB,GAAkB/S,MAAdyT,EAKJ,OAAQA,EAAWlC,MACjB,KAAK1T,EAAKiT,WAAWmB,KAEnB,OADAmB,EAAOF,aACArV,EAAKgN,YAAY0I,UAC1B,KAAK1V,EAAKiT,WAAWkB,MAEnB,OADAoB,EAAOF,aACArV,EAAKgN,YAAYyI,WAC1B,KAAKzV,EAAKiT,WAAWoB,cACnB,OAAOrU,EAAKgN,YAAY+I,kBAC1B,KAAK/V,EAAKiT,WAAWqB,MACnB,OAAOtU,EAAKgN,YAAYgJ,WAC1B,KAAKhW,EAAKiT,WAAWsB,SAEnB,OADAgB,EAAOF,aACArV,EAAKgN,YAAYwI,cAC1B,QACE,IAAIG,EAAe,2BAA6BC,EAAWlC,KAAO,IAClE,MAAM,IAAI1T,EAAKgT,gBAAiB2C,EAAcC,EAAWlP,MAAOkP,EAAWjP,UApB7E4O,EAAOF,eAwBXrV,EAAKgN,YAAY+I,kBAAoB,SAAUR,GAC7C,IAAIH,EAASG,EAAOJ,gBAEpB,GAAchT,MAAViT,EAAJ,CAIA,IAAInL,EAAegM,SAASb,EAAO5X,IAAK,IAExC,GAAI0Y,MAAMjM,GAAe,CACvB,IAAI0L,EAAe,gCACnB,MAAM,IAAI3V,EAAKgT,gBAAiB2C,EAAcP,EAAO1O,MAAO0O,EAAOzO,KAGrE4O,EAAOR,cAAc9K,aAAeA,EAEpC,IAAI2L,EAAaL,EAAOL,aAExB,GAAkB/S,MAAdyT,EAKJ,OAAQA,EAAWlC,MACjB,KAAK1T,EAAKiT,WAAWmB,KAEnB,OADAmB,EAAOF,aACArV,EAAKgN,YAAY0I,UAC1B,KAAK1V,EAAKiT,WAAWkB,MAEnB,OADAoB,EAAOF,aACArV,EAAKgN,YAAYyI,WAC1B,KAAKzV,EAAKiT,WAAWoB,cACnB,OAAOrU,EAAKgN,YAAY+I,kBAC1B,KAAK/V,EAAKiT,WAAWqB,MACnB,OAAOtU,EAAKgN,YAAYgJ,WAC1B,KAAKhW,EAAKiT,WAAWsB,SAEnB,OADAgB,EAAOF,aACA
rV,EAAKgN,YAAYwI,cAC1B,QACMG,EAAe,2BAA6BC,EAAWlC,KAAO,IAClE,MAAM,IAAI1T,EAAKgT,gBAAiB2C,EAAcC,EAAWlP,MAAOkP,EAAWjP,UApB7E4O,EAAOF,eAwBXrV,EAAKgN,YAAYgJ,WAAa,SAAUT,GACtC,IAAIH,EAASG,EAAOJ,gBAEpB,GAAchT,MAAViT,EAAJ,CAIA,IAAI3G,EAAQwH,SAASb,EAAO5X,IAAK,IAEjC,GAAI0Y,MAAMzH,GAAQ,CAChB,IAAIkH,EAAe,wBACnB,MAAM,IAAI3V,EAAKgT,gBAAiB2C,EAAcP,EAAO1O,MAAO0O,EAAOzO,KAGrE4O,EAAOR,cAActG,MAAQA,EAE7B,IAAImH,EAAaL,EAAOL,aAExB,GAAkB/S,MAAdyT,EAKJ,OAAQA,EAAWlC,MACjB,KAAK1T,EAAKiT,WAAWmB,KAEnB,OADAmB,EAAOF,aACArV,EAAKgN,YAAY0I,UAC1B,KAAK1V,EAAKiT,WAAWkB,MAEnB,OADAoB,EAAOF,aACArV,EAAKgN,YAAYyI,WAC1B,KAAKzV,EAAKiT,WAAWoB,cACnB,OAAOrU,EAAKgN,YAAY+I,kBAC1B,KAAK/V,EAAKiT,WAAWqB,MACnB,OAAOtU,EAAKgN,YAAYgJ,WAC1B,KAAKhW,EAAKiT,WAAWsB,SAEnB,OADAgB,EAAOF,aACArV,EAAKgN,YAAYwI,cAC1B,QACMG,EAAe,2BAA6BC,EAAWlC,KAAO,IAClE,MAAM,IAAI1T,EAAKgT,gBAAiB2C,EAAcC,EAAWlP,MAAOkP,EAAWjP,UApB7E4O,EAAOF,oBA+BS,0BAAd,EAYI,WAMN,OAAOrV,IAlBS,kCAx3GnB,I,4ECuBM,IAAImW,EAAW,WAQlB,OAPAA,EAAWpa,OAAOqa,QAAU,SAAkB7Z,GAC1C,IAAK,IAAIa,EAAG/B,EAAI,EAAGyB,EAAIuI,UAAUvH,OAAQzC,EAAIyB,EAAGzB,IAE5C,IAAK,IAAI8B,KADTC,EAAIiI,UAAUhK,GACOU,OAAOkB,UAAUC,eAAe1B,KAAK4B,EAAGD,KAAIZ,EAAEY,GAAKC,EAAED,IAE9E,OAAOZ,IAEK6V,MAAMlU,KAAMmH,YAwEzB,SAASgR,EAASva,GACrB,IAAIsB,EAAsB,mBAAXhB,QAAyBA,OAAOka,SAAU7a,EAAI2B,GAAKtB,EAAEsB,GAAI/B,EAAI,EAC5E,GAAII,EAAG,OAAOA,EAAED,KAAKM,GACrB,GAAIA,GAAyB,iBAAbA,EAAEgC,OAAqB,MAAO,CAC1CgN,KAAM,WAEF,OADIhP,GAAKT,GAAKS,EAAEgC,SAAQhC,OAAI,GACrB,CAAEQ,MAAOR,GAAKA,EAAET,KAAMkb,MAAOza,KAG5C,MAAM,IAAI2F,UAAUrE,EAAI,0BAA4B,mCAGjD,SAASoZ,EAAO1a,EAAGgB,GACtB,IAAIrB,EAAsB,mBAAXW,QAAyBN,EAAEM,OAAOka,UACjD,IAAK7a,EAAG,OAAOK,EACf,IAAmBK,EAAYiC,EAA3B/C,EAAII,EAAED,KAAKM,GAAO2a,EAAK,GAC3B,IACI,WAAc,IAAN3Z,GAAgBA,KAAM,MAAQX,EAAId,EAAEyP,QAAQyL,MAAME,EAAGzT,KAAK7G,EAAEG,OAExE,MAAOoa,GAAStY,EAAI,CAAEsY,MAAOA,GAC7B,QACI,IACQva,IAAMA,EAAEoa,OAAS9a,EAAIJ,EAAU,SAAII,EAAED,KAAKH,GAElD,QAAU,GAAI+C,EAAG,MAAMA,EAAEsY,OAE7B,OAAOD,EAGJ,SAASE,IACZ,IAAK,IAAIF,EAAK,GAAIpb,EAAI,EAAGA,EAAIgK,UAAUvH,OAAQzC,IAC3Cob,EAAKA,EAAGxT,OAAOuT,EAAOnR,UAAUhK,KACpC,OAAOob,E,gBCrCX,ICzEkBG,ECGd/J,EFsEJ,aA2BE,WAAmB,G,IAAE5M,EAAA,EAAAA,OAAQ4W,EAAA,EAAAA,KAAMzW,EAAA,EAAAA,SAAUxC,EAAA,EAAAA,MAC3CM,KAAK4Y,UG/DF,SACLD,G,QAEMC,EAAY,IAAIC,I,IACtB,IAAkB,QAAAF,GAAI,8BAAE,CAAnB,IAAMlG,EAAG,QACN,6BAACqG,EAAA,KAAMC,EAAA,KAGPC,EAAWvG,EAAIuG,SACfC,EAAWxG,EAAIwG,MAGfC,EAAO,EAAWzG,EAAIyG,MACzBvO,QAAQ,mBAAoB,IAC5BA,QAAQ,OAAQ,KAGnB,GAAIoO,EAAM,CACR,IAAM7K,EAAS0K,EAAU5a,IAAI8a,GAGxB5K,EAAOiL,OAOVP,EAAUQ,IAAIJ,EAAU,CACtBA,SAAQ,EACRC,MAAK,EACLC,KAAI,EACJhL,OAAM,KAVRA,EAAO+K,MAASxG,EAAIwG,MACpB/K,EAAOgL,KAASA,EAChBhL,EAAOiL,QAAS,QAclBP,EAAUQ,IAAIJ,EAAU,CACtBA,SAAQ,EACRC,MAAK,EACLC,KAAI,EACJC,QAAQ,K,iGAId,OAAOP,EHiBYS,CAAuBV,GACxC3Y,KAAKsZ,UIvEF,SACLvX,GAEA,IAAMsE,EAAY,IAAI0D,OAAOhI,EAAOsE,UAAW,OACzCiT,EAAY,SAACC,EAAYC,EAAc1N,GAC3C,OAAU0N,EAAI,OAAO1N,EAAI,SAI3B,OAAO,SAAC1N,GACNA,EAAQA,EACLuM,QAAQ,eAAgB,KACxB8O,OAGH,IAAMla,EAAQ,IAAIwK,OAAO,MAAMhI,EAAOsE,UAAS,KAC7CjI,EACGuM,QAAQ,uBAAwB,QAChCA,QAAQtE,EAAW,KAAI,IACvB,OAGL,OAAO,SAAAqT,GAAY,OAAC,OACfA,GAAQ,CACXT,MAAOS,EAAST,MAAMtO,QAAQpL,EAAO+Z,GACrCJ,KAAOQ,EAASR,KAAKvO,QAAQpL,EAAO+Z,OJ8CrBK,CAAuB5X,GAItC/B,KAAKN,WADc,IAAVA,EACIoC,MAAK,W,cAChBI,EAAWA,GAAY,CAAC,UAAW,kBAGnClC,KAAKkC,SAASiG,Q,IACd,IAAiB,QAAAjG,GAAQ,+BAApB,IAAMyD,EAAE,QACX3F,KAAKkC,SAASC,IAAIL,KAAK6D,K,iGAGE,IAAvB5D,EAAO6X,KAAKha,QAAmC,OAAnBmC,EAAO6X,KAAK,GAC1C5Z,KAAK+T,IAAKjS,KAAaC,EAAO6X,KAAK,KAC1B7X,EAAO6X,KAAKha,OAAS,GAC9BI,KAAK+T,KAAK,EAAAjS,MAAa+X,cAAa,UAAI9X,EAAO6X,QAIjD5Z,KAAK8P,MAAM,QAAS,CAAES,MAAO,MAC7BvQ,KAAK8P,MAAM,QACX9P,KAAKuR,IAAI,Y,IAGT,IAAkB,QAAAoH,GAAI,+BAAjB,IAAMlG,EAAG,QACZzS,KA
AKmC,IAAIsQ,I,qGAKA3Q,KAAKuM,MAAMxH,KACL,iBAAVnH,EACHoa,KAAK/K,MAAMrP,GACXA,GA8DZ,OAzCS,YAAAmP,MAAP,SAAazQ,GAAb,WACE,GAAIA,EACF,IAGE,IAAM2b,EAAS/Z,KAAKN,MAAMiP,OAAOvQ,GAC9B4M,QAAO,SAAC+F,EAAShJ,GAChB,IAAM2R,EAAW,EAAKd,UAAU5a,IAAI+J,EAAOwJ,KAC3C,QAAwB,IAAbmI,EACT,GAAI,WAAYA,EAAU,CACxB,IAAMnI,EAAMmI,EAASxL,OAAO8K,SAC5BjI,EAAQqI,IAAI7H,EAAK,EAAIR,EAAQ/S,IAAIuT,IAAQ,GAAI,CAAAxJ,SACxC,CACCwJ,EAAMmI,EAASV,SACrBjI,EAAQqI,IAAI7H,EAAKR,EAAQ/S,IAAIuT,IAAQ,IAGzC,OAAOR,IACN,IAAI8H,KAGH,EAAK7Y,KAAKsZ,UAAUlb,GAG1B,OAAO,EAAI2b,GAAQlU,KAAI,SAAC,G,IAAA,SAAC0L,EAAA,KAAKyI,EAAA,KAAc,OAC1CC,QAAS,EAAG,EAAKrB,UAAU5a,IAAIuT,IAC/ByI,SAAUA,EAASnU,KAAI,SAAAqU,GACrB,OAAO,EAAG,EAAKtB,UAAU5a,IAAIkc,EAAQ3I,aAKzC,MAAO4I,GAEPtX,QAAQF,KAAK,kBAAkBvE,EAAK,iCAKxC,MAAO,IAEX,EA7HA,GEvBO,SAASgc,EAAQxX,GACtB,OAAQA,EAAQ4S,MAGd,KAAKkD,EAAkB2B,MAGrB,OAxCN,SAA4BtY,G,QACpBuY,EAAO,UAGPC,EAAU,G,IAChB,IAAmB,QAAAxY,EAAO6X,MAAI,8BAAE,CAA3B,IAAMA,EAAI,QACA,OAATA,GAAeW,EAAQzV,KAAQwV,EAAI,mBAC1B,OAATV,GAAeW,EAAQzV,KAAQwV,EAAI,aAAaV,EAAI,Y,iGAItD7X,EAAO6X,KAAKha,OAAS,GACvB2a,EAAQzV,KAAQwV,EAAI,0BAGlBC,EAAQ3a,QACV4a,cAAa,gBACRF,EAAI,oCACJC,IAoBHE,CAAmB7X,EAAQ4W,KAAKzX,QAChC4M,EAAS,IAAI,EAAO/L,EAAQ4W,MACrB,CACLhE,KAAMkD,EAAkBgC,OAI5B,KAAKhC,EAAkBiC,MACrB,MAAO,CACLnF,KAAMkD,EAAkBkC,OACxBpB,KAAM7K,EAASA,EAAOE,MAAMjM,EAAQ4W,MAAQ,IAIhD,QACE,MAAM,IAAIjW,UAAU,0BDtE1B,SAAkBmV,GAChB,qBACA,qBACA,qBACA,uBAJF,CAAkBA,MAAiB,KC8EnCmC,iBAAiB,WAAW,SAAAC,GAC1BC,YAAYX,EAAQU,EAAGtB","file":"assets/javascripts/worker/search.58d22e8e.min.js","sourcesContent":[" \t// The module cache\n \tvar installedModules = {};\n\n \t// The require function\n \tfunction __webpack_require__(moduleId) {\n\n \t\t// Check if module is in cache\n \t\tif(installedModules[moduleId]) {\n \t\t\treturn installedModules[moduleId].exports;\n \t\t}\n \t\t// Create a new module (and put it into the cache)\n \t\tvar module = installedModules[moduleId] = {\n \t\t\ti: moduleId,\n \t\t\tl: false,\n \t\t\texports: {}\n \t\t};\n\n \t\t// Execute the module function\n \t\tmodules[moduleId].call(module.exports, module, module.exports, __webpack_require__);\n\n \t\t// Flag the module as loaded\n \t\tmodule.l = true;\n\n \t\t// Return the exports of the module\n \t\treturn module.exports;\n \t}\n\n\n \t// expose the modules object (__webpack_modules__)\n \t__webpack_require__.m = modules;\n\n \t// expose the module cache\n \t__webpack_require__.c = installedModules;\n\n \t// define getter function for harmony exports\n \t__webpack_require__.d = function(exports, name, getter) {\n \t\tif(!__webpack_require__.o(exports, name)) {\n \t\t\tObject.defineProperty(exports, name, { enumerable: true, get: getter });\n \t\t}\n \t};\n\n \t// define __esModule on exports\n \t__webpack_require__.r = function(exports) {\n \t\tif(typeof Symbol !== 'undefined' && Symbol.toStringTag) {\n \t\t\tObject.defineProperty(exports, Symbol.toStringTag, { value: 'Module' });\n \t\t}\n \t\tObject.defineProperty(exports, '__esModule', { value: true });\n \t};\n\n \t// create a fake namespace object\n \t// mode & 1: value is a module id, require it\n \t// mode & 2: merge all properties of value into the ns\n \t// mode & 4: return value when already ns object\n \t// mode & 8|1: behave like require\n \t__webpack_require__.t = function(value, mode) {\n \t\tif(mode & 1) value = __webpack_require__(value);\n \t\tif(mode & 8) return value;\n \t\tif((mode & 4) && typeof value === 'object' && value && value.__esModule) return value;\n \t\tvar ns = Object.create(null);\n \t\t__webpack_require__.r(ns);\n \t\tObject.defineProperty(ns, 
'default', { enumerable: true, value: value });\n \t\tif(mode & 2 && typeof value != 'string') for(var key in value) __webpack_require__.d(ns, key, function(key) { return value[key]; }.bind(null, key));\n \t\treturn ns;\n \t};\n\n \t// getDefaultExport function for compatibility with non-harmony modules\n \t__webpack_require__.n = function(module) {\n \t\tvar getter = module && module.__esModule ?\n \t\t\tfunction getDefault() { return module['default']; } :\n \t\t\tfunction getModuleExports() { return module; };\n \t\t__webpack_require__.d(getter, 'a', getter);\n \t\treturn getter;\n \t};\n\n \t// Object.prototype.hasOwnProperty.call\n \t__webpack_require__.o = function(object, property) { return Object.prototype.hasOwnProperty.call(object, property); };\n\n \t// __webpack_public_path__\n \t__webpack_require__.p = \"\";\n\n\n \t// Load entry module and return exports\n \treturn __webpack_require__(__webpack_require__.s = 4);\n","/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n","module.exports = global[\"lunr\"] = require(\"-!./lunr.js\");","var g;\n\n// This works in non-strict mode\ng = (function() {\n\treturn this;\n})();\n\ntry {\n\t// This works if eval is allowed (see CSP)\n\tg = g || new Function(\"return this\")();\n} catch (e) {\n\t// This works if the window reference is available\n\tif (typeof window === \"object\") g = window;\n}\n\n// g can still be undefined, but nothing to do about it...\n// We return undefined, instead of nothing here, so it's\n// easier to handle this case. 
if(!global) { ...}\n\nmodule.exports = g;\n","/**\n * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.8\n * Copyright (C) 2019 Oliver Nightingale\n * @license MIT\n */\n\n;(function(){\n\n/**\n * A convenience function for configuring and constructing\n * a new lunr Index.\n *\n * A lunr.Builder instance is created and the pipeline setup\n * with a trimmer, stop word filter and stemmer.\n *\n * This builder object is yielded to the configuration function\n * that is passed as a parameter, allowing the list of fields\n * and other builder parameters to be customised.\n *\n * All documents _must_ be added within the passed config function.\n *\n * @example\n * var idx = lunr(function () {\n * this.field('title')\n * this.field('body')\n * this.ref('id')\n *\n * documents.forEach(function (doc) {\n * this.add(doc)\n * }, this)\n * })\n *\n * @see {@link lunr.Builder}\n * @see {@link lunr.Pipeline}\n * @see {@link lunr.trimmer}\n * @see {@link lunr.stopWordFilter}\n * @see {@link lunr.stemmer}\n * @namespace {function} lunr\n */\nvar lunr = function (config) {\n var builder = new lunr.Builder\n\n builder.pipeline.add(\n lunr.trimmer,\n lunr.stopWordFilter,\n lunr.stemmer\n )\n\n builder.searchPipeline.add(\n lunr.stemmer\n )\n\n config.call(builder, builder)\n return builder.build()\n}\n\nlunr.version = \"2.3.8\"\n/*!\n * lunr.utils\n * Copyright (C) 2019 Oliver Nightingale\n */\n\n/**\n * A namespace containing utils for the rest of the lunr library\n * @namespace lunr.utils\n */\nlunr.utils = {}\n\n/**\n * Print a warning message to the console.\n *\n * @param {String} message The message to be printed.\n * @memberOf lunr.utils\n * @function\n */\nlunr.utils.warn = (function (global) {\n /* eslint-disable no-console */\n return function (message) {\n if (global.console && console.warn) {\n console.warn(message)\n }\n }\n /* eslint-enable no-console */\n})(this)\n\n/**\n * Convert an object to a string.\n *\n * In the case of `null` and `undefined` the function returns\n * the empty string, in all other cases the result of calling\n * `toString` on the passed object is returned.\n *\n * @param {Any} obj The object to convert to a string.\n * @return {String} string representation of the passed object.\n * @memberOf lunr.utils\n */\nlunr.utils.asString = function (obj) {\n if (obj === void 0 || obj === null) {\n return \"\"\n } else {\n return obj.toString()\n }\n}\n\n/**\n * Clones an object.\n *\n * Will create a copy of an existing object such that any mutations\n * on the copy cannot affect the original.\n *\n * Only shallow objects are supported, passing a nested object to this\n * function will cause a TypeError.\n *\n * Objects with primitives, and arrays of primitives are supported.\n *\n * @param {Object} obj The object to clone.\n * @return {Object} a clone of the passed object.\n * @throws {TypeError} when a nested object is passed.\n * @memberOf Utils\n */\nlunr.utils.clone = function (obj) {\n if (obj === null || obj === undefined) {\n return obj\n }\n\n var clone = Object.create(null),\n keys = Object.keys(obj)\n\n for (var i = 0; i < keys.length; i++) {\n var key = keys[i],\n val = obj[key]\n\n if (Array.isArray(val)) {\n clone[key] = val.slice()\n continue\n }\n\n if (typeof val === 'string' ||\n typeof val === 'number' ||\n typeof val === 'boolean') {\n clone[key] = val\n continue\n }\n\n throw new TypeError(\"clone is not deep and does not support nested objects\")\n }\n\n return clone\n}\nlunr.FieldRef = function (docRef, fieldName, 
stringValue) {\n this.docRef = docRef\n this.fieldName = fieldName\n this._stringValue = stringValue\n}\n\nlunr.FieldRef.joiner = \"/\"\n\nlunr.FieldRef.fromString = function (s) {\n var n = s.indexOf(lunr.FieldRef.joiner)\n\n if (n === -1) {\n throw \"malformed field ref string\"\n }\n\n var fieldRef = s.slice(0, n),\n docRef = s.slice(n + 1)\n\n return new lunr.FieldRef (docRef, fieldRef, s)\n}\n\nlunr.FieldRef.prototype.toString = function () {\n if (this._stringValue == undefined) {\n this._stringValue = this.fieldName + lunr.FieldRef.joiner + this.docRef\n }\n\n return this._stringValue\n}\n/*!\n * lunr.Set\n * Copyright (C) 2019 Oliver Nightingale\n */\n\n/**\n * A lunr set.\n *\n * @constructor\n */\nlunr.Set = function (elements) {\n this.elements = Object.create(null)\n\n if (elements) {\n this.length = elements.length\n\n for (var i = 0; i < this.length; i++) {\n this.elements[elements[i]] = true\n }\n } else {\n this.length = 0\n }\n}\n\n/**\n * A complete set that contains all elements.\n *\n * @static\n * @readonly\n * @type {lunr.Set}\n */\nlunr.Set.complete = {\n intersect: function (other) {\n return other\n },\n\n union: function (other) {\n return other\n },\n\n contains: function () {\n return true\n }\n}\n\n/**\n * An empty set that contains no elements.\n *\n * @static\n * @readonly\n * @type {lunr.Set}\n */\nlunr.Set.empty = {\n intersect: function () {\n return this\n },\n\n union: function (other) {\n return other\n },\n\n contains: function () {\n return false\n }\n}\n\n/**\n * Returns true if this set contains the specified object.\n *\n * @param {object} object - Object whose presence in this set is to be tested.\n * @returns {boolean} - True if this set contains the specified object.\n */\nlunr.Set.prototype.contains = function (object) {\n return !!this.elements[object]\n}\n\n/**\n * Returns a new set containing only the elements that are present in both\n * this set and the specified set.\n *\n * @param {lunr.Set} other - set to intersect with this set.\n * @returns {lunr.Set} a new set that is the intersection of this and the specified set.\n */\n\nlunr.Set.prototype.intersect = function (other) {\n var a, b, elements, intersection = []\n\n if (other === lunr.Set.complete) {\n return this\n }\n\n if (other === lunr.Set.empty) {\n return other\n }\n\n if (this.length < other.length) {\n a = this\n b = other\n } else {\n a = other\n b = this\n }\n\n elements = Object.keys(a.elements)\n\n for (var i = 0; i < elements.length; i++) {\n var element = elements[i]\n if (element in b.elements) {\n intersection.push(element)\n }\n }\n\n return new lunr.Set (intersection)\n}\n\n/**\n * Returns a new set combining the elements of this and the specified set.\n *\n * @param {lunr.Set} other - set to union with this set.\n * @return {lunr.Set} a new set that is the union of this and the specified set.\n */\n\nlunr.Set.prototype.union = function (other) {\n if (other === lunr.Set.complete) {\n return lunr.Set.complete\n }\n\n if (other === lunr.Set.empty) {\n return this\n }\n\n return new lunr.Set(Object.keys(this.elements).concat(Object.keys(other.elements)))\n}\n/**\n * A function to calculate the inverse document frequency for\n * a posting. 
This is shared between the builder and the index\n *\n * @private\n * @param {object} posting - The posting for a given term\n * @param {number} documentCount - The total number of documents.\n */\nlunr.idf = function (posting, documentCount) {\n var documentsWithTerm = 0\n\n for (var fieldName in posting) {\n if (fieldName == '_index') continue // Ignore the term index, its not a field\n documentsWithTerm += Object.keys(posting[fieldName]).length\n }\n\n var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5)\n\n return Math.log(1 + Math.abs(x))\n}\n\n/**\n * A token wraps a string representation of a token\n * as it is passed through the text processing pipeline.\n *\n * @constructor\n * @param {string} [str=''] - The string token being wrapped.\n * @param {object} [metadata={}] - Metadata associated with this token.\n */\nlunr.Token = function (str, metadata) {\n this.str = str || \"\"\n this.metadata = metadata || {}\n}\n\n/**\n * Returns the token string that is being wrapped by this object.\n *\n * @returns {string}\n */\nlunr.Token.prototype.toString = function () {\n return this.str\n}\n\n/**\n * A token update function is used when updating or optionally\n * when cloning a token.\n *\n * @callback lunr.Token~updateFunction\n * @param {string} str - The string representation of the token.\n * @param {Object} metadata - All metadata associated with this token.\n */\n\n/**\n * Applies the given function to the wrapped string token.\n *\n * @example\n * token.update(function (str, metadata) {\n * return str.toUpperCase()\n * })\n *\n * @param {lunr.Token~updateFunction} fn - A function to apply to the token string.\n * @returns {lunr.Token}\n */\nlunr.Token.prototype.update = function (fn) {\n this.str = fn(this.str, this.metadata)\n return this\n}\n\n/**\n * Creates a clone of this token. Optionally a function can be\n * applied to the cloned token.\n *\n * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token.\n * @returns {lunr.Token}\n */\nlunr.Token.prototype.clone = function (fn) {\n fn = fn || function (s) { return s }\n return new lunr.Token (fn(this.str, this.metadata), this.metadata)\n}\n/*!\n * lunr.tokenizer\n * Copyright (C) 2019 Oliver Nightingale\n */\n\n/**\n * A function for splitting a string into tokens ready to be inserted into\n * the search index. 
Uses `lunr.tokenizer.separator` to split strings, change\n * the value of this property to change how strings are split into tokens.\n *\n * This tokenizer will convert its parameter to a string by calling `toString` and\n * then will split this string on the character in `lunr.tokenizer.separator`.\n * Arrays will have their elements converted to strings and wrapped in a lunr.Token.\n *\n * Optional metadata can be passed to the tokenizer, this metadata will be cloned and\n * added as metadata to every token that is created from the object to be tokenized.\n *\n * @static\n * @param {?(string|object|object[])} obj - The object to convert into tokens\n * @param {?object} metadata - Optional metadata to associate with every token\n * @returns {lunr.Token[]}\n * @see {@link lunr.Pipeline}\n */\nlunr.tokenizer = function (obj, metadata) {\n if (obj == null || obj == undefined) {\n return []\n }\n\n if (Array.isArray(obj)) {\n return obj.map(function (t) {\n return new lunr.Token(\n lunr.utils.asString(t).toLowerCase(),\n lunr.utils.clone(metadata)\n )\n })\n }\n\n var str = obj.toString().toLowerCase(),\n len = str.length,\n tokens = []\n\n for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) {\n var char = str.charAt(sliceEnd),\n sliceLength = sliceEnd - sliceStart\n\n if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) {\n\n if (sliceLength > 0) {\n var tokenMetadata = lunr.utils.clone(metadata) || {}\n tokenMetadata[\"position\"] = [sliceStart, sliceLength]\n tokenMetadata[\"index\"] = tokens.length\n\n tokens.push(\n new lunr.Token (\n str.slice(sliceStart, sliceEnd),\n tokenMetadata\n )\n )\n }\n\n sliceStart = sliceEnd + 1\n }\n\n }\n\n return tokens\n}\n\n/**\n * The separator used to split a string into tokens. Override this property to change the behaviour of\n * `lunr.tokenizer` behaviour when tokenizing strings. By default this splits on whitespace and hyphens.\n *\n * @static\n * @see lunr.tokenizer\n */\nlunr.tokenizer.separator = /[\\s\\-]+/\n/*!\n * lunr.Pipeline\n * Copyright (C) 2019 Oliver Nightingale\n */\n\n/**\n * lunr.Pipelines maintain an ordered list of functions to be applied to all\n * tokens in documents entering the search index and queries being ran against\n * the index.\n *\n * An instance of lunr.Index created with the lunr shortcut will contain a\n * pipeline with a stop word filter and an English language stemmer. Extra\n * functions can be added before or after either of these functions or these\n * default functions can be removed.\n *\n * When run the pipeline will call each function in turn, passing a token, the\n * index of that token in the original list of all tokens and finally a list of\n * all the original tokens.\n *\n * The output of functions in the pipeline will be passed to the next function\n * in the pipeline. To exclude a token from entering the index the function\n * should return undefined, the rest of the pipeline will not be called with\n * this token.\n *\n * For serialisation of pipelines to work, all functions used in an instance of\n * a pipeline should be registered with lunr.Pipeline. Registered functions can\n * then be loaded. 
If trying to load a serialised pipeline that uses functions\n * that are not registered an error will be thrown.\n *\n * If not planning on serialising the pipeline then registering pipeline functions\n * is not necessary.\n *\n * @constructor\n */\nlunr.Pipeline = function () {\n this._stack = []\n}\n\nlunr.Pipeline.registeredFunctions = Object.create(null)\n\n/**\n * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token\n * string as well as all known metadata. A pipeline function can mutate the token string\n * or mutate (or add) metadata for a given token.\n *\n * A pipeline function can indicate that the passed token should be discarded by returning\n * null, undefined or an empty string. This token will not be passed to any downstream pipeline\n * functions and will not be added to the index.\n *\n * Multiple tokens can be returned by returning an array of tokens. Each token will be passed\n * to any downstream pipeline functions and all will returned tokens will be added to the index.\n *\n * Any number of pipeline functions may be chained together using a lunr.Pipeline.\n *\n * @interface lunr.PipelineFunction\n * @param {lunr.Token} token - A token from the document being processed.\n * @param {number} i - The index of this token in the complete list of tokens for this document/field.\n * @param {lunr.Token[]} tokens - All tokens for this document/field.\n * @returns {(?lunr.Token|lunr.Token[])}\n */\n\n/**\n * Register a function with the pipeline.\n *\n * Functions that are used in the pipeline should be registered if the pipeline\n * needs to be serialised, or a serialised pipeline needs to be loaded.\n *\n * Registering a function does not add it to a pipeline, functions must still be\n * added to instances of the pipeline for them to be used when running a pipeline.\n *\n * @param {lunr.PipelineFunction} fn - The function to check for.\n * @param {String} label - The label to register this function with\n */\nlunr.Pipeline.registerFunction = function (fn, label) {\n if (label in this.registeredFunctions) {\n lunr.utils.warn('Overwriting existing registered function: ' + label)\n }\n\n fn.label = label\n lunr.Pipeline.registeredFunctions[fn.label] = fn\n}\n\n/**\n * Warns if the function is not registered as a Pipeline function.\n *\n * @param {lunr.PipelineFunction} fn - The function to check for.\n * @private\n */\nlunr.Pipeline.warnIfFunctionNotRegistered = function (fn) {\n var isRegistered = fn.label && (fn.label in this.registeredFunctions)\n\n if (!isRegistered) {\n lunr.utils.warn('Function is not registered with pipeline. 
This may cause problems when serialising the index.\\n', fn)\n }\n}\n\n/**\n * Loads a previously serialised pipeline.\n *\n * All functions to be loaded must already be registered with lunr.Pipeline.\n * If any function from the serialised data has not been registered then an\n * error will be thrown.\n *\n * @param {Object} serialised - The serialised pipeline to load.\n * @returns {lunr.Pipeline}\n */\nlunr.Pipeline.load = function (serialised) {\n var pipeline = new lunr.Pipeline\n\n serialised.forEach(function (fnName) {\n var fn = lunr.Pipeline.registeredFunctions[fnName]\n\n if (fn) {\n pipeline.add(fn)\n } else {\n throw new Error('Cannot load unregistered function: ' + fnName)\n }\n })\n\n return pipeline\n}\n\n/**\n * Adds new functions to the end of the pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline.\n */\nlunr.Pipeline.prototype.add = function () {\n var fns = Array.prototype.slice.call(arguments)\n\n fns.forEach(function (fn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(fn)\n this._stack.push(fn)\n }, this)\n}\n\n/**\n * Adds a single function after a function that already exists in the\n * pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline.\n * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.\n */\nlunr.Pipeline.prototype.after = function (existingFn, newFn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(newFn)\n\n var pos = this._stack.indexOf(existingFn)\n if (pos == -1) {\n throw new Error('Cannot find existingFn')\n }\n\n pos = pos + 1\n this._stack.splice(pos, 0, newFn)\n}\n\n/**\n * Adds a single function before a function that already exists in the\n * pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline.\n * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.\n */\nlunr.Pipeline.prototype.before = function (existingFn, newFn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(newFn)\n\n var pos = this._stack.indexOf(existingFn)\n if (pos == -1) {\n throw new Error('Cannot find existingFn')\n }\n\n this._stack.splice(pos, 0, newFn)\n}\n\n/**\n * Removes a function from the pipeline.\n *\n * @param {lunr.PipelineFunction} fn The function to remove from the pipeline.\n */\nlunr.Pipeline.prototype.remove = function (fn) {\n var pos = this._stack.indexOf(fn)\n if (pos == -1) {\n return\n }\n\n this._stack.splice(pos, 1)\n}\n\n/**\n * Runs the current list of functions that make up the pipeline against the\n * passed tokens.\n *\n * @param {Array} tokens The tokens to run through the pipeline.\n * @returns {Array}\n */\nlunr.Pipeline.prototype.run = function (tokens) {\n var stackLength = this._stack.length\n\n for (var i = 0; i < stackLength; i++) {\n var fn = this._stack[i]\n var memo = []\n\n for (var j = 0; j < tokens.length; j++) {\n var result = fn(tokens[j], j, tokens)\n\n if (result === null || result === void 0 || result === '') continue\n\n if (Array.isArray(result)) {\n for (var k = 0; k < result.length; k++) {\n memo.push(result[k])\n }\n } else {\n memo.push(result)\n }\n }\n\n tokens = memo\n }\n\n return tokens\n}\n\n/**\n * Convenience method for passing a string through a pipeline and getting\n * strings out. 
This method takes care of wrapping the passed string in a\n * token and mapping the resulting tokens back to strings.\n *\n * @param {string} str - The string to pass through the pipeline.\n * @param {?object} metadata - Optional metadata to associate with the token\n * passed to the pipeline.\n * @returns {string[]}\n */\nlunr.Pipeline.prototype.runString = function (str, metadata) {\n var token = new lunr.Token (str, metadata)\n\n return this.run([token]).map(function (t) {\n return t.toString()\n })\n}\n\n/**\n * Resets the pipeline by removing any existing processors.\n *\n */\nlunr.Pipeline.prototype.reset = function () {\n this._stack = []\n}\n\n/**\n * Returns a representation of the pipeline ready for serialisation.\n *\n * Logs a warning if the function has not been registered.\n *\n * @returns {Array}\n */\nlunr.Pipeline.prototype.toJSON = function () {\n return this._stack.map(function (fn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(fn)\n\n return fn.label\n })\n}\n/*!\n * lunr.Vector\n * Copyright (C) 2019 Oliver Nightingale\n */\n\n/**\n * A vector is used to construct the vector space of documents and queries. These\n * vectors support operations to determine the similarity between two documents or\n * a document and a query.\n *\n * Normally no parameters are required for initializing a vector, but in the case of\n * loading a previously dumped vector the raw elements can be provided to the constructor.\n *\n * For performance reasons vectors are implemented with a flat array, where an elements\n * index is immediately followed by its value. E.g. [index, value, index, value]. This\n * allows the underlying array to be as sparse as possible and still offer decent\n * performance when being used for vector calculations.\n *\n * @constructor\n * @param {Number[]} [elements] - The flat list of element index and element value pairs.\n */\nlunr.Vector = function (elements) {\n this._magnitude = 0\n this.elements = elements || []\n}\n\n\n/**\n * Calculates the position within the vector to insert a given index.\n *\n * This is used internally by insert and upsert. 
If there are duplicate indexes then\n * the position is returned as if the value for that index were to be updated, but it\n * is the callers responsibility to check whether there is a duplicate at that index\n *\n * @param {Number} insertIdx - The index at which the element should be inserted.\n * @returns {Number}\n */\nlunr.Vector.prototype.positionForIndex = function (index) {\n // For an empty vector the tuple can be inserted at the beginning\n if (this.elements.length == 0) {\n return 0\n }\n\n var start = 0,\n end = this.elements.length / 2,\n sliceLength = end - start,\n pivotPoint = Math.floor(sliceLength / 2),\n pivotIndex = this.elements[pivotPoint * 2]\n\n while (sliceLength > 1) {\n if (pivotIndex < index) {\n start = pivotPoint\n }\n\n if (pivotIndex > index) {\n end = pivotPoint\n }\n\n if (pivotIndex == index) {\n break\n }\n\n sliceLength = end - start\n pivotPoint = start + Math.floor(sliceLength / 2)\n pivotIndex = this.elements[pivotPoint * 2]\n }\n\n if (pivotIndex == index) {\n return pivotPoint * 2\n }\n\n if (pivotIndex > index) {\n return pivotPoint * 2\n }\n\n if (pivotIndex < index) {\n return (pivotPoint + 1) * 2\n }\n}\n\n/**\n * Inserts an element at an index within the vector.\n *\n * Does not allow duplicates, will throw an error if there is already an entry\n * for this index.\n *\n * @param {Number} insertIdx - The index at which the element should be inserted.\n * @param {Number} val - The value to be inserted into the vector.\n */\nlunr.Vector.prototype.insert = function (insertIdx, val) {\n this.upsert(insertIdx, val, function () {\n throw \"duplicate index\"\n })\n}\n\n/**\n * Inserts or updates an existing index within the vector.\n *\n * @param {Number} insertIdx - The index at which the element should be inserted.\n * @param {Number} val - The value to be inserted into the vector.\n * @param {function} fn - A function that is called for updates, the existing value and the\n * requested value are passed as arguments\n */\nlunr.Vector.prototype.upsert = function (insertIdx, val, fn) {\n this._magnitude = 0\n var position = this.positionForIndex(insertIdx)\n\n if (this.elements[position] == insertIdx) {\n this.elements[position + 1] = fn(this.elements[position + 1], val)\n } else {\n this.elements.splice(position, 0, insertIdx, val)\n }\n}\n\n/**\n * Calculates the magnitude of this vector.\n *\n * @returns {Number}\n */\nlunr.Vector.prototype.magnitude = function () {\n if (this._magnitude) return this._magnitude\n\n var sumOfSquares = 0,\n elementsLength = this.elements.length\n\n for (var i = 1; i < elementsLength; i += 2) {\n var val = this.elements[i]\n sumOfSquares += val * val\n }\n\n return this._magnitude = Math.sqrt(sumOfSquares)\n}\n\n/**\n * Calculates the dot product of this vector and another vector.\n *\n * @param {lunr.Vector} otherVector - The vector to compute the dot product with.\n * @returns {Number}\n */\nlunr.Vector.prototype.dot = function (otherVector) {\n var dotProduct = 0,\n a = this.elements, b = otherVector.elements,\n aLen = a.length, bLen = b.length,\n aVal = 0, bVal = 0,\n i = 0, j = 0\n\n while (i < aLen && j < bLen) {\n aVal = a[i], bVal = b[j]\n if (aVal < bVal) {\n i += 2\n } else if (aVal > bVal) {\n j += 2\n } else if (aVal == bVal) {\n dotProduct += a[i + 1] * b[j + 1]\n i += 2\n j += 2\n }\n }\n\n return dotProduct\n}\n\n/**\n * Calculates the similarity between this vector and another vector.\n *\n * @param {lunr.Vector} otherVector - The other vector to calculate the\n * similarity with.\n * @returns 
{Number}\n */\nlunr.Vector.prototype.similarity = function (otherVector) {\n return this.dot(otherVector) / this.magnitude() || 0\n}\n\n/**\n * Converts the vector to an array of the elements within the vector.\n *\n * @returns {Number[]}\n */\nlunr.Vector.prototype.toArray = function () {\n var output = new Array (this.elements.length / 2)\n\n for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) {\n output[j] = this.elements[i]\n }\n\n return output\n}\n\n/**\n * A JSON serializable representation of the vector.\n *\n * @returns {Number[]}\n */\nlunr.Vector.prototype.toJSON = function () {\n return this.elements\n}\n/* eslint-disable */\n/*!\n * lunr.stemmer\n * Copyright (C) 2019 Oliver Nightingale\n * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt\n */\n\n/**\n * lunr.stemmer is an english language stemmer, this is a JavaScript\n * implementation of the PorterStemmer taken from http://tartarus.org/~martin\n *\n * @static\n * @implements {lunr.PipelineFunction}\n * @param {lunr.Token} token - The string to stem\n * @returns {lunr.Token}\n * @see {@link lunr.Pipeline}\n * @function\n */\nlunr.stemmer = (function(){\n var step2list = {\n \"ational\" : \"ate\",\n \"tional\" : \"tion\",\n \"enci\" : \"ence\",\n \"anci\" : \"ance\",\n \"izer\" : \"ize\",\n \"bli\" : \"ble\",\n \"alli\" : \"al\",\n \"entli\" : \"ent\",\n \"eli\" : \"e\",\n \"ousli\" : \"ous\",\n \"ization\" : \"ize\",\n \"ation\" : \"ate\",\n \"ator\" : \"ate\",\n \"alism\" : \"al\",\n \"iveness\" : \"ive\",\n \"fulness\" : \"ful\",\n \"ousness\" : \"ous\",\n \"aliti\" : \"al\",\n \"iviti\" : \"ive\",\n \"biliti\" : \"ble\",\n \"logi\" : \"log\"\n },\n\n step3list = {\n \"icate\" : \"ic\",\n \"ative\" : \"\",\n \"alize\" : \"al\",\n \"iciti\" : \"ic\",\n \"ical\" : \"ic\",\n \"ful\" : \"\",\n \"ness\" : \"\"\n },\n\n c = \"[^aeiou]\", // consonant\n v = \"[aeiouy]\", // vowel\n C = c + \"[^aeiouy]*\", // consonant sequence\n V = v + \"[aeiou]*\", // vowel sequence\n\n mgr0 = \"^(\" + C + \")?\" + V + C, // [C]VC... is m>0\n meq1 = \"^(\" + C + \")?\" + V + C + \"(\" + V + \")?$\", // [C]VC[V] is m=1\n mgr1 = \"^(\" + C + \")?\" + V + C + V + C, // [C]VCVC... 
is m>1\n s_v = \"^(\" + C + \")?\" + v; // vowel in stem\n\n var re_mgr0 = new RegExp(mgr0);\n var re_mgr1 = new RegExp(mgr1);\n var re_meq1 = new RegExp(meq1);\n var re_s_v = new RegExp(s_v);\n\n var re_1a = /^(.+?)(ss|i)es$/;\n var re2_1a = /^(.+?)([^s])s$/;\n var re_1b = /^(.+?)eed$/;\n var re2_1b = /^(.+?)(ed|ing)$/;\n var re_1b_2 = /.$/;\n var re2_1b_2 = /(at|bl|iz)$/;\n var re3_1b_2 = new RegExp(\"([^aeiouylsz])\\\\1$\");\n var re4_1b_2 = new RegExp(\"^\" + C + v + \"[^aeiouwxy]$\");\n\n var re_1c = /^(.+?[^aeiou])y$/;\n var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;\n\n var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;\n\n var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;\n var re2_4 = /^(.+?)(s|t)(ion)$/;\n\n var re_5 = /^(.+?)e$/;\n var re_5_1 = /ll$/;\n var re3_5 = new RegExp(\"^\" + C + v + \"[^aeiouwxy]$\");\n\n var porterStemmer = function porterStemmer(w) {\n var stem,\n suffix,\n firstch,\n re,\n re2,\n re3,\n re4;\n\n if (w.length < 3) { return w; }\n\n firstch = w.substr(0,1);\n if (firstch == \"y\") {\n w = firstch.toUpperCase() + w.substr(1);\n }\n\n // Step 1a\n re = re_1a\n re2 = re2_1a;\n\n if (re.test(w)) { w = w.replace(re,\"$1$2\"); }\n else if (re2.test(w)) { w = w.replace(re2,\"$1$2\"); }\n\n // Step 1b\n re = re_1b;\n re2 = re2_1b;\n if (re.test(w)) {\n var fp = re.exec(w);\n re = re_mgr0;\n if (re.test(fp[1])) {\n re = re_1b_2;\n w = w.replace(re,\"\");\n }\n } else if (re2.test(w)) {\n var fp = re2.exec(w);\n stem = fp[1];\n re2 = re_s_v;\n if (re2.test(stem)) {\n w = stem;\n re2 = re2_1b_2;\n re3 = re3_1b_2;\n re4 = re4_1b_2;\n if (re2.test(w)) { w = w + \"e\"; }\n else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,\"\"); }\n else if (re4.test(w)) { w = w + \"e\"; }\n }\n }\n\n // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say)\n re = re_1c;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n w = stem + \"i\";\n }\n\n // Step 2\n re = re_2;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n suffix = fp[2];\n re = re_mgr0;\n if (re.test(stem)) {\n w = stem + step2list[suffix];\n }\n }\n\n // Step 3\n re = re_3;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n suffix = fp[2];\n re = re_mgr0;\n if (re.test(stem)) {\n w = stem + step3list[suffix];\n }\n }\n\n // Step 4\n re = re_4;\n re2 = re2_4;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n re = re_mgr1;\n if (re.test(stem)) {\n w = stem;\n }\n } else if (re2.test(w)) {\n var fp = re2.exec(w);\n stem = fp[1] + fp[2];\n re2 = re_mgr1;\n if (re2.test(stem)) {\n w = stem;\n }\n }\n\n // Step 5\n re = re_5;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n re = re_mgr1;\n re2 = re_meq1;\n re3 = re3_5;\n if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) {\n w = stem;\n }\n }\n\n re = re_5_1;\n re2 = re_mgr1;\n if (re.test(w) && re2.test(w)) {\n re = re_1b_2;\n w = w.replace(re,\"\");\n }\n\n // and turn initial Y back to y\n\n if (firstch == \"y\") {\n w = firstch.toLowerCase() + w.substr(1);\n }\n\n return w;\n };\n\n return function (token) {\n return token.update(porterStemmer);\n }\n})();\n\nlunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer')\n/*!\n * lunr.stopWordFilter\n * Copyright (C) 2019 Oliver Nightingale\n */\n\n/**\n * lunr.generateStopWordFilter builds a stopWordFilter 
function from the provided\n * list of stop words.\n *\n * The built in lunr.stopWordFilter is built using this generator and can be used\n * to generate custom stopWordFilters for applications or non English languages.\n *\n * @function\n * @param {Array} token The token to pass through the filter\n * @returns {lunr.PipelineFunction}\n * @see lunr.Pipeline\n * @see lunr.stopWordFilter\n */\nlunr.generateStopWordFilter = function (stopWords) {\n var words = stopWords.reduce(function (memo, stopWord) {\n memo[stopWord] = stopWord\n return memo\n }, {})\n\n return function (token) {\n if (token && words[token.toString()] !== token.toString()) return token\n }\n}\n\n/**\n * lunr.stopWordFilter is an English language stop word list filter, any words\n * contained in the list will not be passed through the filter.\n *\n * This is intended to be used in the Pipeline. If the token does not pass the\n * filter then undefined will be returned.\n *\n * @function\n * @implements {lunr.PipelineFunction}\n * @params {lunr.Token} token - A token to check for being a stop word.\n * @returns {lunr.Token}\n * @see {@link lunr.Pipeline}\n */\nlunr.stopWordFilter = lunr.generateStopWordFilter([\n 'a',\n 'able',\n 'about',\n 'across',\n 'after',\n 'all',\n 'almost',\n 'also',\n 'am',\n 'among',\n 'an',\n 'and',\n 'any',\n 'are',\n 'as',\n 'at',\n 'be',\n 'because',\n 'been',\n 'but',\n 'by',\n 'can',\n 'cannot',\n 'could',\n 'dear',\n 'did',\n 'do',\n 'does',\n 'either',\n 'else',\n 'ever',\n 'every',\n 'for',\n 'from',\n 'get',\n 'got',\n 'had',\n 'has',\n 'have',\n 'he',\n 'her',\n 'hers',\n 'him',\n 'his',\n 'how',\n 'however',\n 'i',\n 'if',\n 'in',\n 'into',\n 'is',\n 'it',\n 'its',\n 'just',\n 'least',\n 'let',\n 'like',\n 'likely',\n 'may',\n 'me',\n 'might',\n 'most',\n 'must',\n 'my',\n 'neither',\n 'no',\n 'nor',\n 'not',\n 'of',\n 'off',\n 'often',\n 'on',\n 'only',\n 'or',\n 'other',\n 'our',\n 'own',\n 'rather',\n 'said',\n 'say',\n 'says',\n 'she',\n 'should',\n 'since',\n 'so',\n 'some',\n 'than',\n 'that',\n 'the',\n 'their',\n 'them',\n 'then',\n 'there',\n 'these',\n 'they',\n 'this',\n 'tis',\n 'to',\n 'too',\n 'twas',\n 'us',\n 'wants',\n 'was',\n 'we',\n 'were',\n 'what',\n 'when',\n 'where',\n 'which',\n 'while',\n 'who',\n 'whom',\n 'why',\n 'will',\n 'with',\n 'would',\n 'yet',\n 'you',\n 'your'\n])\n\nlunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter')\n/*!\n * lunr.trimmer\n * Copyright (C) 2019 Oliver Nightingale\n */\n\n/**\n * lunr.trimmer is a pipeline function for trimming non word\n * characters from the beginning and end of tokens before they\n * enter the index.\n *\n * This implementation may not work correctly for non latin\n * characters and should either be removed or adapted for use\n * with languages with non-latin characters.\n *\n * @static\n * @implements {lunr.PipelineFunction}\n * @param {lunr.Token} token The token to pass through the filter\n * @returns {lunr.Token}\n * @see lunr.Pipeline\n */\nlunr.trimmer = function (token) {\n return token.update(function (s) {\n return s.replace(/^\\W+/, '').replace(/\\W+$/, '')\n })\n}\n\nlunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer')\n/*!\n * lunr.TokenSet\n * Copyright (C) 2019 Oliver Nightingale\n */\n\n/**\n * A token set is used to store the unique list of all tokens\n * within an index. 
Token sets are also used to represent an\n * incoming query to the index, this query token set and index\n * token set are then intersected to find which tokens to look\n * up in the inverted index.\n *\n * A token set can hold multiple tokens, as in the case of the\n * index token set, or it can hold a single token as in the\n * case of a simple query token set.\n *\n * Additionally token sets are used to perform wildcard matching.\n * Leading, contained and trailing wildcards are supported, and\n * from this edit distance matching can also be provided.\n *\n * Token sets are implemented as a minimal finite state automata,\n * where both common prefixes and suffixes are shared between tokens.\n * This helps to reduce the space used for storing the token set.\n *\n * @constructor\n */\nlunr.TokenSet = function () {\n this.final = false\n this.edges = {}\n this.id = lunr.TokenSet._nextId\n lunr.TokenSet._nextId += 1\n}\n\n/**\n * Keeps track of the next, auto increment, identifier to assign\n * to a new tokenSet.\n *\n * TokenSets require a unique identifier to be correctly minimised.\n *\n * @private\n */\nlunr.TokenSet._nextId = 1\n\n/**\n * Creates a TokenSet instance from the given sorted array of words.\n *\n * @param {String[]} arr - A sorted array of strings to create the set from.\n * @returns {lunr.TokenSet}\n * @throws Will throw an error if the input array is not sorted.\n */\nlunr.TokenSet.fromArray = function (arr) {\n var builder = new lunr.TokenSet.Builder\n\n for (var i = 0, len = arr.length; i < len; i++) {\n builder.insert(arr[i])\n }\n\n builder.finish()\n return builder.root\n}\n\n/**\n * Creates a token set from a query clause.\n *\n * @private\n * @param {Object} clause - A single clause from lunr.Query.\n * @param {string} clause.term - The query clause term.\n * @param {number} [clause.editDistance] - The optional edit distance for the term.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.fromClause = function (clause) {\n if ('editDistance' in clause) {\n return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance)\n } else {\n return lunr.TokenSet.fromString(clause.term)\n }\n}\n\n/**\n * Creates a token set representing a single string with a specified\n * edit distance.\n *\n * Insertions, deletions, substitutions and transpositions are each\n * treated as an edit distance of 1.\n *\n * Increasing the allowed edit distance will have a dramatic impact\n * on the performance of both creating and intersecting these TokenSets.\n * It is advised to keep the edit distance less than 3.\n *\n * @param {string} str - The string to create the token set from.\n * @param {number} editDistance - The allowed edit distance to match.\n * @returns {lunr.Vector}\n */\nlunr.TokenSet.fromFuzzyString = function (str, editDistance) {\n var root = new lunr.TokenSet\n\n var stack = [{\n node: root,\n editsRemaining: editDistance,\n str: str\n }]\n\n while (stack.length) {\n var frame = stack.pop()\n\n // no edit\n if (frame.str.length > 0) {\n var char = frame.str.charAt(0),\n noEditNode\n\n if (char in frame.node.edges) {\n noEditNode = frame.node.edges[char]\n } else {\n noEditNode = new lunr.TokenSet\n frame.node.edges[char] = noEditNode\n }\n\n if (frame.str.length == 1) {\n noEditNode.final = true\n }\n\n stack.push({\n node: noEditNode,\n editsRemaining: frame.editsRemaining,\n str: frame.str.slice(1)\n })\n }\n\n if (frame.editsRemaining == 0) {\n continue\n }\n\n // insertion\n if (\"*\" in frame.node.edges) {\n var insertionNode = frame.node.edges[\"*\"]\n } else {\n 
var insertionNode = new lunr.TokenSet\n frame.node.edges[\"*\"] = insertionNode\n }\n\n if (frame.str.length == 0) {\n insertionNode.final = true\n }\n\n stack.push({\n node: insertionNode,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str\n })\n\n // deletion\n // can only do a deletion if we have enough edits remaining\n // and if there are characters left to delete in the string\n if (frame.str.length > 1) {\n stack.push({\n node: frame.node,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str.slice(1)\n })\n }\n\n // deletion\n // just removing the last character from the str\n if (frame.str.length == 1) {\n frame.node.final = true\n }\n\n // substitution\n // can only do a substitution if we have enough edits remaining\n // and if there are characters left to substitute\n if (frame.str.length >= 1) {\n if (\"*\" in frame.node.edges) {\n var substitutionNode = frame.node.edges[\"*\"]\n } else {\n var substitutionNode = new lunr.TokenSet\n frame.node.edges[\"*\"] = substitutionNode\n }\n\n if (frame.str.length == 1) {\n substitutionNode.final = true\n }\n\n stack.push({\n node: substitutionNode,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str.slice(1)\n })\n }\n\n // transposition\n // can only do a transposition if there are edits remaining\n // and there are enough characters to transpose\n if (frame.str.length > 1) {\n var charA = frame.str.charAt(0),\n charB = frame.str.charAt(1),\n transposeNode\n\n if (charB in frame.node.edges) {\n transposeNode = frame.node.edges[charB]\n } else {\n transposeNode = new lunr.TokenSet\n frame.node.edges[charB] = transposeNode\n }\n\n if (frame.str.length == 1) {\n transposeNode.final = true\n }\n\n stack.push({\n node: transposeNode,\n editsRemaining: frame.editsRemaining - 1,\n str: charA + frame.str.slice(2)\n })\n }\n }\n\n return root\n}\n\n/**\n * Creates a TokenSet from a string.\n *\n * The string may contain one or more wildcard characters (*)\n * that will allow wildcard matching when intersecting with\n * another TokenSet.\n *\n * @param {string} str - The string to create a TokenSet from.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.fromString = function (str) {\n var node = new lunr.TokenSet,\n root = node\n\n /*\n * Iterates through all characters within the passed string\n * appending a node for each character.\n *\n * When a wildcard character is found then a self\n * referencing edge is introduced to continually match\n * any number of any characters.\n */\n for (var i = 0, len = str.length; i < len; i++) {\n var char = str[i],\n final = (i == len - 1)\n\n if (char == \"*\") {\n node.edges[char] = node\n node.final = final\n\n } else {\n var next = new lunr.TokenSet\n next.final = final\n\n node.edges[char] = next\n node = next\n }\n }\n\n return root\n}\n\n/**\n * Converts this TokenSet into an array of strings\n * contained within the TokenSet.\n *\n * This is not intended to be used on a TokenSet that\n * contains wildcards, in these cases the results are\n * undefined and are likely to cause an infinite loop.\n *\n * @returns {string[]}\n */\nlunr.TokenSet.prototype.toArray = function () {\n var words = []\n\n var stack = [{\n prefix: \"\",\n node: this\n }]\n\n while (stack.length) {\n var frame = stack.pop(),\n edges = Object.keys(frame.node.edges),\n len = edges.length\n\n if (frame.node.final) {\n /* In Safari, at this point the prefix is sometimes corrupted, see:\n * https://github.com/olivernn/lunr.js/issues/279 Calling any\n * String.prototype method forces Safari to \"cast\" this 
string to what\n * it's supposed to be, fixing the bug. */\n frame.prefix.charAt(0)\n words.push(frame.prefix)\n }\n\n for (var i = 0; i < len; i++) {\n var edge = edges[i]\n\n stack.push({\n prefix: frame.prefix.concat(edge),\n node: frame.node.edges[edge]\n })\n }\n }\n\n return words\n}\n\n/**\n * Generates a string representation of a TokenSet.\n *\n * This is intended to allow TokenSets to be used as keys\n * in objects, largely to aid the construction and minimisation\n * of a TokenSet. As such it is not designed to be a human\n * friendly representation of the TokenSet.\n *\n * @returns {string}\n */\nlunr.TokenSet.prototype.toString = function () {\n // NOTE: Using Object.keys here as this.edges is very likely\n // to enter 'hash-mode' with many keys being added\n //\n // avoiding a for-in loop here as it leads to the function\n // being de-optimised (at least in V8). From some simple\n // benchmarks the performance is comparable, but allowing\n // V8 to optimize may mean easy performance wins in the future.\n\n if (this._str) {\n return this._str\n }\n\n var str = this.final ? '1' : '0',\n labels = Object.keys(this.edges).sort(),\n len = labels.length\n\n for (var i = 0; i < len; i++) {\n var label = labels[i],\n node = this.edges[label]\n\n str = str + label + node.id\n }\n\n return str\n}\n\n/**\n * Returns a new TokenSet that is the intersection of\n * this TokenSet and the passed TokenSet.\n *\n * This intersection will take into account any wildcards\n * contained within the TokenSet.\n *\n * @param {lunr.TokenSet} b - An other TokenSet to intersect with.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.prototype.intersect = function (b) {\n var output = new lunr.TokenSet,\n frame = undefined\n\n var stack = [{\n qNode: b,\n output: output,\n node: this\n }]\n\n while (stack.length) {\n frame = stack.pop()\n\n // NOTE: As with the #toString method, we are using\n // Object.keys and a for loop instead of a for-in loop\n // as both of these objects enter 'hash' mode, causing\n // the function to be de-optimised in V8\n var qEdges = Object.keys(frame.qNode.edges),\n qLen = qEdges.length,\n nEdges = Object.keys(frame.node.edges),\n nLen = nEdges.length\n\n for (var q = 0; q < qLen; q++) {\n var qEdge = qEdges[q]\n\n for (var n = 0; n < nLen; n++) {\n var nEdge = nEdges[n]\n\n if (nEdge == qEdge || qEdge == '*') {\n var node = frame.node.edges[nEdge],\n qNode = frame.qNode.edges[qEdge],\n final = node.final && qNode.final,\n next = undefined\n\n if (nEdge in frame.output.edges) {\n // an edge already exists for this character\n // no need to create a new node, just set the finality\n // bit unless this node is already final\n next = frame.output.edges[nEdge]\n next.final = next.final || final\n\n } else {\n // no edge exists yet, must create one\n // set the finality bit and insert it\n // into the output\n next = new lunr.TokenSet\n next.final = final\n frame.output.edges[nEdge] = next\n }\n\n stack.push({\n qNode: qNode,\n output: next,\n node: node\n })\n }\n }\n }\n }\n\n return output\n}\nlunr.TokenSet.Builder = function () {\n this.previousWord = \"\"\n this.root = new lunr.TokenSet\n this.uncheckedNodes = []\n this.minimizedNodes = {}\n}\n\nlunr.TokenSet.Builder.prototype.insert = function (word) {\n var node,\n commonPrefix = 0\n\n if (word < this.previousWord) {\n throw new Error (\"Out of order word insertion\")\n }\n\n for (var i = 0; i < word.length && i < this.previousWord.length; i++) {\n if (word[i] != this.previousWord[i]) break\n commonPrefix++\n }\n\n 
this.minimize(commonPrefix)\n\n if (this.uncheckedNodes.length == 0) {\n node = this.root\n } else {\n node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child\n }\n\n for (var i = commonPrefix; i < word.length; i++) {\n var nextNode = new lunr.TokenSet,\n char = word[i]\n\n node.edges[char] = nextNode\n\n this.uncheckedNodes.push({\n parent: node,\n char: char,\n child: nextNode\n })\n\n node = nextNode\n }\n\n node.final = true\n this.previousWord = word\n}\n\nlunr.TokenSet.Builder.prototype.finish = function () {\n this.minimize(0)\n}\n\nlunr.TokenSet.Builder.prototype.minimize = function (downTo) {\n for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) {\n var node = this.uncheckedNodes[i],\n childKey = node.child.toString()\n\n if (childKey in this.minimizedNodes) {\n node.parent.edges[node.char] = this.minimizedNodes[childKey]\n } else {\n // Cache the key for this node since\n // we know it can't change anymore\n node.child._str = childKey\n\n this.minimizedNodes[childKey] = node.child\n }\n\n this.uncheckedNodes.pop()\n }\n}\n/*!\n * lunr.Index\n * Copyright (C) 2019 Oliver Nightingale\n */\n\n/**\n * An index contains the built index of all documents and provides a query interface\n * to the index.\n *\n * Usually instances of lunr.Index will not be created using this constructor, instead\n * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be\n * used to load previously built and serialized indexes.\n *\n * @constructor\n * @param {Object} attrs - The attributes of the built search index.\n * @param {Object} attrs.invertedIndex - An index of term/field to document reference.\n * @param {Object} attrs.fieldVectors - Field vectors\n * @param {lunr.TokenSet} attrs.tokenSet - An set of all corpus tokens.\n * @param {string[]} attrs.fields - The names of indexed document fields.\n * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms.\n */\nlunr.Index = function (attrs) {\n this.invertedIndex = attrs.invertedIndex\n this.fieldVectors = attrs.fieldVectors\n this.tokenSet = attrs.tokenSet\n this.fields = attrs.fields\n this.pipeline = attrs.pipeline\n}\n\n/**\n * A result contains details of a document matching a search query.\n * @typedef {Object} lunr.Index~Result\n * @property {string} ref - The reference of the document this result represents.\n * @property {number} score - A number between 0 and 1 representing how similar this document is to the query.\n * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match.\n */\n\n/**\n * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple\n * query language which itself is parsed into an instance of lunr.Query.\n *\n * For programmatically building queries it is advised to directly use lunr.Query, the query language\n * is best used for human entered text rather than program generated text.\n *\n * At its simplest queries can just be a single term, e.g. `hello`, multiple terms are also supported\n * and will be combined with OR, e.g `hello world` will match documents that contain either 'hello'\n * or 'world', though those that contain both will rank higher in the results.\n *\n * Wildcards can be included in terms to match one or more unspecified characters, these wildcards can\n * be inserted anywhere within the term, and more than one wildcard can exist in a single term. 
Adding\n * wildcards will increase the number of documents that will be found but can also have a negative\n * impact on query performance, especially with wildcards at the beginning of a term.\n *\n * Terms can be restricted to specific fields, e.g. `title:hello`, only documents with the term\n * hello in the title field will match this query. Using a field not present in the index will lead\n * to an error being thrown.\n *\n * Modifiers can also be added to terms, lunr supports edit distance and boost modifiers on terms. A term\n * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported\n * to provide fuzzy matching, e.g. 'hello~2' will match documents with hello with an edit distance of 2.\n * Avoid large values for edit distance to improve query performance.\n *\n * Each term also supports a presence modifier. By default a term's presence in document is optional, however\n * this can be changed to either required or prohibited. For a term's presence to be required in a document the\n * term should be prefixed with a '+', e.g. `+foo bar` is a search for documents that must contain 'foo' and\n * optionally contain 'bar'. Conversely a leading '-' sets the terms presence to prohibited, i.e. it must not\n * appear in a document, e.g. `-foo bar` is a search for documents that do not contain 'foo' but may contain 'bar'.\n *\n * To escape special characters the backslash character '\\' can be used, this allows searches to include\n * characters that would normally be considered modifiers, e.g. `foo\\~2` will search for a term \"foo~2\" instead\n * of attempting to apply a boost of 2 to the search term \"foo\".\n *\n * @typedef {string} lunr.Index~QueryString\n * @example Simple single term query\n * hello\n * @example Multiple term query\n * hello world\n * @example term scoped to a field\n * title:hello\n * @example term with a boost of 10\n * hello^10\n * @example term with an edit distance of 2\n * hello~2\n * @example terms with presence modifiers\n * -foo +bar baz\n */\n\n/**\n * Performs a search against the index using lunr query syntax.\n *\n * Results will be returned sorted by their score, the most relevant results\n * will be returned first. 
For details on how the score is calculated, please see\n * the {@link https://lunrjs.com/guides/searching.html#scoring|guide}.\n *\n * For more programmatic querying use lunr.Index#query.\n *\n * @param {lunr.Index~QueryString} queryString - A string containing a lunr query.\n * @throws {lunr.QueryParseError} If the passed query string cannot be parsed.\n * @returns {lunr.Index~Result[]}\n */\nlunr.Index.prototype.search = function (queryString) {\n return this.query(function (query) {\n var parser = new lunr.QueryParser(queryString, query)\n parser.parse()\n })\n}\n\n/**\n * A query builder callback provides a query object to be used to express\n * the query to perform on the index.\n *\n * @callback lunr.Index~queryBuilder\n * @param {lunr.Query} query - The query object to build up.\n * @this lunr.Query\n */\n\n/**\n * Performs a query against the index using the yielded lunr.Query object.\n *\n * If performing programmatic queries against the index, this method is preferred\n * over lunr.Index#search so as to avoid the additional query parsing overhead.\n *\n * A query object is yielded to the supplied function which should be used to\n * express the query to be run against the index.\n *\n * Note that although this function takes a callback parameter it is _not_ an\n * asynchronous operation, the callback is just yielded a query object to be\n * customized.\n *\n * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query.\n * @returns {lunr.Index~Result[]}\n */\nlunr.Index.prototype.query = function (fn) {\n // for each query clause\n // * process terms\n // * expand terms from token set\n // * find matching documents and metadata\n // * get document vectors\n // * score documents\n\n var query = new lunr.Query(this.fields),\n matchingFields = Object.create(null),\n queryVectors = Object.create(null),\n termFieldCache = Object.create(null),\n requiredMatches = Object.create(null),\n prohibitedMatches = Object.create(null)\n\n /*\n * To support field level boosts a query vector is created per\n * field. An empty vector is eagerly created to support negated\n * queries.\n */\n for (var i = 0; i < this.fields.length; i++) {\n queryVectors[this.fields[i]] = new lunr.Vector\n }\n\n fn.call(query, query)\n\n for (var i = 0; i < query.clauses.length; i++) {\n /*\n * Unless the pipeline has been disabled for this term, which is\n * the case for terms with wildcards, we need to pass the clause\n * term through the search pipeline. A pipeline returns an array\n * of processed terms. Pipeline functions may expand the passed\n * term, which means we may end up performing multiple index lookups\n * for a single query term.\n */\n var clause = query.clauses[i],\n terms = null,\n clauseMatches = lunr.Set.complete\n\n if (clause.usePipeline) {\n terms = this.pipeline.runString(clause.term, {\n fields: clause.fields\n })\n } else {\n terms = [clause.term]\n }\n\n for (var m = 0; m < terms.length; m++) {\n var term = terms[m]\n\n /*\n * Each term returned from the pipeline needs to use the same query\n * clause object, e.g. the same boost and or edit distance. 
The\n * simplest way to do this is to re-use the clause object but mutate\n * its term property.\n */\n clause.term = term\n\n /*\n * From the term in the clause we create a token set which will then\n * be used to intersect the indexes token set to get a list of terms\n * to lookup in the inverted index\n */\n var termTokenSet = lunr.TokenSet.fromClause(clause),\n expandedTerms = this.tokenSet.intersect(termTokenSet).toArray()\n\n /*\n * If a term marked as required does not exist in the tokenSet it is\n * impossible for the search to return any matches. We set all the field\n * scoped required matches set to empty and stop examining any further\n * clauses.\n */\n if (expandedTerms.length === 0 && clause.presence === lunr.Query.presence.REQUIRED) {\n for (var k = 0; k < clause.fields.length; k++) {\n var field = clause.fields[k]\n requiredMatches[field] = lunr.Set.empty\n }\n\n break\n }\n\n for (var j = 0; j < expandedTerms.length; j++) {\n /*\n * For each term get the posting and termIndex, this is required for\n * building the query vector.\n */\n var expandedTerm = expandedTerms[j],\n posting = this.invertedIndex[expandedTerm],\n termIndex = posting._index\n\n for (var k = 0; k < clause.fields.length; k++) {\n /*\n * For each field that this query term is scoped by (by default\n * all fields are in scope) we need to get all the document refs\n * that have this term in that field.\n *\n * The posting is the entry in the invertedIndex for the matching\n * term from above.\n */\n var field = clause.fields[k],\n fieldPosting = posting[field],\n matchingDocumentRefs = Object.keys(fieldPosting),\n termField = expandedTerm + \"/\" + field,\n matchingDocumentsSet = new lunr.Set(matchingDocumentRefs)\n\n /*\n * if the presence of this term is required ensure that the matching\n * documents are added to the set of required matches for this clause.\n *\n */\n if (clause.presence == lunr.Query.presence.REQUIRED) {\n clauseMatches = clauseMatches.union(matchingDocumentsSet)\n\n if (requiredMatches[field] === undefined) {\n requiredMatches[field] = lunr.Set.complete\n }\n }\n\n /*\n * if the presence of this term is prohibited ensure that the matching\n * documents are added to the set of prohibited matches for this field,\n * creating that set if it does not yet exist.\n */\n if (clause.presence == lunr.Query.presence.PROHIBITED) {\n if (prohibitedMatches[field] === undefined) {\n prohibitedMatches[field] = lunr.Set.empty\n }\n\n prohibitedMatches[field] = prohibitedMatches[field].union(matchingDocumentsSet)\n\n /*\n * Prohibited matches should not be part of the query vector used for\n * similarity scoring and no metadata should be extracted so we continue\n * to the next field\n */\n continue\n }\n\n /*\n * The query field vector is populated using the termIndex found for\n * the term and a unit value with the appropriate boost applied.\n * Using upsert because there could already be an entry in the vector\n * for the term we are working with. 
In that case we just add the scores\n * together.\n */\n queryVectors[field].upsert(termIndex, clause.boost, function (a, b) { return a + b })\n\n /**\n * If we've already seen this term, field combo then we've already collected\n * the matching documents and metadata, no need to go through all that again\n */\n if (termFieldCache[termField]) {\n continue\n }\n\n for (var l = 0; l < matchingDocumentRefs.length; l++) {\n /*\n * All metadata for this term/field/document triple\n * are then extracted and collected into an instance\n * of lunr.MatchData ready to be returned in the query\n * results\n */\n var matchingDocumentRef = matchingDocumentRefs[l],\n matchingFieldRef = new lunr.FieldRef (matchingDocumentRef, field),\n metadata = fieldPosting[matchingDocumentRef],\n fieldMatch\n\n if ((fieldMatch = matchingFields[matchingFieldRef]) === undefined) {\n matchingFields[matchingFieldRef] = new lunr.MatchData (expandedTerm, field, metadata)\n } else {\n fieldMatch.add(expandedTerm, field, metadata)\n }\n\n }\n\n termFieldCache[termField] = true\n }\n }\n }\n\n /**\n * If the presence was required we need to update the requiredMatches field sets.\n * We do this after all fields for the term have collected their matches because\n * the clause terms presence is required in _any_ of the fields not _all_ of the\n * fields.\n */\n if (clause.presence === lunr.Query.presence.REQUIRED) {\n for (var k = 0; k < clause.fields.length; k++) {\n var field = clause.fields[k]\n requiredMatches[field] = requiredMatches[field].intersect(clauseMatches)\n }\n }\n }\n\n /**\n * Need to combine the field scoped required and prohibited\n * matching documents into a global set of required and prohibited\n * matches\n */\n var allRequiredMatches = lunr.Set.complete,\n allProhibitedMatches = lunr.Set.empty\n\n for (var i = 0; i < this.fields.length; i++) {\n var field = this.fields[i]\n\n if (requiredMatches[field]) {\n allRequiredMatches = allRequiredMatches.intersect(requiredMatches[field])\n }\n\n if (prohibitedMatches[field]) {\n allProhibitedMatches = allProhibitedMatches.union(prohibitedMatches[field])\n }\n }\n\n var matchingFieldRefs = Object.keys(matchingFields),\n results = [],\n matches = Object.create(null)\n\n /*\n * If the query is negated (contains only prohibited terms)\n * we need to get _all_ fieldRefs currently existing in the\n * index. This is only done when we know that the query is\n * entirely prohibited terms to avoid any cost of getting all\n * fieldRefs unnecessarily.\n *\n * Additionally, blank MatchData must be created to correctly\n * populate the results.\n */\n if (query.isNegated()) {\n matchingFieldRefs = Object.keys(this.fieldVectors)\n\n for (var i = 0; i < matchingFieldRefs.length; i++) {\n var matchingFieldRef = matchingFieldRefs[i]\n var fieldRef = lunr.FieldRef.fromString(matchingFieldRef)\n matchingFields[matchingFieldRef] = new lunr.MatchData\n }\n }\n\n for (var i = 0; i < matchingFieldRefs.length; i++) {\n /*\n * Currently we have document fields that match the query, but we\n * need to return documents. 
The matchData and scores are combined\n * from multiple fields belonging to the same document.\n *\n * Scores are calculated by field, using the query vectors created\n * above, and combined into a final document score using addition.\n */\n var fieldRef = lunr.FieldRef.fromString(matchingFieldRefs[i]),\n docRef = fieldRef.docRef\n\n if (!allRequiredMatches.contains(docRef)) {\n continue\n }\n\n if (allProhibitedMatches.contains(docRef)) {\n continue\n }\n\n var fieldVector = this.fieldVectors[fieldRef],\n score = queryVectors[fieldRef.fieldName].similarity(fieldVector),\n docMatch\n\n if ((docMatch = matches[docRef]) !== undefined) {\n docMatch.score += score\n docMatch.matchData.combine(matchingFields[fieldRef])\n } else {\n var match = {\n ref: docRef,\n score: score,\n matchData: matchingFields[fieldRef]\n }\n matches[docRef] = match\n results.push(match)\n }\n }\n\n /*\n * Sort the results objects by score, highest first.\n */\n return results.sort(function (a, b) {\n return b.score - a.score\n })\n}\n\n/**\n * Prepares the index for JSON serialization.\n *\n * The schema for this JSON blob will be described in a\n * separate JSON schema file.\n *\n * @returns {Object}\n */\nlunr.Index.prototype.toJSON = function () {\n var invertedIndex = Object.keys(this.invertedIndex)\n .sort()\n .map(function (term) {\n return [term, this.invertedIndex[term]]\n }, this)\n\n var fieldVectors = Object.keys(this.fieldVectors)\n .map(function (ref) {\n return [ref, this.fieldVectors[ref].toJSON()]\n }, this)\n\n return {\n version: lunr.version,\n fields: this.fields,\n fieldVectors: fieldVectors,\n invertedIndex: invertedIndex,\n pipeline: this.pipeline.toJSON()\n }\n}\n\n/**\n * Loads a previously serialized lunr.Index\n *\n * @param {Object} serializedIndex - A previously serialized lunr.Index\n * @returns {lunr.Index}\n */\nlunr.Index.load = function (serializedIndex) {\n var attrs = {},\n fieldVectors = {},\n serializedVectors = serializedIndex.fieldVectors,\n invertedIndex = Object.create(null),\n serializedInvertedIndex = serializedIndex.invertedIndex,\n tokenSetBuilder = new lunr.TokenSet.Builder,\n pipeline = lunr.Pipeline.load(serializedIndex.pipeline)\n\n if (serializedIndex.version != lunr.version) {\n lunr.utils.warn(\"Version mismatch when loading serialised index. 
Current version of lunr '\" + lunr.version + \"' does not match serialized index '\" + serializedIndex.version + \"'\")\n }\n\n for (var i = 0; i < serializedVectors.length; i++) {\n var tuple = serializedVectors[i],\n ref = tuple[0],\n elements = tuple[1]\n\n fieldVectors[ref] = new lunr.Vector(elements)\n }\n\n for (var i = 0; i < serializedInvertedIndex.length; i++) {\n var tuple = serializedInvertedIndex[i],\n term = tuple[0],\n posting = tuple[1]\n\n tokenSetBuilder.insert(term)\n invertedIndex[term] = posting\n }\n\n tokenSetBuilder.finish()\n\n attrs.fields = serializedIndex.fields\n\n attrs.fieldVectors = fieldVectors\n attrs.invertedIndex = invertedIndex\n attrs.tokenSet = tokenSetBuilder.root\n attrs.pipeline = pipeline\n\n return new lunr.Index(attrs)\n}\n/*!\n * lunr.Builder\n * Copyright (C) 2019 Oliver Nightingale\n */\n\n/**\n * lunr.Builder performs indexing on a set of documents and\n * returns instances of lunr.Index ready for querying.\n *\n * All configuration of the index is done via the builder, the\n * fields to index, the document reference, the text processing\n * pipeline and document scoring parameters are all set on the\n * builder before indexing.\n *\n * @constructor\n * @property {string} _ref - Internal reference to the document reference field.\n * @property {string[]} _fields - Internal reference to the document fields to index.\n * @property {object} invertedIndex - The inverted index maps terms to document fields.\n * @property {object} documentTermFrequencies - Keeps track of document term frequencies.\n * @property {object} documentLengths - Keeps track of the length of documents added to the index.\n * @property {lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing.\n * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing.\n * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index.\n * @property {number} documentCount - Keeps track of the total number of documents indexed.\n * @property {number} _b - A parameter to control field length normalization, setting this to 0 disabled normalization, 1 fully normalizes field lengths, the default value is 0.75.\n * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2.\n * @property {number} termIndex - A counter incremented for each unique term, used to identify a terms position in the vector space.\n * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index.\n */\nlunr.Builder = function () {\n this._ref = \"id\"\n this._fields = Object.create(null)\n this._documents = Object.create(null)\n this.invertedIndex = Object.create(null)\n this.fieldTermFrequencies = {}\n this.fieldLengths = {}\n this.tokenizer = lunr.tokenizer\n this.pipeline = new lunr.Pipeline\n this.searchPipeline = new lunr.Pipeline\n this.documentCount = 0\n this._b = 0.75\n this._k1 = 1.2\n this.termIndex = 0\n this.metadataWhitelist = []\n}\n\n/**\n * Sets the document field used as the document reference. Every document must have this field.\n * The type of this field in the document should be a string, if it is not a string it will be\n * coerced into a string by calling toString.\n *\n * The default ref is 'id'.\n *\n * The ref should _not_ be changed during indexing, it should be set before any documents are\n * added to the index. 
Changing it during indexing can lead to inconsistent results.\n *\n * @param {string} ref - The name of the reference field in the document.\n */\nlunr.Builder.prototype.ref = function (ref) {\n this._ref = ref\n}\n\n/**\n * A function that is used to extract a field from a document.\n *\n * Lunr expects a field to be at the top level of a document, if however the field\n * is deeply nested within a document an extractor function can be used to extract\n * the right field for indexing.\n *\n * @callback fieldExtractor\n * @param {object} doc - The document being added to the index.\n * @returns {?(string|object|object[])} obj - The object that will be indexed for this field.\n * @example Extracting a nested field\n * function (doc) { return doc.nested.field }\n */\n\n/**\n * Adds a field to the list of document fields that will be indexed. Every document being\n * indexed should have this field. Null values for this field in indexed documents will\n * not cause errors but will limit the chance of that document being retrieved by searches.\n *\n * All fields should be added before adding documents to the index. Adding fields after\n * a document has been indexed will have no effect on already indexed documents.\n *\n * Fields can be boosted at build time. This allows terms within that field to have more\n * importance when ranking search results. Use a field boost to specify that matches within\n * one field are more important than other fields.\n *\n * @param {string} fieldName - The name of a field to index in all documents.\n * @param {object} attributes - Optional attributes associated with this field.\n * @param {number} [attributes.boost=1] - Boost applied to all terms within this field.\n * @param {fieldExtractor} [attributes.extractor] - Function to extract a field from a document.\n * @throws {RangeError} fieldName cannot contain unsupported characters '/'\n */\nlunr.Builder.prototype.field = function (fieldName, attributes) {\n if (/\\//.test(fieldName)) {\n throw new RangeError (\"Field '\" + fieldName + \"' contains illegal character '/'\")\n }\n\n this._fields[fieldName] = attributes || {}\n}\n\n/**\n * A parameter to tune the amount of field length normalisation that is applied when\n * calculating relevance scores. A value of 0 will completely disable any normalisation\n * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b\n * will be clamped to the range 0 - 1.\n *\n * @param {number} number - The value to set for this tuning parameter.\n */\nlunr.Builder.prototype.b = function (number) {\n if (number < 0) {\n this._b = 0\n } else if (number > 1) {\n this._b = 1\n } else {\n this._b = number\n }\n}\n\n/**\n * A parameter that controls the speed at which a rise in term frequency results in term\n * frequency saturation. The default value is 1.2. 
Setting this to a higher value will give\n * slower saturation levels, a lower value will result in quicker saturation.\n *\n * @param {number} number - The value to set for this tuning parameter.\n */\nlunr.Builder.prototype.k1 = function (number) {\n this._k1 = number\n}\n\n/**\n * Adds a document to the index.\n *\n * Before adding fields to the index the index should have been fully setup, with the document\n * ref and all fields to index already having been specified.\n *\n * The document must have a field name as specified by the ref (by default this is 'id') and\n * it should have all fields defined for indexing, though null or undefined values will not\n * cause errors.\n *\n * Entire documents can be boosted at build time. Applying a boost to a document indicates that\n * this document should rank higher in search results than other documents.\n *\n * @param {object} doc - The document to add to the index.\n * @param {object} attributes - Optional attributes associated with this document.\n * @param {number} [attributes.boost=1] - Boost applied to all terms within this document.\n */\nlunr.Builder.prototype.add = function (doc, attributes) {\n var docRef = doc[this._ref],\n fields = Object.keys(this._fields)\n\n this._documents[docRef] = attributes || {}\n this.documentCount += 1\n\n for (var i = 0; i < fields.length; i++) {\n var fieldName = fields[i],\n extractor = this._fields[fieldName].extractor,\n field = extractor ? extractor(doc) : doc[fieldName],\n tokens = this.tokenizer(field, {\n fields: [fieldName]\n }),\n terms = this.pipeline.run(tokens),\n fieldRef = new lunr.FieldRef (docRef, fieldName),\n fieldTerms = Object.create(null)\n\n this.fieldTermFrequencies[fieldRef] = fieldTerms\n this.fieldLengths[fieldRef] = 0\n\n // store the length of this field for this document\n this.fieldLengths[fieldRef] += terms.length\n\n // calculate term frequencies for this field\n for (var j = 0; j < terms.length; j++) {\n var term = terms[j]\n\n if (fieldTerms[term] == undefined) {\n fieldTerms[term] = 0\n }\n\n fieldTerms[term] += 1\n\n // add to inverted index\n // create an initial posting if one doesn't exist\n if (this.invertedIndex[term] == undefined) {\n var posting = Object.create(null)\n posting[\"_index\"] = this.termIndex\n this.termIndex += 1\n\n for (var k = 0; k < fields.length; k++) {\n posting[fields[k]] = Object.create(null)\n }\n\n this.invertedIndex[term] = posting\n }\n\n // add an entry for this term/fieldName/docRef to the invertedIndex\n if (this.invertedIndex[term][fieldName][docRef] == undefined) {\n this.invertedIndex[term][fieldName][docRef] = Object.create(null)\n }\n\n // store all whitelisted metadata about this token in the\n // inverted index\n for (var l = 0; l < this.metadataWhitelist.length; l++) {\n var metadataKey = this.metadataWhitelist[l],\n metadata = term.metadata[metadataKey]\n\n if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) {\n this.invertedIndex[term][fieldName][docRef][metadataKey] = []\n }\n\n this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata)\n }\n }\n\n }\n}\n\n/**\n * Calculates the average document length for this index\n *\n * @private\n */\nlunr.Builder.prototype.calculateAverageFieldLengths = function () {\n\n var fieldRefs = Object.keys(this.fieldLengths),\n numberOfFields = fieldRefs.length,\n accumulator = {},\n documentsWithField = {}\n\n for (var i = 0; i < numberOfFields; i++) {\n var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]),\n field = fieldRef.fieldName\n\n 
documentsWithField[field] || (documentsWithField[field] = 0)\n documentsWithField[field] += 1\n\n accumulator[field] || (accumulator[field] = 0)\n accumulator[field] += this.fieldLengths[fieldRef]\n }\n\n var fields = Object.keys(this._fields)\n\n for (var i = 0; i < fields.length; i++) {\n var fieldName = fields[i]\n accumulator[fieldName] = accumulator[fieldName] / documentsWithField[fieldName]\n }\n\n this.averageFieldLength = accumulator\n}\n\n/**\n * Builds a vector space model of every document using lunr.Vector\n *\n * @private\n */\nlunr.Builder.prototype.createFieldVectors = function () {\n var fieldVectors = {},\n fieldRefs = Object.keys(this.fieldTermFrequencies),\n fieldRefsLength = fieldRefs.length,\n termIdfCache = Object.create(null)\n\n for (var i = 0; i < fieldRefsLength; i++) {\n var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]),\n fieldName = fieldRef.fieldName,\n fieldLength = this.fieldLengths[fieldRef],\n fieldVector = new lunr.Vector,\n termFrequencies = this.fieldTermFrequencies[fieldRef],\n terms = Object.keys(termFrequencies),\n termsLength = terms.length\n\n\n var fieldBoost = this._fields[fieldName].boost || 1,\n docBoost = this._documents[fieldRef.docRef].boost || 1\n\n for (var j = 0; j < termsLength; j++) {\n var term = terms[j],\n tf = termFrequencies[term],\n termIndex = this.invertedIndex[term]._index,\n idf, score, scoreWithPrecision\n\n if (termIdfCache[term] === undefined) {\n idf = lunr.idf(this.invertedIndex[term], this.documentCount)\n termIdfCache[term] = idf\n } else {\n idf = termIdfCache[term]\n }\n\n score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (fieldLength / this.averageFieldLength[fieldName])) + tf)\n score *= fieldBoost\n score *= docBoost\n scoreWithPrecision = Math.round(score * 1000) / 1000\n // Converts 1.23456789 to 1.234.\n // Reducing the precision so that the vectors take up less\n // space when serialised. Doing it now so that they behave\n // the same before and after serialisation. Also, this is\n // the fastest approach to reducing a number's precision in\n // JavaScript.\n\n fieldVector.insert(termIndex, scoreWithPrecision)\n }\n\n fieldVectors[fieldRef] = fieldVector\n }\n\n this.fieldVectors = fieldVectors\n}\n\n/**\n * Creates a token set of all tokens in the index using lunr.TokenSet\n *\n * @private\n */\nlunr.Builder.prototype.createTokenSet = function () {\n this.tokenSet = lunr.TokenSet.fromArray(\n Object.keys(this.invertedIndex).sort()\n )\n}\n\n/**\n * Builds the index, creating an instance of lunr.Index.\n *\n * This completes the indexing process and should only be called\n * once all documents have been added to the index.\n *\n * @returns {lunr.Index}\n */\nlunr.Builder.prototype.build = function () {\n this.calculateAverageFieldLengths()\n this.createFieldVectors()\n this.createTokenSet()\n\n return new lunr.Index({\n invertedIndex: this.invertedIndex,\n fieldVectors: this.fieldVectors,\n tokenSet: this.tokenSet,\n fields: Object.keys(this._fields),\n pipeline: this.searchPipeline\n })\n}\n\n/**\n * Applies a plugin to the index builder.\n *\n * A plugin is a function that is called with the index builder as its context.\n * Plugins can be used to customise or extend the behaviour of the index\n * in some way. A plugin is just a function, that encapsulated the custom\n * behaviour that should be applied when building the index.\n *\n * The plugin function will be called with the index builder as its argument, additional\n * arguments can also be passed when calling use. 
The function will be called\n * with the index builder as its context.\n *\n * @param {Function} plugin The plugin to apply.\n */\nlunr.Builder.prototype.use = function (fn) {\n var args = Array.prototype.slice.call(arguments, 1)\n args.unshift(this)\n fn.apply(this, args)\n}\n/**\n * Contains and collects metadata about a matching document.\n * A single instance of lunr.MatchData is returned as part of every\n * lunr.Index~Result.\n *\n * @constructor\n * @param {string} term - The term this match data is associated with\n * @param {string} field - The field in which the term was found\n * @param {object} metadata - The metadata recorded about this term in this field\n * @property {object} metadata - A cloned collection of metadata associated with this document.\n * @see {@link lunr.Index~Result}\n */\nlunr.MatchData = function (term, field, metadata) {\n var clonedMetadata = Object.create(null),\n metadataKeys = Object.keys(metadata || {})\n\n // Cloning the metadata to prevent the original\n // being mutated during match data combination.\n // Metadata is kept in an array within the inverted\n // index so cloning the data can be done with\n // Array#slice\n for (var i = 0; i < metadataKeys.length; i++) {\n var key = metadataKeys[i]\n clonedMetadata[key] = metadata[key].slice()\n }\n\n this.metadata = Object.create(null)\n\n if (term !== undefined) {\n this.metadata[term] = Object.create(null)\n this.metadata[term][field] = clonedMetadata\n }\n}\n\n/**\n * An instance of lunr.MatchData will be created for every term that matches a\n * document. However only one instance is required in a lunr.Index~Result. This\n * method combines metadata from another instance of lunr.MatchData with this\n * objects metadata.\n *\n * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one.\n * @see {@link lunr.Index~Result}\n */\nlunr.MatchData.prototype.combine = function (otherMatchData) {\n var terms = Object.keys(otherMatchData.metadata)\n\n for (var i = 0; i < terms.length; i++) {\n var term = terms[i],\n fields = Object.keys(otherMatchData.metadata[term])\n\n if (this.metadata[term] == undefined) {\n this.metadata[term] = Object.create(null)\n }\n\n for (var j = 0; j < fields.length; j++) {\n var field = fields[j],\n keys = Object.keys(otherMatchData.metadata[term][field])\n\n if (this.metadata[term][field] == undefined) {\n this.metadata[term][field] = Object.create(null)\n }\n\n for (var k = 0; k < keys.length; k++) {\n var key = keys[k]\n\n if (this.metadata[term][field][key] == undefined) {\n this.metadata[term][field][key] = otherMatchData.metadata[term][field][key]\n } else {\n this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key])\n }\n\n }\n }\n }\n}\n\n/**\n * Add metadata for a term/field pair to this instance of match data.\n *\n * @param {string} term - The term this match data is associated with\n * @param {string} field - The field in which the term was found\n * @param {object} metadata - The metadata recorded about this term in this field\n */\nlunr.MatchData.prototype.add = function (term, field, metadata) {\n if (!(term in this.metadata)) {\n this.metadata[term] = Object.create(null)\n this.metadata[term][field] = metadata\n return\n }\n\n if (!(field in this.metadata[term])) {\n this.metadata[term][field] = metadata\n return\n }\n\n var metadataKeys = Object.keys(metadata)\n\n for (var i = 0; i < metadataKeys.length; i++) {\n var key = metadataKeys[i]\n\n if (key in 
this.metadata[term][field]) {\n this.metadata[term][field][key] = this.metadata[term][field][key].concat(metadata[key])\n } else {\n this.metadata[term][field][key] = metadata[key]\n }\n }\n}\n/**\n * A lunr.Query provides a programmatic way of defining queries to be performed\n * against a {@link lunr.Index}.\n *\n * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method\n * so the query object is pre-initialized with the right index fields.\n *\n * @constructor\n * @property {lunr.Query~Clause[]} clauses - An array of query clauses.\n * @property {string[]} allFields - An array of all available fields in a lunr.Index.\n */\nlunr.Query = function (allFields) {\n this.clauses = []\n this.allFields = allFields\n}\n\n/**\n * Constants for indicating what kind of automatic wildcard insertion will be used when constructing a query clause.\n *\n * This allows wildcards to be added to the beginning and end of a term without having to manually do any string\n * concatenation.\n *\n * The wildcard constants can be bitwise combined to select both leading and trailing wildcards.\n *\n * @constant\n * @default\n * @property {number} wildcard.NONE - The term will have no wildcards inserted, this is the default behaviour\n * @property {number} wildcard.LEADING - Prepend the term with a wildcard, unless a leading wildcard already exists\n * @property {number} wildcard.TRAILING - Append a wildcard to the term, unless a trailing wildcard already exists\n * @see lunr.Query~Clause\n * @see lunr.Query#clause\n * @see lunr.Query#term\n * @example query term with trailing wildcard\n * query.term('foo', { wildcard: lunr.Query.wildcard.TRAILING })\n * @example query term with leading and trailing wildcard\n * query.term('foo', {\n * wildcard: lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING\n * })\n */\n\nlunr.Query.wildcard = new String (\"*\")\nlunr.Query.wildcard.NONE = 0\nlunr.Query.wildcard.LEADING = 1\nlunr.Query.wildcard.TRAILING = 2\n\n/**\n * Constants for indicating what kind of presence a term must have in matching documents.\n *\n * @constant\n * @enum {number}\n * @see lunr.Query~Clause\n * @see lunr.Query#clause\n * @see lunr.Query#term\n * @example query term with required presence\n * query.term('foo', { presence: lunr.Query.presence.REQUIRED })\n */\nlunr.Query.presence = {\n /**\n * Term's presence in a document is optional, this is the default value.\n */\n OPTIONAL: 1,\n\n /**\n * Term's presence in a document is required, documents that do not contain\n * this term will not be returned.\n */\n REQUIRED: 2,\n\n /**\n * Term's presence in a document is prohibited, documents that do contain\n * this term will not be returned.\n */\n PROHIBITED: 3\n}\n\n/**\n * A single clause in a {@link lunr.Query} contains a term and details on how to\n * match that term against a {@link lunr.Index}.\n *\n * @typedef {Object} lunr.Query~Clause\n * @property {string[]} fields - The fields in an index this clause should be matched against.\n * @property {number} [boost=1] - Any boost that should be applied when matching this clause.\n * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be.\n * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline.\n * @property {number} [wildcard=lunr.Query.wildcard.NONE] - Whether the term should have wildcards appended or prepended.\n * @property {number} [presence=lunr.Query.presence.OPTIONAL] - The terms presence in any matching 
documents.\n */\n\n/**\n * Adds a {@link lunr.Query~Clause} to this query.\n *\n * Unless the clause contains the fields to be matched all fields will be matched. In addition\n * a default boost of 1 is applied to the clause.\n *\n * @param {lunr.Query~Clause} clause - The clause to add to this query.\n * @see lunr.Query~Clause\n * @returns {lunr.Query}\n */\nlunr.Query.prototype.clause = function (clause) {\n if (!('fields' in clause)) {\n clause.fields = this.allFields\n }\n\n if (!('boost' in clause)) {\n clause.boost = 1\n }\n\n if (!('usePipeline' in clause)) {\n clause.usePipeline = true\n }\n\n if (!('wildcard' in clause)) {\n clause.wildcard = lunr.Query.wildcard.NONE\n }\n\n if ((clause.wildcard & lunr.Query.wildcard.LEADING) && (clause.term.charAt(0) != lunr.Query.wildcard)) {\n clause.term = \"*\" + clause.term\n }\n\n if ((clause.wildcard & lunr.Query.wildcard.TRAILING) && (clause.term.slice(-1) != lunr.Query.wildcard)) {\n clause.term = \"\" + clause.term + \"*\"\n }\n\n if (!('presence' in clause)) {\n clause.presence = lunr.Query.presence.OPTIONAL\n }\n\n this.clauses.push(clause)\n\n return this\n}\n\n/**\n * A negated query is one in which every clause has a presence of\n * prohibited. These queries require some special processing to return\n * the expected results.\n *\n * @returns boolean\n */\nlunr.Query.prototype.isNegated = function () {\n for (var i = 0; i < this.clauses.length; i++) {\n if (this.clauses[i].presence != lunr.Query.presence.PROHIBITED) {\n return false\n }\n }\n\n return true\n}\n\n/**\n * Adds a term to the current query, under the covers this will create a {@link lunr.Query~Clause}\n * to the list of clauses that make up this query.\n *\n * The term is used as is, i.e. no tokenization will be performed by this method. Instead conversion\n * to a token or token-like string should be done before calling this method.\n *\n * The term will be converted to a string by calling `toString`. 
Multiple terms can be passed as an\n * array, each term in the array will share the same options.\n *\n * @param {object|object[]} term - The term(s) to add to the query.\n * @param {object} [options] - Any additional properties to add to the query clause.\n * @returns {lunr.Query}\n * @see lunr.Query#clause\n * @see lunr.Query~Clause\n * @example adding a single term to a query\n * query.term(\"foo\")\n * @example adding a single term to a query and specifying search fields, term boost and automatic trailing wildcard\n * query.term(\"foo\", {\n * fields: [\"title\"],\n * boost: 10,\n * wildcard: lunr.Query.wildcard.TRAILING\n * })\n * @example using lunr.tokenizer to convert a string to tokens before using them as terms\n * query.term(lunr.tokenizer(\"foo bar\"))\n */\nlunr.Query.prototype.term = function (term, options) {\n if (Array.isArray(term)) {\n term.forEach(function (t) { this.term(t, lunr.utils.clone(options)) }, this)\n return this\n }\n\n var clause = options || {}\n clause.term = term.toString()\n\n this.clause(clause)\n\n return this\n}\nlunr.QueryParseError = function (message, start, end) {\n this.name = \"QueryParseError\"\n this.message = message\n this.start = start\n this.end = end\n}\n\nlunr.QueryParseError.prototype = new Error\nlunr.QueryLexer = function (str) {\n this.lexemes = []\n this.str = str\n this.length = str.length\n this.pos = 0\n this.start = 0\n this.escapeCharPositions = []\n}\n\nlunr.QueryLexer.prototype.run = function () {\n var state = lunr.QueryLexer.lexText\n\n while (state) {\n state = state(this)\n }\n}\n\nlunr.QueryLexer.prototype.sliceString = function () {\n var subSlices = [],\n sliceStart = this.start,\n sliceEnd = this.pos\n\n for (var i = 0; i < this.escapeCharPositions.length; i++) {\n sliceEnd = this.escapeCharPositions[i]\n subSlices.push(this.str.slice(sliceStart, sliceEnd))\n sliceStart = sliceEnd + 1\n }\n\n subSlices.push(this.str.slice(sliceStart, this.pos))\n this.escapeCharPositions.length = 0\n\n return subSlices.join('')\n}\n\nlunr.QueryLexer.prototype.emit = function (type) {\n this.lexemes.push({\n type: type,\n str: this.sliceString(),\n start: this.start,\n end: this.pos\n })\n\n this.start = this.pos\n}\n\nlunr.QueryLexer.prototype.escapeCharacter = function () {\n this.escapeCharPositions.push(this.pos - 1)\n this.pos += 1\n}\n\nlunr.QueryLexer.prototype.next = function () {\n if (this.pos >= this.length) {\n return lunr.QueryLexer.EOS\n }\n\n var char = this.str.charAt(this.pos)\n this.pos += 1\n return char\n}\n\nlunr.QueryLexer.prototype.width = function () {\n return this.pos - this.start\n}\n\nlunr.QueryLexer.prototype.ignore = function () {\n if (this.start == this.pos) {\n this.pos += 1\n }\n\n this.start = this.pos\n}\n\nlunr.QueryLexer.prototype.backup = function () {\n this.pos -= 1\n}\n\nlunr.QueryLexer.prototype.acceptDigitRun = function () {\n var char, charCode\n\n do {\n char = this.next()\n charCode = char.charCodeAt(0)\n } while (charCode > 47 && charCode < 58)\n\n if (char != lunr.QueryLexer.EOS) {\n this.backup()\n }\n}\n\nlunr.QueryLexer.prototype.more = function () {\n return this.pos < this.length\n}\n\nlunr.QueryLexer.EOS = 'EOS'\nlunr.QueryLexer.FIELD = 'FIELD'\nlunr.QueryLexer.TERM = 'TERM'\nlunr.QueryLexer.EDIT_DISTANCE = 'EDIT_DISTANCE'\nlunr.QueryLexer.BOOST = 'BOOST'\nlunr.QueryLexer.PRESENCE = 'PRESENCE'\n\nlunr.QueryLexer.lexField = function (lexer) {\n lexer.backup()\n lexer.emit(lunr.QueryLexer.FIELD)\n lexer.ignore()\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexTerm = 
function (lexer) {\n if (lexer.width() > 1) {\n lexer.backup()\n lexer.emit(lunr.QueryLexer.TERM)\n }\n\n lexer.ignore()\n\n if (lexer.more()) {\n return lunr.QueryLexer.lexText\n }\n}\n\nlunr.QueryLexer.lexEditDistance = function (lexer) {\n lexer.ignore()\n lexer.acceptDigitRun()\n lexer.emit(lunr.QueryLexer.EDIT_DISTANCE)\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexBoost = function (lexer) {\n lexer.ignore()\n lexer.acceptDigitRun()\n lexer.emit(lunr.QueryLexer.BOOST)\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexEOS = function (lexer) {\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n}\n\n// This matches the separator used when tokenising fields\n// within a document. These should match otherwise it is\n// not possible to search for some tokens within a document.\n//\n// It is possible for the user to change the separator on the\n// tokenizer so it _might_ clash with any other of the special\n// characters already used within the search string, e.g. :.\n//\n// This means that it is possible to change the separator in\n// such a way that makes some words unsearchable using a search\n// string.\nlunr.QueryLexer.termSeparator = lunr.tokenizer.separator\n\nlunr.QueryLexer.lexText = function (lexer) {\n while (true) {\n var char = lexer.next()\n\n if (char == lunr.QueryLexer.EOS) {\n return lunr.QueryLexer.lexEOS\n }\n\n // Escape character is '\\'\n if (char.charCodeAt(0) == 92) {\n lexer.escapeCharacter()\n continue\n }\n\n if (char == \":\") {\n return lunr.QueryLexer.lexField\n }\n\n if (char == \"~\") {\n lexer.backup()\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n return lunr.QueryLexer.lexEditDistance\n }\n\n if (char == \"^\") {\n lexer.backup()\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n return lunr.QueryLexer.lexBoost\n }\n\n // \"+\" indicates term presence is required\n // checking for length to ensure that only\n // leading \"+\" are considered\n if (char == \"+\" && lexer.width() === 1) {\n lexer.emit(lunr.QueryLexer.PRESENCE)\n return lunr.QueryLexer.lexText\n }\n\n // \"-\" indicates term presence is prohibited\n // checking for length to ensure that only\n // leading \"-\" are considered\n if (char == \"-\" && lexer.width() === 1) {\n lexer.emit(lunr.QueryLexer.PRESENCE)\n return lunr.QueryLexer.lexText\n }\n\n if (char.match(lunr.QueryLexer.termSeparator)) {\n return lunr.QueryLexer.lexTerm\n }\n }\n}\n\nlunr.QueryParser = function (str, query) {\n this.lexer = new lunr.QueryLexer (str)\n this.query = query\n this.currentClause = {}\n this.lexemeIdx = 0\n}\n\nlunr.QueryParser.prototype.parse = function () {\n this.lexer.run()\n this.lexemes = this.lexer.lexemes\n\n var state = lunr.QueryParser.parseClause\n\n while (state) {\n state = state(this)\n }\n\n return this.query\n}\n\nlunr.QueryParser.prototype.peekLexeme = function () {\n return this.lexemes[this.lexemeIdx]\n}\n\nlunr.QueryParser.prototype.consumeLexeme = function () {\n var lexeme = this.peekLexeme()\n this.lexemeIdx += 1\n return lexeme\n}\n\nlunr.QueryParser.prototype.nextClause = function () {\n var completedClause = this.currentClause\n this.query.clause(completedClause)\n this.currentClause = {}\n}\n\nlunr.QueryParser.parseClause = function (parser) {\n var lexeme = parser.peekLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n switch (lexeme.type) {\n case lunr.QueryLexer.PRESENCE:\n return lunr.QueryParser.parsePresence\n case lunr.QueryLexer.FIELD:\n return lunr.QueryParser.parseField\n case 
lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expected either a field or a term, found \" + lexeme.type\n\n if (lexeme.str.length >= 1) {\n errorMessage += \" with value '\" + lexeme.str + \"'\"\n }\n\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n}\n\nlunr.QueryParser.parsePresence = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n switch (lexeme.str) {\n case \"-\":\n parser.currentClause.presence = lunr.Query.presence.PROHIBITED\n break\n case \"+\":\n parser.currentClause.presence = lunr.Query.presence.REQUIRED\n break\n default:\n var errorMessage = \"unrecognised presence operator'\" + lexeme.str + \"'\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n var errorMessage = \"expecting term or field, found nothing\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.FIELD:\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expecting term or field, found '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseField = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n if (parser.query.allFields.indexOf(lexeme.str) == -1) {\n var possibleFields = parser.query.allFields.map(function (f) { return \"'\" + f + \"'\" }).join(', '),\n errorMessage = \"unrecognised field '\" + lexeme.str + \"', possible fields: \" + possibleFields\n\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n parser.currentClause.fields = [lexeme.str]\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n var errorMessage = \"expecting term, found nothing\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expecting term, found '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseTerm = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n parser.currentClause.term = lexeme.str.toLowerCase()\n\n if (lexeme.str.indexOf(\"*\") != -1) {\n parser.currentClause.usePipeline = false\n }\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseEditDistance = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n 
return\n }\n\n var editDistance = parseInt(lexeme.str, 10)\n\n if (isNaN(editDistance)) {\n var errorMessage = \"edit distance must be numeric\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n parser.currentClause.editDistance = editDistance\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseBoost = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n var boost = parseInt(lexeme.str, 10)\n\n if (isNaN(boost)) {\n var errorMessage = \"boost must be numeric\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n parser.currentClause.boost = boost\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\n /**\n * export the module via AMD, CommonJS or as a browser global\n * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js\n */\n ;(function (root, factory) {\n if (typeof define === 'function' && define.amd) {\n // AMD. Register as an anonymous module.\n define(factory)\n } else if (typeof exports === 'object') {\n /**\n * Node. Does not work with strict CommonJS, but\n * only CommonJS-like enviroments that support module.exports,\n * like Node.\n */\n module.exports = factory()\n } else {\n // Browser globals (root is window)\n root.lunr = factory()\n }\n }(this, function () {\n /**\n * Just return a value to define the module export.\n * This example returns an object, but the module\n * can return a function as the exported value.\n */\n return lunr\n }))\n})();\n","/*! *****************************************************************************\r\nCopyright (c) Microsoft Corporation. All rights reserved.\r\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use\r\nthis file except in compliance with the License. 
You may obtain a copy of the\r\nLicense at http://www.apache.org/licenses/LICENSE-2.0\r\n\r\nTHIS CODE IS PROVIDED ON AN *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\r\nKIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED\r\nWARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,\r\nMERCHANTABLITY OR NON-INFRINGEMENT.\r\n\r\nSee the Apache Version 2.0 License for specific language governing permissions\r\nand limitations under the License.\r\n***************************************************************************** */\r\n/* global Reflect, Promise */\r\n\r\nvar extendStatics = function(d, b) {\r\n extendStatics = Object.setPrototypeOf ||\r\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\r\n function (d, b) { for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p]; };\r\n return extendStatics(d, b);\r\n};\r\n\r\nexport function __extends(d, b) {\r\n extendStatics(d, b);\r\n function __() { this.constructor = d; }\r\n d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());\r\n}\r\n\r\nexport var __assign = function() {\r\n __assign = Object.assign || function __assign(t) {\r\n for (var s, i = 1, n = arguments.length; i < n; i++) {\r\n s = arguments[i];\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\r\n }\r\n return t;\r\n }\r\n return __assign.apply(this, arguments);\r\n}\r\n\r\nexport function __rest(s, e) {\r\n var t = {};\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\r\n t[p] = s[p];\r\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\r\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\r\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\r\n t[p[i]] = s[p[i]];\r\n }\r\n return t;\r\n}\r\n\r\nexport function __decorate(decorators, target, key, desc) {\r\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\r\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\r\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\r\n return c > 3 && r && Object.defineProperty(target, key, r), r;\r\n}\r\n\r\nexport function __param(paramIndex, decorator) {\r\n return function (target, key) { decorator(target, key, paramIndex); }\r\n}\r\n\r\nexport function __metadata(metadataKey, metadataValue) {\r\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\r\n}\r\n\r\nexport function __awaiter(thisArg, _arguments, P, generator) {\r\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\r\n return new (P || (P = Promise))(function (resolve, reject) {\r\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\r\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\r\n function step(result) { result.done ? 
resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\r\n step((generator = generator.apply(thisArg, _arguments || [])).next());\r\n });\r\n}\r\n\r\nexport function __generator(thisArg, body) {\r\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;\r\n return g = { next: verb(0), \"throw\": verb(1), \"return\": verb(2) }, typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\r\n function verb(n) { return function (v) { return step([n, v]); }; }\r\n function step(op) {\r\n if (f) throw new TypeError(\"Generator is already executing.\");\r\n while (_) try {\r\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\r\n if (y = 0, t) op = [op[0] & 2, t.value];\r\n switch (op[0]) {\r\n case 0: case 1: t = op; break;\r\n case 4: _.label++; return { value: op[1], done: false };\r\n case 5: _.label++; y = op[1]; op = [0]; continue;\r\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\r\n default:\r\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\r\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\r\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\r\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\r\n if (t[2]) _.ops.pop();\r\n _.trys.pop(); continue;\r\n }\r\n op = body.call(thisArg, _);\r\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\r\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\r\n }\r\n}\r\n\r\nexport function __exportStar(m, exports) {\r\n for (var p in m) if (!exports.hasOwnProperty(p)) exports[p] = m[p];\r\n}\r\n\r\nexport function __values(o) {\r\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\r\n if (m) return m.call(o);\r\n if (o && typeof o.length === \"number\") return {\r\n next: function () {\r\n if (o && i >= o.length) o = void 0;\r\n return { value: o && o[i++], done: !o };\r\n }\r\n };\r\n throw new TypeError(s ? \"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\r\n}\r\n\r\nexport function __read(o, n) {\r\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\r\n if (!m) return o;\r\n var i = m.call(o), r, ar = [], e;\r\n try {\r\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\r\n }\r\n catch (error) { e = { error: error }; }\r\n finally {\r\n try {\r\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\r\n }\r\n finally { if (e) throw e.error; }\r\n }\r\n return ar;\r\n}\r\n\r\nexport function __spread() {\r\n for (var ar = [], i = 0; i < arguments.length; i++)\r\n ar = ar.concat(__read(arguments[i]));\r\n return ar;\r\n}\r\n\r\nexport function __spreadArrays() {\r\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\r\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\r\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\r\n r[k] = a[j];\r\n return r;\r\n};\r\n\r\nexport function __await(v) {\r\n return this instanceof __await ? 
(this.v = v, this) : new __await(v);\r\n}\r\n\r\nexport function __asyncGenerator(thisArg, _arguments, generator) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\r\n return i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i;\r\n function verb(n) { if (g[n]) i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; }\r\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\r\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\r\n function fulfill(value) { resume(\"next\", value); }\r\n function reject(value) { resume(\"throw\", value); }\r\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\r\n}\r\n\r\nexport function __asyncDelegator(o) {\r\n var i, p;\r\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\r\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: n === \"return\" } : f ? f(v) : v; } : f; }\r\n}\r\n\r\nexport function __asyncValues(o) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var m = o[Symbol.asyncIterator], i;\r\n return m ? m.call(o) : (o = typeof __values === \"function\" ? __values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\r\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\r\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\r\n}\r\n\r\nexport function __makeTemplateObject(cooked, raw) {\r\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\r\n return cooked;\r\n};\r\n\r\nexport function __importStar(mod) {\r\n if (mod && mod.__esModule) return mod;\r\n var result = {};\r\n if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k];\r\n result.default = mod;\r\n return result;\r\n}\r\n\r\nexport function __importDefault(mod) {\r\n return (mod && mod.__esModule) ? 
mod : { default: mod };\r\n}\r\n\r\nexport function __classPrivateFieldGet(receiver, privateMap) {\r\n if (!privateMap.has(receiver)) {\r\n throw new TypeError(\"attempted to get private field on non-instance\");\r\n }\r\n return privateMap.get(receiver);\r\n}\r\n\r\nexport function __classPrivateFieldSet(receiver, privateMap, value) {\r\n if (!privateMap.has(receiver)) {\r\n throw new TypeError(\"attempted to set private field on non-instance\");\r\n }\r\n privateMap.set(receiver, value);\r\n return value;\r\n}\r\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n ArticleDocument,\n SearchDocumentMap,\n SectionDocument,\n setupSearchDocumentMap\n} from \"../document\"\nimport {\n SearchHighlightFactoryFn,\n setupSearchHighlighter\n} from \"../highlighter\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search index configuration\n */\nexport interface SearchIndexConfig {\n lang: string[] /* Search languages */\n separator: string /* Search separator */\n}\n\n/**\n * Search index document\n */\nexport interface SearchIndexDocument {\n location: string /* Document location */\n title: string /* Document title */\n text: string /* Document text */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search index pipeline function\n */\nexport type SearchIndexPipelineFn =\n | \"stemmer\" /* Stemmer */\n | \"stopWordFilter\" /* Stop word filter */\n | \"trimmer\" /* Trimmer */\n\n/**\n * Search index pipeline\n */\nexport type SearchIndexPipeline = SearchIndexPipelineFn[]\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search index\n *\n * This interfaces describes the format of the `search_index.json` file which\n * is automatically built by the MkDocs search plugin.\n */\nexport interface SearchIndex {\n config: SearchIndexConfig /* Search index configuration */\n docs: SearchIndexDocument[] /* Search index documents */\n index?: object | string /* Prebuilt or serialized index */\n pipeline?: SearchIndexPipeline /* Search index pipeline */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search result\n */\nexport interface SearchResult {\n article: ArticleDocument /* Article document */\n sections: SectionDocument[] /* Section 
documents */\n}\n\n/* ----------------------------------------------------------------------------\n * Class\n * ------------------------------------------------------------------------- */\n\n/**\n * Search\n *\n * Note that `lunr` is injected via Webpack, as it will otherwise also be\n * bundled in the application bundle.\n */\nexport class Search {\n\n /**\n * Search document mapping\n *\n * A mapping of URLs (including hash fragments) to the actual articles and\n * sections of the documentation. The search document mapping must be created\n * regardless of whether the index was prebuilt or not, as `lunr` itself will\n * only store the actual index.\n */\n protected documents: SearchDocumentMap\n\n /**\n * Search highlight factory function\n */\n protected highlight: SearchHighlightFactoryFn\n\n /**\n * The `lunr` search index\n */\n protected index: lunr.Index\n\n /**\n * Create the search integration\n *\n * @param data - Search index\n */\n public constructor({ config, docs, pipeline, index }: SearchIndex) {\n this.documents = setupSearchDocumentMap(docs)\n this.highlight = setupSearchHighlighter(config)\n\n /* If no index was given, create it */\n if (typeof index === \"undefined\") {\n this.index = lunr(function() {\n pipeline = pipeline || [\"trimmer\", \"stopWordFilter\"]\n\n /* Set up pipeline according to configuration */\n this.pipeline.reset()\n for (const fn of pipeline)\n this.pipeline.add(lunr[fn])\n\n /* Set up alternate search languages */\n if (config.lang.length === 1 && config.lang[0] !== \"en\") {\n this.use((lunr as any)[config.lang[0]])\n } else if (config.lang.length > 1) {\n this.use((lunr as any).multiLanguage(...config.lang))\n }\n\n /* Set up fields and reference */\n this.field(\"title\", { boost: 1000 })\n this.field(\"text\")\n this.ref(\"location\")\n\n /* Index documents */\n for (const doc of docs)\n this.add(doc)\n })\n\n /* Prebuilt or serialized index */\n } else {\n this.index = lunr.Index.load(\n typeof index === \"string\"\n ? JSON.parse(index)\n : index\n )\n }\n }\n\n /**\n * Search for matching documents\n *\n * The search index which MkDocs provides is divided up into articles, which\n * contain the whole content of the individual pages, and sections, which only\n * contain the contents of the subsections obtained by breaking the individual\n * pages up at `h1` ... `h6`. As there may be many sections on different pages\n * with identical titles (for example within this very project, e.g. \"Usage\"\n * or \"Installation\"), they need to be put into the context of the containing\n * page. 
For this reason, section results are grouped within their respective\n * articles which are the top-level results that are returned.\n *\n * @param value - Query value\n *\n * @return Search results\n */\n public query(value: string): SearchResult[] {\n if (value) {\n try {\n\n /* Group sections by containing article */\n const groups = this.index.search(value)\n .reduce((results, result) => {\n const document = this.documents.get(result.ref)\n if (typeof document !== \"undefined\") {\n if (\"parent\" in document) {\n const ref = document.parent.location\n results.set(ref, [...results.get(ref) || [], result])\n } else {\n const ref = document.location\n results.set(ref, results.get(ref) || [])\n }\n }\n return results\n }, new Map())\n\n /* Create highlighter for query */\n const fn = this.highlight(value)\n\n /* Map groups to search documents */\n return [...groups].map(([ref, sections]) => ({\n article: fn(this.documents.get(ref) as ArticleDocument),\n sections: sections.map(section => {\n return fn(this.documents.get(section.ref) as SectionDocument)\n })\n }))\n\n /* Log errors to console (for now) */\n } catch (err) {\n // tslint:disable-next-line no-console\n console.warn(`Invalid query: ${value} – see https://bit.ly/2s3ChXG`)\n }\n }\n\n /* Return nothing in case of error or empty query */\n return []\n }\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A RTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { SearchIndex, SearchResult } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search message type\n */\nexport const enum SearchMessageType {\n SETUP, /* Search index setup */\n READY, /* Search index ready */\n QUERY, /* Search query */\n RESULT /* Search results */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * A message containing the data necessary to setup the search index\n */\nexport interface SearchSetupMessage {\n type: SearchMessageType.SETUP /* Message type */\n data: SearchIndex /* Message data */\n}\n\n/**\n * A message indicating the search index is ready\n */\nexport interface SearchReadyMessage {\n type: SearchMessageType.READY /* Message type */\n}\n\n/**\n * A message containing a search query\n */\nexport interface SearchQueryMessage {\n type: SearchMessageType.QUERY /* Message type */\n data: string /* Message data */\n}\n\n/**\n * A message containing results for a search query\n */\nexport interface SearchResultMessage {\n type: SearchMessageType.RESULT /* Message type */\n data: SearchResult[] /* Message data */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * A message exchanged with the search worker\n */\nexport type SearchMessage =\n | SearchSetupMessage\n | SearchReadyMessage\n | SearchQueryMessage\n | SearchResultMessage\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Type guard for search setup messages\n *\n * @param message - Search worker message\n *\n * @return Test result\n */\nexport function isSearchSetupMessage(\n message: SearchMessage\n): message is SearchSetupMessage {\n return message.type === SearchMessageType.SETUP\n}\n\n/**\n * Type guard for search ready messages\n *\n * @param message - Search worker message\n *\n * @return Test result\n */\nexport function isSearchReadyMessage(\n message: SearchMessage\n): message is SearchReadyMessage {\n return message.type === SearchMessageType.READY\n}\n\n/**\n * Type guard for search query messages\n *\n * @param message - Search worker message\n *\n * @return Test result\n */\nexport function isSearchQueryMessage(\n message: SearchMessage\n): message is SearchQueryMessage {\n return message.type === SearchMessageType.QUERY\n}\n\n/**\n * Type guard for search result messages\n *\n * @param message - Search worker message\n *\n * @return Test result\n */\nexport function isSearchResultMessage(\n message: SearchMessage\n): message is SearchResultMessage {\n return message.type === SearchMessageType.RESULT\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom 
the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A RTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"expose-loader?lunr!lunr\"\n\nimport { Search, SearchIndexConfig } from \"../../_\"\nimport { SearchMessage, SearchMessageType } from \"../message\"\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Search\n */\nlet search: Search\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up multi-language support through `lunr-languages`\n *\n * This function will automatically import the stemmers necessary to process\n * the languages which were given through the search index configuration.\n *\n * @param config - Search index configuration\n */\nfunction setupLunrLanguages(config: SearchIndexConfig): void {\n const base = \"../lunr\"\n\n /* Add scripts for languages */\n const scripts = []\n for (const lang of config.lang) {\n if (lang === \"ja\") scripts.push(`${base}/tinyseg.min.js`)\n if (lang !== \"en\") scripts.push(`${base}/min/lunr.${lang}.min.js`)\n }\n\n /* Add multi-language support */\n if (config.lang.length > 1)\n scripts.push(`${base}/min/lunr.multi.min.js`)\n\n /* Load scripts synchronously */\n if (scripts.length)\n importScripts(\n `${base}/min/lunr.stemmer.support.min.js`,\n ...scripts\n )\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Message handler\n *\n * @param message - Source message\n *\n * @return Target message\n */\nexport function handler(message: SearchMessage): SearchMessage {\n switch (message.type) {\n\n /* Search setup message */\n case SearchMessageType.SETUP:\n setupLunrLanguages(message.data.config)\n search = new Search(message.data)\n return {\n type: SearchMessageType.READY\n }\n\n /* Search query message */\n case SearchMessageType.QUERY:\n return {\n type: SearchMessageType.RESULT,\n data: search ? 
search.query(message.data) : []\n }\n\n /* All other messages */\n default:\n throw new TypeError(\"Invalid message type\")\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Worker\n * ------------------------------------------------------------------------- */\n\naddEventListener(\"message\", ev => {\n postMessage(handler(ev.data))\n})\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport * as escapeHTML from \"escape-html\"\n\nimport { SearchIndexDocument } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * A top-level article\n */\nexport interface ArticleDocument extends SearchIndexDocument {\n linked: boolean /* Whether the section was linked */\n}\n\n/**\n * A section of an article\n */\nexport interface SectionDocument extends SearchIndexDocument {\n parent: ArticleDocument /* Parent article */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search document\n */\nexport type SearchDocument =\n | ArticleDocument\n | SectionDocument\n\n/**\n * Search document mapping\n */\nexport type SearchDocumentMap = Map\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search document mapping\n *\n * @param docs - Search index documents\n *\n * @return Search document map\n */\nexport function setupSearchDocumentMap(\n docs: SearchIndexDocument[]\n): SearchDocumentMap {\n const documents = new Map()\n for (const doc of docs) {\n const [path, hash] = doc.location.split(\"#\")\n\n /* Extract location and title */\n const location = doc.location\n const title = doc.title\n\n /* Escape and cleanup text */\n const text = escapeHTML(doc.text)\n .replace(/\\s+(?=[,.:;!?])/g, \"\")\n .replace(/\\s+/g, \" \")\n\n /* Handle section */\n if (hash) {\n const parent = documents.get(path) as ArticleDocument\n\n /* Ignore first section, override article */\n if (!parent.linked) {\n parent.title = doc.title\n parent.text = text\n parent.linked = true\n\n /* Add subsequent section */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n parent\n })\n }\n\n /* Add article */\n } else {\n documents.set(location, {\n location,\n 
title,\n text,\n linked: false\n })\n }\n }\n return documents\n}\n","/*\n * Copyright (c) 2016-2020 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { SearchIndexConfig } from \"../_\"\nimport { SearchDocument } from \"../document\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search highlight function\n *\n * @template T - Search document type\n *\n * @param document - Search document\n *\n * @return Highlighted document\n */\nexport type SearchHighlightFn = <\n T extends SearchDocument\n>(document: Readonly) => T\n\n/**\n * Search highlight factory function\n *\n * @param value - Query value\n *\n * @return Search highlight function\n */\nexport type SearchHighlightFactoryFn = (value: string) => SearchHighlightFn\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search highlighter\n *\n * @param config - Search index configuration\n *\n * @return Search highlight factory function\n */\nexport function setupSearchHighlighter(\n config: SearchIndexConfig\n): SearchHighlightFactoryFn {\n const separator = new RegExp(config.separator, \"img\")\n const highlight = (_: unknown, data: string, term: string) => {\n return `${data}${term}`\n }\n\n /* Return factory function */\n return (value: string) => {\n value = value\n .replace(/[\\s*+-:~^]+/g, \" \")\n .trim()\n\n /* Create search term match expression */\n const match = new RegExp(`(^|${config.separator})(${\n value\n .replace(/[|\\\\{}()[\\]^$+*?.-]/g, \"\\\\$&\")\n .replace(separator, \"|\")\n })`, \"img\")\n\n /* Highlight document */\n return document => ({\n ...document,\n title: document.title.replace(match, highlight),\n text: document.text.replace(match, highlight)\n })\n }\n}\n"],"sourceRoot":""} \ No newline at end of file diff --git a/assets/stylesheets/main.62d34fff.min.css b/assets/stylesheets/main.62d34fff.min.css new file mode 100644 index 00000000..ce22a091 --- /dev/null +++ b/assets/stylesheets/main.62d34fff.min.css @@ -0,0 +1,3 @@ 
+html{box-sizing:border-box}*,*::before,*::after{box-sizing:inherit}html{-webkit-text-size-adjust:none;-moz-text-size-adjust:none;-ms-text-size-adjust:none;text-size-adjust:none}body{margin:0}hr{box-sizing:content-box;overflow:visible}a,button,label,input{-webkit-tap-highlight-color:transparent}a{color:inherit;text-decoration:none}small{font-size:80%}sub,sup{position:relative;font-size:80%;line-height:0;vertical-align:baseline}sub{bottom:-0.25em}sup{top:-0.5em}img{border-style:none}table{border-collapse:separate;border-spacing:0}td,th{font-weight:normal;vertical-align:top}button{margin:0;padding:0;font-size:inherit;background:transparent;border:0}input{border:0;outline:0}:root{--md-default-fg-color: hsla(0, 0%, 0%, 0.87);--md-default-fg-color--light: hsla(0, 0%, 0%, 0.54);--md-default-fg-color--lighter: hsla(0, 0%, 0%, 0.26);--md-default-fg-color--lightest: hsla(0, 0%, 0%, 0.07);--md-default-bg-color: hsla(0, 0%, 100%, 1);--md-default-bg-color--light: hsla(0, 0%, 100%, 0.7);--md-default-bg-color--lighter: hsla(0, 0%, 100%, 0.3);--md-default-bg-color--lightest: hsla(0, 0%, 100%, 0.12);--md-primary-fg-color: hsla(231deg, 48%, 48%, 1);--md-primary-fg-color--light: hsla(230deg, 44%, 64%, 1);--md-primary-fg-color--dark: hsla(232deg, 54%, 41%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light);--md-accent-fg-color: hsla(231deg, 99%, 66%, 1);--md-accent-fg-color--transparent: hsla(231deg, 99%, 66%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light);--md-code-bg-color: hsla(0, 0%, 96%, 1);--md-code-fg-color: hsla(200, 18%, 26%, 1)}.md-icon svg{display:block;width:1.2rem;height:1.2rem;margin:0 auto;fill:currentColor}body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}body,input{color:var(--md-default-fg-color);font-feature-settings:"kern","liga";font-family:-apple-system,BlinkMacSystemFont,Helvetica,Arial,sans-serif}code,pre,kbd{color:var(--md-default-fg-color);font-feature-settings:"kern";font-family:SFMono-Regular,Consolas,Menlo,monospace}.md-typeset{font-size:.8rem;line-height:1.6;-webkit-print-color-adjust:exact;color-adjust:exact}.md-typeset p,.md-typeset ul,.md-typeset ol,.md-typeset blockquote{margin:1em 0}.md-typeset h1{margin:0 0 2rem;color:var(--md-default-fg-color--light);font-weight:300;font-size:1.5625rem;line-height:1.3;letter-spacing:-0.01em}.md-typeset h2{margin:2rem 0 .8rem;font-weight:300;font-size:1.25rem;line-height:1.4;letter-spacing:-0.01em}.md-typeset h3{margin:1.6rem 0 .8rem;font-weight:400;font-size:1rem;line-height:1.5;letter-spacing:-0.01em}.md-typeset h2+h3{margin-top:.8rem}.md-typeset h4{margin:.8rem 0;font-weight:700;font-size:.8rem;letter-spacing:-0.01em}.md-typeset h5,.md-typeset h6{margin:.8rem 0;color:var(--md-default-fg-color--light);font-weight:700;font-size:.64rem;letter-spacing:-0.01em}.md-typeset h5{text-transform:uppercase}.md-typeset hr{margin:1.5em 0;border-bottom:.05rem dotted var(--md-default-fg-color--lighter)}.md-typeset a{color:var(--md-primary-fg-color);word-break:break-word}.md-typeset a,.md-typeset a::before{transition:color 125ms}.md-typeset a:focus,.md-typeset a:hover{color:var(--md-accent-fg-color)}.md-typeset code,.md-typeset pre,.md-typeset kbd{color:var(--md-code-fg-color);direction:ltr}@media print{.md-typeset code,.md-typeset pre,.md-typeset kbd{white-space:pre-wrap}}.md-typeset code{padding:0 
.2941176471em;font-size:.85em;word-break:break-word;background-color:var(--md-code-bg-color);border-radius:.1rem;-webkit-box-decoration-break:clone;box-decoration-break:clone}.md-typeset h1 code,.md-typeset h2 code,.md-typeset h3 code,.md-typeset h4 code,.md-typeset h5 code,.md-typeset h6 code{margin:initial;padding:initial;background-color:transparent;box-shadow:none}.md-typeset a>code{color:currentColor}.md-typeset pre{position:relative;margin:1em 0;line-height:1.4}.md-typeset pre>code{display:block;margin:0;padding:.525rem 1.1764705882em;overflow:auto;word-break:normal;box-shadow:none;-webkit-box-decoration-break:slice;box-decoration-break:slice;touch-action:auto}.md-typeset pre>code::-webkit-scrollbar{width:.2rem;height:.2rem}.md-typeset pre>code::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-typeset pre>code::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}@media screen and (max-width: 44.9375em){.md-typeset>pre{margin:1em -0.8rem}.md-typeset>pre code{border-radius:0}}.md-typeset kbd{display:inline-block;padding:0 .6666666667em;font-size:.75em;line-height:1.5;vertical-align:text-top;word-break:break-word;border-radius:.1rem;box-shadow:0 .1rem 0 .05rem var(--md-default-fg-color--lighter),0 .1rem 0 var(--md-default-fg-color--lighter),inset 0 -0.1rem .2rem var(--md-default-bg-color)}.md-typeset mark{padding:0 .25em;word-break:break-word;background-color:rgba(255,235,59,.5);border-radius:.1rem;-webkit-box-decoration-break:clone;box-decoration-break:clone}.md-typeset abbr{text-decoration:none;border-bottom:.05rem dotted var(--md-default-fg-color--light);cursor:help}.md-typeset small{opacity:.75}.md-typeset sup,.md-typeset sub{margin-left:.078125em}[dir=rtl] .md-typeset sup,[dir=rtl] .md-typeset sub{margin-right:.078125em;margin-left:initial}.md-typeset blockquote{padding-left:.6rem;color:var(--md-default-fg-color--light);border-left:.2rem solid var(--md-default-fg-color--lighter)}[dir=rtl] .md-typeset blockquote{padding-right:.6rem;padding-left:initial;border-right:.2rem solid var(--md-default-fg-color--lighter);border-left:initial}.md-typeset ul{list-style-type:disc}.md-typeset ul,.md-typeset ol{margin-left:.625em;padding:0}[dir=rtl] .md-typeset ul,[dir=rtl] .md-typeset ol{margin-right:.625em;margin-left:initial}.md-typeset ul ol,.md-typeset ol ol{list-style-type:lower-alpha}.md-typeset ul ol ol,.md-typeset ol ol ol{list-style-type:lower-roman}.md-typeset ul li,.md-typeset ol li{margin-bottom:.5em;margin-left:1.25em}[dir=rtl] .md-typeset ul li,[dir=rtl] .md-typeset ol li{margin-right:1.25em;margin-left:initial}.md-typeset ul li p,.md-typeset ul li blockquote,.md-typeset ol li p,.md-typeset ol li blockquote{margin:.5em 0}.md-typeset ul li:last-child,.md-typeset ol li:last-child{margin-bottom:0}.md-typeset ul li ul,.md-typeset ul li ol,.md-typeset ol li ul,.md-typeset ol li ol{margin:.5em 0 .5em .625em}[dir=rtl] .md-typeset ul li ul,[dir=rtl] .md-typeset ul li ol,[dir=rtl] .md-typeset ol li ul,[dir=rtl] .md-typeset ol li ol{margin-right:.625em;margin-left:initial}.md-typeset dd{margin:1em 0 1em 1.875em}[dir=rtl] .md-typeset dd{margin-right:1.875em;margin-left:initial}.md-typeset iframe,.md-typeset img,.md-typeset svg{max-width:100%}.md-typeset table:not([class]){display:inline-block;max-width:100%;overflow:auto;font-size:.64rem;background:var(--md-default-bg-color);border-radius:.1rem;box-shadow:0 .2rem .5rem rgba(0,0,0,.05),0 0 .05rem rgba(0,0,0,.1);touch-action:auto}.md-typeset table:not([class])+*{margin-top:1.5em}.md-typeset 
table:not([class]) th:not([align]),.md-typeset table:not([class]) td:not([align]){text-align:left}[dir=rtl] .md-typeset table:not([class]) th:not([align]),[dir=rtl] .md-typeset table:not([class]) td:not([align]){text-align:right}.md-typeset table:not([class]) th{min-width:5rem;padding:.6rem .8rem;color:var(--md-default-bg-color);vertical-align:top;background-color:var(--md-default-fg-color--light)}.md-typeset table:not([class]) td{padding:.6rem .8rem;vertical-align:top;border-top:.05rem solid var(--md-default-fg-color--lightest)}.md-typeset table:not([class]) tr{transition:background-color 125ms}.md-typeset table:not([class]) tr:hover{background-color:rgba(0,0,0,.035);box-shadow:0 .05rem 0 var(--md-default-bg-color) inset}.md-typeset table:not([class]) tr:first-child td{border-top:0}.md-typeset table:not([class]) a{word-break:normal}.md-typeset__scrollwrap{margin:1em -0.8rem;overflow-x:auto;touch-action:auto}.md-typeset__table{display:inline-block;margin-bottom:.5em;padding:0 .8rem}.md-typeset__table table{display:table;width:100%;margin:0;overflow:hidden}html{height:100%;overflow-x:hidden;font-size:125%;background-color:var(--md-default-bg-color)}@media screen and (min-width: 100em){html{font-size:137.5%}}@media screen and (min-width: 125em){html{font-size:150%}}body{position:relative;display:flex;flex-direction:column;width:100%;min-height:100%;font-size:.5rem}@media screen and (max-width: 59.9375em){body[data-md-state=lock]{position:fixed}}@media print{body{display:block}}hr{display:block;height:.05rem;padding:0;border:0}.md-grid{max-width:61rem;margin-right:auto;margin-left:auto}.md-container{display:flex;flex-direction:column;flex-grow:1}@media print{.md-container{display:block}}.md-main{flex-grow:1}.md-main__inner{display:flex;height:100%;margin-top:1.5rem}.md-ellipsis{display:block;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.md-toggle{display:none}.md-overlay{position:fixed;top:0;z-index:3;width:0;height:0;background-color:var(--md-default-fg-color--light);opacity:0;transition:width 0ms 250ms,height 0ms 250ms,opacity 250ms}@media screen and (max-width: 76.1875em){[data-md-toggle=drawer]:checked~.md-overlay{width:100%;height:100%;opacity:1;transition:width 0ms,height 0ms,opacity 250ms}}.md-skip{position:fixed;z-index:-1;margin:.5rem;padding:.3rem .5rem;color:var(--md-default-bg-color);font-size:.64rem;background-color:var(--md-default-fg-color);border-radius:.1rem;transform:translateY(0.4rem);opacity:0}.md-skip:focus{z-index:10;transform:translateY(0);opacity:1;transition:transform 250ms cubic-bezier(0.4, 0, 0.2, 1),opacity 175ms 75ms}@page{margin:25mm}.md-announce{overflow:auto;background-color:var(--md-default-fg-color)}.md-announce__inner{margin:.6rem auto;padding:0 .8rem;color:var(--md-default-bg-color);font-size:.7rem}@media print{.md-announce{display:none}}.md-typeset .md-button{display:inline-block;padding:.625em 2em;color:var(--md-primary-fg-color);font-weight:700;border:.1rem solid currentColor;border-radius:.1rem;transition:color 125ms,background-color 125ms,border-color 125ms}.md-typeset .md-button--primary{color:var(--md-primary-bg-color);background-color:var(--md-primary-fg-color);border-color:var(--md-primary-fg-color)}.md-typeset .md-button:focus,.md-typeset 
.md-button:hover{color:var(--md-accent-bg-color);background-color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color)}.md-clipboard{position:absolute;top:.4rem;right:.5em;z-index:1;width:1.5em;height:1.5em;color:var(--md-default-fg-color--lightest);border-radius:.1rem;cursor:pointer;transition:color 125ms}@media print{.md-clipboard{display:none}}.md-clipboard svg{width:1.125em;height:1.125em}pre:hover .md-clipboard{color:var(--md-default-fg-color--light)}pre .md-clipboard:focus,pre .md-clipboard:hover{color:var(--md-accent-fg-color)}.md-content{flex:1;max-width:100%}@media screen and (min-width: 60em)and (max-width: 76.1875em){.md-content{max-width:calc(100% - 12.1rem)}}@media screen and (min-width: 76.25em){.md-content{max-width:calc(100% - 12.1rem * 2)}}.md-content__inner{margin:0 .8rem 1.2rem;padding-top:.6rem}@media screen and (min-width: 76.25em){.md-content__inner{margin-right:1.2rem;margin-left:1.2rem}}.md-content__inner::before{display:block;height:.4rem;content:""}.md-content__inner>:last-child{margin-bottom:0}.md-content__button{float:right;margin:.4rem 0;margin-left:.4rem;padding:0}[dir=rtl] .md-content__button{float:left;margin-right:.4rem;margin-left:initial}[dir=rtl] .md-content__button svg{transform:scaleX(-1)}.md-typeset .md-content__button{color:var(--md-default-fg-color--lighter)}.md-content__button svg{display:inline;vertical-align:top}@media print{.md-content__button{display:none}}.md-dialog{box-shadow:0 2px 2px 0 rgba(0,0,0,.14),0 1px 5px 0 rgba(0,0,0,.12),0 3px 1px -2px rgba(0,0,0,.2);position:fixed;right:.8rem;bottom:.8rem;left:initial;z-index:2;display:block;min-width:11.1rem;padding:.4rem .6rem;color:var(--md-default-bg-color);font-size:.7rem;background:var(--md-default-fg-color);border:none;border-radius:.1rem;transform:translateY(100%);opacity:0;transition:transform 0ms 400ms,opacity 400ms}[dir=rtl] .md-dialog{right:initial;left:.8rem}.md-dialog[data-md-state=open]{transform:translateY(0);opacity:1;transition:transform 400ms cubic-bezier(0.075, 0.85, 0.175, 1),opacity 400ms}@media print{.md-dialog{display:none}}.md-header{position:-webkit-sticky;position:sticky;top:0;right:0;left:0;z-index:2;height:2.4rem;color:var(--md-primary-bg-color);background-color:var(--md-primary-fg-color);box-shadow:0 0 .2rem rgba(0,0,0,0),0 .2rem .4rem rgba(0,0,0,0);transition:color 250ms,background-color 250ms}.no-js .md-header{box-shadow:none;transition:none}.md-header[data-md-state=shadow]{box-shadow:0 0 .2rem rgba(0,0,0,.1),0 .2rem .4rem rgba(0,0,0,.2);transition:color 250ms,background-color 250ms,box-shadow 250ms}@media print{.md-header{display:none}}.md-header-nav{display:flex;padding:0 .2rem}.md-header-nav__button{position:relative;z-index:1;margin:.2rem;padding:.4rem;cursor:pointer;transition:opacity 250ms}[dir=rtl] .md-header-nav__button svg{transform:scaleX(-1)}.md-header-nav__button:focus,.md-header-nav__button:hover{opacity:.7}.md-header-nav__button.md-logo{margin:.2rem;padding:.4rem}.md-header-nav__button.md-logo img,.md-header-nav__button.md-logo svg{display:block;width:1.2rem;height:1.2rem;fill:currentColor}.no-js .md-header-nav__button[for=__search]{display:none}@media screen and (min-width: 60em){.md-header-nav__button[for=__search]{display:none}}@media screen and (max-width: 76.1875em){.md-header-nav__button.md-logo{display:none}}@media screen and (min-width: 76.25em){.md-header-nav__button[for=__drawer]{display:none}}.md-header-nav__topic{position:absolute;width:100%;transition:transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 
150ms}.md-header-nav__topic+.md-header-nav__topic{z-index:-1;transform:translateX(1.25rem);opacity:0;transition:transform 400ms cubic-bezier(1, 0.7, 0.1, 0.1),opacity 150ms;pointer-events:none}[dir=rtl] .md-header-nav__topic+.md-header-nav__topic{transform:translateX(-1.25rem)}.no-js .md-header-nav__topic{position:initial}.no-js .md-header-nav__topic+.md-header-nav__topic{display:none}.md-header-nav__title{flex-grow:1;padding:0 1rem;font-size:.9rem;line-height:2.4rem}.md-header-nav__title[data-md-state=active] .md-header-nav__topic{z-index:-1;transform:translateX(-1.25rem);opacity:0;transition:transform 400ms cubic-bezier(1, 0.7, 0.1, 0.1),opacity 150ms;pointer-events:none}[dir=rtl] .md-header-nav__title[data-md-state=active] .md-header-nav__topic{transform:translateX(1.25rem)}.md-header-nav__title[data-md-state=active] .md-header-nav__topic+.md-header-nav__topic{z-index:0;transform:translateX(0);opacity:1;transition:transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 150ms;pointer-events:initial}.md-header-nav__title>.md-header-nav__ellipsis{position:relative;width:100%;height:100%}.md-header-nav__source{display:none}@media screen and (min-width: 60em){.md-header-nav__source{display:block;width:11.7rem;max-width:11.7rem;margin-left:1rem}[dir=rtl] .md-header-nav__source{margin-right:1rem;margin-left:initial}}@media screen and (min-width: 76.25em){.md-header-nav__source{margin-left:1.4rem}[dir=rtl] .md-header-nav__source{margin-right:1.4rem}}.md-hero{overflow:hidden;color:var(--md-primary-bg-color);font-size:1rem;background-color:var(--md-primary-fg-color);transition:background 250ms}.md-hero__inner{margin-top:1rem;padding:.8rem .8rem .4rem;transition:transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 250ms;transition-delay:100ms}@media screen and (max-width: 76.1875em){.md-hero__inner{margin-top:2.4rem;margin-bottom:1.2rem}}[data-md-state=hidden] .md-hero__inner{transform:translateY(0.625rem);opacity:0;transition:transform 0ms 400ms,opacity 100ms 0ms;pointer-events:none}.md-hero--expand .md-hero__inner{margin-bottom:1.2rem}.md-footer{color:var(--md-default-bg-color);background-color:var(--md-default-fg-color)}@media print{.md-footer{display:none}}.md-footer-nav__inner{padding:.2rem;overflow:auto}.md-footer-nav__link{display:flex;padding-top:1.4rem;padding-bottom:.4rem;transition:opacity 250ms}@media screen and (min-width: 45em){.md-footer-nav__link{width:50%}}.md-footer-nav__link:focus,.md-footer-nav__link:hover{opacity:.7}.md-footer-nav__link--prev{float:left;width:25%}[dir=rtl] .md-footer-nav__link--prev{float:right}[dir=rtl] .md-footer-nav__link--prev svg{transform:scaleX(-1)}@media screen and (max-width: 44.9375em){.md-footer-nav__link--prev .md-footer-nav__title{display:none}}.md-footer-nav__link--next{float:right;width:75%;text-align:right}[dir=rtl] .md-footer-nav__link--next{float:left;text-align:left}[dir=rtl] .md-footer-nav__link--next svg{transform:scaleX(-1)}.md-footer-nav__title{position:relative;flex-grow:1;max-width:calc(100% - 2.4rem);padding:0 1rem;font-size:.9rem;line-height:2.4rem}.md-footer-nav__button{margin:.2rem;padding:.4rem}.md-footer-nav__direction{position:absolute;right:0;left:0;margin-top:-1rem;padding:0 1rem;color:var(--md-default-bg-color--light);font-size:.64rem}.md-footer-meta{background-color:var(--md-default-fg-color--lighter)}.md-footer-meta__inner{display:flex;flex-wrap:wrap;justify-content:space-between;padding:.2rem}html .md-footer-meta.md-typeset a{color:var(--md-default-bg-color--light)}html .md-footer-meta.md-typeset a:focus,html 
.md-footer-meta.md-typeset a:hover{color:var(--md-default-bg-color)}.md-footer-copyright{width:100%;margin:auto .6rem;padding:.4rem 0;color:var(--md-default-bg-color--lighter);font-size:.64rem}@media screen and (min-width: 45em){.md-footer-copyright{width:auto}}.md-footer-copyright__highlight{color:var(--md-default-bg-color--light)}.md-footer-social{margin:0 .4rem;padding:.2rem 0 .6rem}@media screen and (min-width: 45em){.md-footer-social{padding:.6rem 0}}.md-footer-social__link{display:inline-block;width:1.6rem;height:1.6rem;text-align:center}.md-footer-social__link::before{line-height:1.9}.md-footer-social__link svg{max-height:.8rem;vertical-align:-25%;fill:currentColor}.md-nav{font-size:.7rem;line-height:1.3}.md-nav__title{display:block;padding:0 .6rem;overflow:hidden;font-weight:700;text-overflow:ellipsis}.md-nav__title .md-nav__button{display:none}.md-nav__title .md-nav__button img{width:100%;height:auto}.md-nav__title .md-nav__button.md-logo img,.md-nav__title .md-nav__button.md-logo svg{display:block;width:2.4rem;height:2.4rem}.md-nav__title .md-nav__button.md-logo svg{fill:currentColor}.md-nav__list{margin:0;padding:0;list-style:none}.md-nav__item{padding:0 .6rem}.md-nav__item:last-child{padding-bottom:.6rem}.md-nav__item .md-nav__item{padding-right:0}[dir=rtl] .md-nav__item .md-nav__item{padding-right:.6rem;padding-left:0}.md-nav__item .md-nav__item:last-child{padding-bottom:0}.md-nav__link{display:block;margin-top:.625em;overflow:hidden;text-overflow:ellipsis;cursor:pointer;transition:color 125ms;scroll-snap-align:start}html .md-nav__link[for=__toc]{display:none}html .md-nav__link[for=__toc]~.md-nav{display:none}.md-nav__link[data-md-state=blur]{color:var(--md-default-fg-color--light)}.md-nav__item .md-nav__link--active{color:var(--md-primary-fg-color)}.md-nav__item--nested>.md-nav__link{color:inherit}.md-nav__link:focus,.md-nav__link:hover{color:var(--md-accent-fg-color)}.md-nav__source{display:none}@media screen and (max-width: 76.1875em){.md-nav{background-color:var(--md-default-bg-color)}.md-nav--primary,.md-nav--primary .md-nav{position:absolute;top:0;right:0;left:0;z-index:1;display:flex;flex-direction:column;height:100%}.md-nav--primary .md-nav__title,.md-nav--primary .md-nav__item{font-size:.8rem;line-height:1.5}.md-nav--primary .md-nav__title{position:relative;height:5.6rem;padding:3rem .8rem .2rem;color:var(--md-default-fg-color--light);font-weight:400;line-height:2.4rem;white-space:nowrap;background-color:var(--md-default-fg-color--lightest);cursor:pointer}.md-nav--primary .md-nav__title .md-nav__icon{position:absolute;top:.4rem;left:.4rem;display:block;width:1.2rem;height:1.2rem;margin:.2rem}[dir=rtl] .md-nav--primary .md-nav__title .md-nav__icon{right:.4rem;left:initial}.md-nav--primary .md-nav__title~.md-nav__list{overflow-y:auto;background-color:var(--md-default-bg-color);box-shadow:inset 0 .05rem 0 var(--md-default-fg-color--lightest);-webkit-scroll-snap-type:y mandatory;-ms-scroll-snap-type:y mandatory;scroll-snap-type:y mandatory;touch-action:pan-y}.md-nav--primary .md-nav__title~.md-nav__list>.md-nav__item:first-child{border-top:0}.md-nav--primary .md-nav__title[for=__drawer]{position:relative;color:var(--md-primary-bg-color);background-color:var(--md-primary-fg-color)}.md-nav--primary .md-nav__title[for=__drawer] .md-nav__button{position:absolute;top:.2rem;left:.2rem;display:block;margin:.2rem;padding:.4rem;font-size:2.4rem}html [dir=rtl] .md-nav--primary .md-nav__title[for=__drawer] .md-nav__button{right:.2rem;left:initial}.md-nav--primary 
.md-nav__list{flex:1}.md-nav--primary .md-nav__item{padding:0;border-top:.05rem solid var(--md-default-fg-color--lightest)}[dir=rtl] .md-nav--primary .md-nav__item{padding:0}.md-nav--primary .md-nav__item--nested>.md-nav__link{padding-right:2.4rem}[dir=rtl] .md-nav--primary .md-nav__item--nested>.md-nav__link{padding-right:.8rem;padding-left:2.4rem}.md-nav--primary .md-nav__item--active>.md-nav__link{color:var(--md-primary-fg-color)}.md-nav--primary .md-nav__item--active>.md-nav__link:focus,.md-nav--primary .md-nav__item--active>.md-nav__link:hover{color:var(--md-accent-fg-color)}.md-nav--primary .md-nav__link{position:relative;margin-top:0;padding:.6rem .8rem}.md-nav--primary .md-nav__link .md-nav__icon{position:absolute;top:50%;right:.6rem;margin-top:-0.6rem;color:inherit;font-size:1.2rem}[dir=rtl] .md-nav--primary .md-nav__link .md-nav__icon{right:initial;left:.6rem}[dir=rtl] .md-nav--primary .md-nav__icon svg{transform:scale(-1)}.md-nav--primary .md-nav--secondary .md-nav__link{position:static}.md-nav--primary .md-nav--secondary .md-nav{position:static;background-color:transparent}.md-nav--primary .md-nav--secondary .md-nav .md-nav__link{padding-left:1.4rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav__link{padding-right:1.4rem;padding-left:initial}.md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav__link{padding-left:2rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav__link{padding-right:2rem;padding-left:initial}.md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav__link{padding-left:2.6rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav__link{padding-right:2.6rem;padding-left:initial}.md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav .md-nav__link{padding-left:3.2rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav .md-nav__link{padding-right:3.2rem;padding-left:initial}.md-nav__toggle~.md-nav{display:flex;transform:translateX(100%);opacity:0;transition:transform 250ms cubic-bezier(0.8, 0, 0.6, 1),opacity 125ms 50ms}[dir=rtl] .md-nav__toggle~.md-nav{transform:translateX(-100%)}.md-nav__toggle:checked~.md-nav{transform:translateX(0);opacity:1;transition:transform 250ms cubic-bezier(0.4, 0, 0.2, 1),opacity 125ms 125ms}.md-nav__toggle:checked~.md-nav>.md-nav__list{-webkit-backface-visibility:hidden;backface-visibility:hidden}}@media screen and (max-width: 59.9375em){html .md-nav__link[for=__toc]{display:block;padding-right:2.4rem}html .md-nav__link[for=__toc]+.md-nav__link{display:none}html .md-nav__link[for=__toc]~.md-nav{display:flex}html [dir=rtl] .md-nav__link{padding-right:.8rem;padding-left:2.4rem}.md-nav__source{display:block;padding:0 .2rem;color:var(--md-primary-bg-color);background-color:var(--md-primary-fg-color--dark)}}@media screen and (min-width: 60em){.md-nav--secondary .md-nav__title[for=__toc]{scroll-snap-align:start}.md-nav--secondary .md-nav__title .md-nav__icon{display:none}}@media screen and (min-width: 76.25em){.md-nav{transition:max-height 250ms cubic-bezier(0.86, 0, 0.07, 1)}.md-nav--primary .md-nav__title[for=__drawer]{scroll-snap-align:start}.md-nav--primary .md-nav__title .md-nav__icon{display:none}.md-nav__toggle~.md-nav{display:none}.md-nav__toggle:checked~.md-nav{display:block}.md-nav__item--nested>.md-nav>.md-nav__title{display:none}.md-nav__icon{float:right;height:.9rem;transition:transform 250ms}[dir=rtl] .md-nav__icon{float:left;transform:rotate(180deg)}.md-nav__icon 
svg{display:inline-block;width:.9rem;height:.9rem;vertical-align:-0.1rem}.md-nav__item--nested .md-nav__toggle:checked~.md-nav__link .md-nav__icon{transform:rotate(90deg)}}.md-search{position:relative}.no-js .md-search{display:none}@media screen and (min-width: 60em){.md-search{padding:.2rem 0}}.md-search__overlay{z-index:1;opacity:0}@media screen and (max-width: 59.9375em){.md-search__overlay{position:absolute;top:.2rem;left:-2.2rem;width:2rem;height:2rem;overflow:hidden;background-color:var(--md-default-bg-color);border-radius:1rem;transform-origin:center;transition:transform 300ms 100ms,opacity 200ms 200ms;pointer-events:none}[dir=rtl] .md-search__overlay{right:-2.2rem;left:initial}[data-md-toggle=search]:checked~.md-header .md-search__overlay{opacity:1;transition:transform 400ms,opacity 100ms}}@media screen and (max-width: 29.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(45)}}@media screen and (min-width: 30em)and (max-width: 44.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(60)}}@media screen and (min-width: 45em)and (max-width: 59.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(75)}}@media screen and (min-width: 60em){.md-search__overlay{position:fixed;top:0;left:0;width:0;height:0;background-color:var(--md-default-fg-color--light);cursor:pointer;transition:width 0ms 250ms,height 0ms 250ms,opacity 250ms}[dir=rtl] .md-search__overlay{right:0;left:initial}[data-md-toggle=search]:checked~.md-header .md-search__overlay{width:100%;height:100%;opacity:1;transition:width 0ms,height 0ms,opacity 250ms}}.md-search__inner{-webkit-backface-visibility:hidden;backface-visibility:hidden}@media screen and (max-width: 59.9375em){.md-search__inner{position:fixed;top:0;left:100%;z-index:2;width:100%;height:100%;transform:translateX(5%);opacity:0;transition:right 0ms 300ms,left 0ms 300ms,transform 150ms 150ms cubic-bezier(0.4, 0, 0.2, 1),opacity 150ms 150ms}[data-md-toggle=search]:checked~.md-header .md-search__inner{left:0;transform:translateX(0);opacity:1;transition:right 0ms 0ms,left 0ms 0ms,transform 150ms 150ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 150ms 150ms}[dir=rtl] [data-md-toggle=search]:checked~.md-header .md-search__inner{right:0;left:initial}html [dir=rtl] .md-search__inner{right:100%;left:initial;transform:translateX(-5%)}}@media screen and (min-width: 60em){.md-search__inner{position:relative;float:right;width:11.7rem;padding:.1rem 0;transition:width 250ms cubic-bezier(0.1, 0.7, 0.1, 1)}[dir=rtl] .md-search__inner{float:left}}@media screen and (min-width: 60em)and (max-width: 76.1875em){[data-md-toggle=search]:checked~.md-header .md-search__inner{width:23.4rem}}@media screen and (min-width: 76.25em){[data-md-toggle=search]:checked~.md-header .md-search__inner{width:34.4rem}}.md-search__form{position:relative}@media screen and (min-width: 60em){.md-search__form{border-radius:.1rem}}.md-search__input{position:relative;z-index:2;padding:0 2.2rem 0 3.6rem;text-overflow:ellipsis}[dir=rtl] .md-search__input{padding:0 3.6rem 0 2.2rem}.md-search__input::-webkit-input-placeholder{-webkit-transition:color 250ms;transition:color 250ms}.md-search__input::-moz-placeholder{-moz-transition:color 250ms;transition:color 250ms}.md-search__input::-ms-input-placeholder{-ms-transition:color 250ms;transition:color 250ms}.md-search__input::placeholder{transition:color 
250ms}.md-search__input::-webkit-input-placeholder{color:var(--md-default-fg-color--light)}.md-search__input::-moz-placeholder{color:var(--md-default-fg-color--light)}.md-search__input::-ms-input-placeholder{color:var(--md-default-fg-color--light)}.md-search__input~.md-search__icon,.md-search__input::placeholder{color:var(--md-default-fg-color--light)}.md-search__input::-ms-clear{display:none}@media screen and (max-width: 59.9375em){.md-search__input{width:100%;height:2.4rem;font-size:.9rem}}@media screen and (min-width: 60em){.md-search__input{width:100%;height:1.8rem;padding-left:2.2rem;color:inherit;font-size:.8rem;background-color:var(--md-default-fg-color--lighter);border-radius:.1rem;transition:color 250ms,background-color 250ms}[dir=rtl] .md-search__input{padding-right:2.2rem}.md-search__input+.md-search__icon{color:var(--md-primary-bg-color)}.md-search__input::-webkit-input-placeholder{color:var(--md-primary-bg-color--light)}.md-search__input::-moz-placeholder{color:var(--md-primary-bg-color--light)}.md-search__input::-ms-input-placeholder{color:var(--md-primary-bg-color--light)}.md-search__input::placeholder{color:var(--md-primary-bg-color--light)}.md-search__input:hover{background-color:var(--md-default-bg-color--lightest)}[data-md-toggle=search]:checked~.md-header .md-search__input{color:var(--md-default-fg-color);text-overflow:clip;background-color:var(--md-default-bg-color);border-radius:.1rem .1rem 0 0}[data-md-toggle=search]:checked~.md-header .md-search__input::-webkit-input-placeholder{color:var(--md-default-fg-color--light)}[data-md-toggle=search]:checked~.md-header .md-search__input::-moz-placeholder{color:var(--md-default-fg-color--light)}[data-md-toggle=search]:checked~.md-header .md-search__input::-ms-input-placeholder{color:var(--md-default-fg-color--light)}[data-md-toggle=search]:checked~.md-header .md-search__input+.md-search__icon,[data-md-toggle=search]:checked~.md-header .md-search__input::placeholder{color:var(--md-default-fg-color--light)}}.md-search__icon{position:absolute;z-index:2;width:1.2rem;height:1.2rem;cursor:pointer;transition:color 250ms,opacity 250ms}.md-search__icon:hover{opacity:.7}.md-search__icon[for=__search]{top:.3rem;left:.5rem}[dir=rtl] .md-search__icon[for=__search]{right:.5rem;left:initial}[dir=rtl] .md-search__icon[for=__search] svg{transform:scaleX(-1)}@media screen and (max-width: 59.9375em){.md-search__icon[for=__search]{top:.6rem;left:.8rem}[dir=rtl] .md-search__icon[for=__search]{right:.8rem;left:initial}.md-search__icon[for=__search] svg:first-child{display:none}}@media screen and (min-width: 60em){.md-search__icon[for=__search]{pointer-events:none}.md-search__icon[for=__search] svg:last-child{display:none}}.md-search__icon[type=reset]{top:.3rem;right:.5rem;transform:scale(0.75);opacity:0;transition:transform 150ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 150ms;pointer-events:none}[dir=rtl] .md-search__icon[type=reset]{right:initial;left:.5rem}@media screen and (max-width: 59.9375em){.md-search__icon[type=reset]{top:.6rem;right:.8rem}[dir=rtl] .md-search__icon[type=reset]{right:initial;left:.8rem}}[data-md-toggle=search]:checked~.md-header .md-search__input:not(:placeholder-shown)~.md-search__icon[type=reset]{transform:scale(1);opacity:1;pointer-events:initial}[data-md-toggle=search]:checked~.md-header .md-search__input:not(:placeholder-shown)~.md-search__icon[type=reset]:hover{opacity:.7}.md-search__output{position:absolute;z-index:1;width:100%;overflow:hidden;border-radius:0 0 .1rem .1rem}@media screen and (max-width: 
59.9375em){.md-search__output{top:2.4rem;bottom:0}}@media screen and (min-width: 60em){.md-search__output{top:1.9rem;opacity:0;transition:opacity 400ms}[data-md-toggle=search]:checked~.md-header .md-search__output{box-shadow:0 6px 10px 0 rgba(0,0,0,.14),0 1px 18px 0 rgba(0,0,0,.12),0 3px 5px -1px rgba(0,0,0,.4);opacity:1}}.md-search__scrollwrap{height:100%;overflow-y:auto;background-color:var(--md-default-bg-color);box-shadow:inset 0 .05rem 0 var(--md-default-fg-color--lightest);-webkit-backface-visibility:hidden;backface-visibility:hidden;-webkit-scroll-snap-type:y mandatory;-ms-scroll-snap-type:y mandatory;scroll-snap-type:y mandatory;touch-action:pan-y}@media(-webkit-max-device-pixel-ratio: 1), (max-resolution: 1dppx){.md-search__scrollwrap{transform:translateZ(0)}}@media screen and (min-width: 60em)and (max-width: 76.1875em){.md-search__scrollwrap{width:23.4rem}}@media screen and (min-width: 76.25em){.md-search__scrollwrap{width:34.4rem}}@media screen and (min-width: 60em){.md-search__scrollwrap{max-height:0}[data-md-toggle=search]:checked~.md-header .md-search__scrollwrap{max-height:75vh}.md-search__scrollwrap::-webkit-scrollbar{width:.2rem;height:.2rem}.md-search__scrollwrap::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-search__scrollwrap::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}}.md-search-result{color:var(--md-default-fg-color);word-break:break-word}.md-search-result__meta{padding:0 .8rem;color:var(--md-default-fg-color--light);font-size:.64rem;line-height:1.8rem;background-color:var(--md-default-fg-color--lightest);scroll-snap-align:start}@media screen and (min-width: 60em){.md-search-result__meta{padding-left:2.2rem}[dir=rtl] .md-search-result__meta{padding-right:2.2rem;padding-left:initial}}.md-search-result__list{margin:0;padding:0;list-style:none;border-top:.05rem solid var(--md-default-fg-color--lightest)}.md-search-result__item{box-shadow:0 -0.05rem 0 var(--md-default-fg-color--lightest)}.md-search-result__link{display:block;outline:0;transition:background 250ms;scroll-snap-align:start}.md-search-result__link:focus,.md-search-result__link:hover{background-color:var(--md-accent-fg-color--transparent)}.md-search-result__link:focus .md-search-result__article::before,.md-search-result__link:hover .md-search-result__article::before{opacity:.7}.md-search-result__link:last-child .md-search-result__teaser{margin-bottom:.6rem}.md-search-result__article{position:relative;padding:0 .8rem;overflow:auto}@media screen and (min-width: 60em){.md-search-result__article{padding-left:2.2rem}[dir=rtl] .md-search-result__article{padding-right:2.2rem;padding-left:.8rem}}.md-search-result__article--document .md-search-result__title{margin:.55rem 0;font-weight:400;font-size:.8rem;line-height:1.4}.md-search-result__icon{position:absolute;left:0;margin:.1rem;padding:.4rem;color:var(--md-default-fg-color--light)}[dir=rtl] .md-search-result__icon{right:0;left:initial}[dir=rtl] .md-search-result__icon svg{transform:scaleX(-1)}@media screen and (max-width: 59.9375em){.md-search-result__icon{display:none}}.md-search-result__title{margin:.5em 0;font-weight:700;font-size:.64rem;line-height:1.4}.md-search-result__teaser{display:-webkit-box;max-height:1.65rem;margin:.5em 0;overflow:hidden;color:var(--md-default-fg-color--light);font-size:.64rem;line-height:1.4;text-overflow:ellipsis;-webkit-box-orient:vertical;-webkit-line-clamp:2}@media screen and (max-width: 
44.9375em){.md-search-result__teaser{max-height:2.5rem;-webkit-line-clamp:3}}@media screen and (min-width: 60em)and (max-width: 76.1875em){.md-search-result__teaser{max-height:2.5rem;-webkit-line-clamp:3}}.md-search-result em{font-weight:700;font-style:normal;text-decoration:underline}.md-sidebar{position:-webkit-sticky;position:sticky;top:2.4rem;width:12.1rem;padding:1.2rem 0;overflow:hidden}@media print{.md-sidebar{display:none}}@media screen and (max-width: 76.1875em){.md-sidebar--primary{position:fixed;top:0;left:-12.1rem;z-index:3;width:12.1rem;height:100%;background-color:var(--md-default-bg-color);transform:translateX(0);transition:transform 250ms cubic-bezier(0.4, 0, 0.2, 1),box-shadow 250ms}[dir=rtl] .md-sidebar--primary{right:-12.1rem;left:initial}[data-md-toggle=drawer]:checked~.md-container .md-sidebar--primary{box-shadow:0 8px 10px 1px rgba(0,0,0,.14),0 3px 14px 2px rgba(0,0,0,.12),0 5px 5px -3px rgba(0,0,0,.4);transform:translateX(12.1rem)}[dir=rtl] [data-md-toggle=drawer]:checked~.md-container .md-sidebar--primary{transform:translateX(-12.1rem)}.md-sidebar--primary .md-sidebar__scrollwrap{overflow:hidden}}.md-sidebar--secondary{display:none;order:2}@media screen and (min-width: 60em){.md-sidebar--secondary{display:block}.md-sidebar--secondary .md-sidebar__scrollwrap{touch-action:pan-y}}.md-sidebar__scrollwrap{max-height:100%;margin:0 .2rem;overflow-y:auto;-webkit-backface-visibility:hidden;backface-visibility:hidden;-webkit-scroll-snap-type:y mandatory;-ms-scroll-snap-type:y mandatory;scroll-snap-type:y mandatory}@media screen and (max-width: 76.1875em){.md-sidebar--primary .md-sidebar__scrollwrap{position:absolute;top:0;right:0;bottom:0;left:0;margin:0;-webkit-scroll-snap-type:none;-ms-scroll-snap-type:none;scroll-snap-type:none}}.md-sidebar__scrollwrap::-webkit-scrollbar{width:.2rem;height:.2rem}.md-sidebar__scrollwrap::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-sidebar__scrollwrap::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}@-webkit-keyframes md-source__facts--done{0%{height:0}100%{height:.65rem}}@keyframes md-source__facts--done{0%{height:0}100%{height:.65rem}}@-webkit-keyframes md-source__fact--done{0%{transform:translateY(100%);opacity:0}50%{opacity:0}100%{transform:translateY(0%);opacity:1}}@keyframes md-source__fact--done{0%{transform:translateY(100%);opacity:0}50%{opacity:0}100%{transform:translateY(0%);opacity:1}}.md-source{display:block;font-size:.65rem;line-height:1.2;white-space:nowrap;-webkit-backface-visibility:hidden;backface-visibility:hidden;transition:opacity 250ms}.md-source:hover{opacity:.7}.md-source__icon{display:inline-block;width:2.4rem;height:2.4rem;vertical-align:middle}.md-source__icon svg{margin-top:.6rem;margin-left:.6rem}[dir=rtl] .md-source__icon svg{margin-right:.6rem;margin-left:initial}.md-source__icon+.md-source__repository{margin-left:-2rem;padding-left:2rem}[dir=rtl] .md-source__icon+.md-source__repository{margin-right:-2rem;margin-left:initial;padding-right:2rem;padding-left:initial}.md-source__repository{display:inline-block;max-width:calc(100% - 1.2rem);margin-left:.6rem;overflow:hidden;font-weight:700;text-overflow:ellipsis;vertical-align:middle}.md-source__facts{margin:0;padding:0;overflow:hidden;font-weight:700;font-size:.55rem;list-style-type:none;opacity:.75}[data-md-state=done] .md-source__facts{-webkit-animation:md-source__facts--done 250ms ease-in;animation:md-source__facts--done 250ms ease-in}.md-source__fact{float:left}[dir=rtl] 
.md-source__fact{float:right}[data-md-state=done] .md-source__fact{-webkit-animation:md-source__fact--done 400ms ease-out;animation:md-source__fact--done 400ms ease-out}.md-source__fact::before{margin:0 .1rem;content:"·"}.md-source__fact:first-child::before{display:none}.md-tabs{width:100%;overflow:auto;color:var(--md-primary-bg-color);background-color:var(--md-primary-fg-color);transition:background 250ms}.no-js .md-tabs{transition:none}@media screen and (max-width: 76.1875em){.md-tabs{display:none}}@media print{.md-tabs{display:none}}.md-tabs__list{margin:0;margin-left:.2rem;padding:0;white-space:nowrap;list-style:none;contain:content}[dir=rtl] .md-tabs__list{margin-right:.2rem;margin-left:initial}.md-tabs__item{display:inline-block;height:2.4rem;padding-right:.6rem;padding-left:.6rem}.md-tabs__link{display:block;margin-top:.8rem;font-size:.7rem;opacity:.7;transition:transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 250ms}.no-js .md-tabs__link{transition:none}.md-tabs__link--active,.md-tabs__link:hover{color:inherit;opacity:1}.md-tabs__item:nth-child(2) .md-tabs__link{transition-delay:20ms}.md-tabs__item:nth-child(3) .md-tabs__link{transition-delay:40ms}.md-tabs__item:nth-child(4) .md-tabs__link{transition-delay:60ms}.md-tabs__item:nth-child(5) .md-tabs__link{transition-delay:80ms}.md-tabs__item:nth-child(6) .md-tabs__link{transition-delay:100ms}.md-tabs__item:nth-child(7) .md-tabs__link{transition-delay:120ms}.md-tabs__item:nth-child(8) .md-tabs__link{transition-delay:140ms}.md-tabs__item:nth-child(9) .md-tabs__link{transition-delay:160ms}.md-tabs__item:nth-child(10) .md-tabs__link{transition-delay:180ms}.md-tabs__item:nth-child(11) .md-tabs__link{transition-delay:200ms}.md-tabs__item:nth-child(12) .md-tabs__link{transition-delay:220ms}.md-tabs__item:nth-child(13) .md-tabs__link{transition-delay:240ms}.md-tabs__item:nth-child(14) .md-tabs__link{transition-delay:260ms}.md-tabs__item:nth-child(15) .md-tabs__link{transition-delay:280ms}.md-tabs__item:nth-child(16) .md-tabs__link{transition-delay:300ms}.md-tabs[data-md-state=hidden]{pointer-events:none}.md-tabs[data-md-state=hidden] .md-tabs__link{transform:translateY(50%);opacity:0;transition:color 250ms,transform 0ms 400ms,opacity 100ms}@media screen and (min-width: 76.25em){.md-tabs~.md-main .md-nav--primary>.md-nav__list>.md-nav__item--nested{display:none}.md-tabs--active~.md-main .md-nav--primary .md-nav__title{display:block;padding:0 .6rem;pointer-events:none;scroll-snap-align:start}.md-tabs--active~.md-main .md-nav--primary .md-nav__title[for=__drawer]{display:none}.md-tabs--active~.md-main .md-nav--primary>.md-nav__list>.md-nav__item{display:none}.md-tabs--active~.md-main .md-nav--primary>.md-nav__list>.md-nav__item--active{display:block;padding:0}.md-tabs--active~.md-main .md-nav--primary>.md-nav__list>.md-nav__item--active>.md-nav__link{display:none}.md-tabs--active~.md-main .md-nav[data-md-level="1"]>.md-nav__list>.md-nav__item{padding:0 .6rem}.md-tabs--active~.md-main .md-nav[data-md-level="1"] .md-nav .md-nav__title{display:none}}:root{--md-admonition-icon--note: url("data:image/svg+xml;utf8,");--md-admonition-icon--abstract: url("data:image/svg+xml;utf8,");--md-admonition-icon--info: url("data:image/svg+xml;utf8,");--md-admonition-icon--tip: url("data:image/svg+xml;utf8,");--md-admonition-icon--success: url("data:image/svg+xml;utf8,");--md-admonition-icon--question: url("data:image/svg+xml;utf8,");--md-admonition-icon--warning: url("data:image/svg+xml;utf8,");--md-admonition-icon--failure: 
url("data:image/svg+xml;utf8,");--md-admonition-icon--danger: url("data:image/svg+xml;utf8,");--md-admonition-icon--bug: url("data:image/svg+xml;utf8,");--md-admonition-icon--example: url("data:image/svg+xml;utf8,");--md-admonition-icon--quote: url("data:image/svg+xml;utf8,")}.md-typeset .admonition,.md-typeset details{margin:1.5625em 0;padding:0 .6rem;overflow:hidden;font-size:.64rem;page-break-inside:avoid;border-left:.2rem solid #448aff;border-radius:.1rem;box-shadow:0 .2rem .5rem rgba(0,0,0,.05),0 0 .05rem rgba(0,0,0,.1)}[dir=rtl] .md-typeset .admonition,[dir=rtl] .md-typeset details{border-right:.2rem solid #448aff;border-left:none}@media print{.md-typeset .admonition,.md-typeset details{box-shadow:none}}html .md-typeset .admonition>:last-child,html .md-typeset details>:last-child{margin-bottom:.6rem}.md-typeset .admonition .admonition,.md-typeset details .admonition,.md-typeset .admonition details,.md-typeset details details{margin:1em 0}.md-typeset .admonition .md-typeset__scrollwrap,.md-typeset details .md-typeset__scrollwrap{margin:1em -0.6rem}.md-typeset .admonition .md-typeset__table,.md-typeset details .md-typeset__table{padding:0 .6rem}.md-typeset .admonition-title,.md-typeset summary{position:relative;margin:0 -0.6rem;padding:.4rem .6rem .4rem 2rem;font-weight:700;background-color:rgba(68,138,255,.1)}[dir=rtl] .md-typeset .admonition-title,[dir=rtl] .md-typeset summary{padding:.4rem 2rem .4rem .6rem}html .md-typeset .admonition-title:last-child,html .md-typeset summary:last-child{margin-bottom:0}.md-typeset .admonition-title::before,.md-typeset summary::before{position:absolute;left:.6rem;width:1rem;height:1rem;background-color:#448aff;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note);content:""}[dir=rtl] .md-typeset .admonition-title::before,[dir=rtl] .md-typeset summary::before{right:.6rem;left:initial}.md-typeset .admonition-title code,.md-typeset summary code{margin:initial;padding:initial;color:currentColor;background-color:transparent;border-radius:initial;box-shadow:none}.md-typeset .admonition.note,.md-typeset details.note{border-color:#448aff}.md-typeset .note>.admonition-title,.md-typeset .note>summary{background-color:rgba(68,138,255,.1)}.md-typeset .note>.admonition-title::before,.md-typeset .note>summary::before{background-color:#448aff;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note)}.md-typeset .admonition.abstract,.md-typeset details.abstract,.md-typeset .admonition.tldr,.md-typeset details.tldr,.md-typeset .admonition.summary,.md-typeset details.summary{border-color:#00b0ff}.md-typeset .abstract>.admonition-title,.md-typeset .abstract>summary,.md-typeset .tldr>.admonition-title,.md-typeset .tldr>summary,.md-typeset .summary>.admonition-title,.md-typeset .summary>summary{background-color:rgba(0,176,255,.1)}.md-typeset .abstract>.admonition-title::before,.md-typeset .abstract>summary::before,.md-typeset .tldr>.admonition-title::before,.md-typeset .tldr>summary::before,.md-typeset .summary>.admonition-title::before,.md-typeset .summary>summary::before{background-color:#00b0ff;-webkit-mask-image:var(--md-admonition-icon--abstract);mask-image:var(--md-admonition-icon--abstract)}.md-typeset .admonition.info,.md-typeset details.info,.md-typeset .admonition.todo,.md-typeset details.todo{border-color:#00b8d4}.md-typeset .info>.admonition-title,.md-typeset .info>summary,.md-typeset .todo>.admonition-title,.md-typeset .todo>summary{background-color:rgba(0,184,212,.1)}.md-typeset 
.info>.admonition-title::before,.md-typeset .info>summary::before,.md-typeset .todo>.admonition-title::before,.md-typeset .todo>summary::before{background-color:#00b8d4;-webkit-mask-image:var(--md-admonition-icon--info);mask-image:var(--md-admonition-icon--info)}.md-typeset .admonition.tip,.md-typeset details.tip,.md-typeset .admonition.important,.md-typeset details.important,.md-typeset .admonition.hint,.md-typeset details.hint{border-color:#00bfa5}.md-typeset .tip>.admonition-title,.md-typeset .tip>summary,.md-typeset .important>.admonition-title,.md-typeset .important>summary,.md-typeset .hint>.admonition-title,.md-typeset .hint>summary{background-color:rgba(0,191,165,.1)}.md-typeset .tip>.admonition-title::before,.md-typeset .tip>summary::before,.md-typeset .important>.admonition-title::before,.md-typeset .important>summary::before,.md-typeset .hint>.admonition-title::before,.md-typeset .hint>summary::before{background-color:#00bfa5;-webkit-mask-image:var(--md-admonition-icon--tip);mask-image:var(--md-admonition-icon--tip)}.md-typeset .admonition.success,.md-typeset details.success,.md-typeset .admonition.done,.md-typeset details.done,.md-typeset .admonition.check,.md-typeset details.check{border-color:#00c853}.md-typeset .success>.admonition-title,.md-typeset .success>summary,.md-typeset .done>.admonition-title,.md-typeset .done>summary,.md-typeset .check>.admonition-title,.md-typeset .check>summary{background-color:rgba(0,200,83,.1)}.md-typeset .success>.admonition-title::before,.md-typeset .success>summary::before,.md-typeset .done>.admonition-title::before,.md-typeset .done>summary::before,.md-typeset .check>.admonition-title::before,.md-typeset .check>summary::before{background-color:#00c853;-webkit-mask-image:var(--md-admonition-icon--success);mask-image:var(--md-admonition-icon--success)}.md-typeset .admonition.question,.md-typeset details.question,.md-typeset .admonition.faq,.md-typeset details.faq,.md-typeset .admonition.help,.md-typeset details.help{border-color:#64dd17}.md-typeset .question>.admonition-title,.md-typeset .question>summary,.md-typeset .faq>.admonition-title,.md-typeset .faq>summary,.md-typeset .help>.admonition-title,.md-typeset .help>summary{background-color:rgba(100,221,23,.1)}.md-typeset .question>.admonition-title::before,.md-typeset .question>summary::before,.md-typeset .faq>.admonition-title::before,.md-typeset .faq>summary::before,.md-typeset .help>.admonition-title::before,.md-typeset .help>summary::before{background-color:#64dd17;-webkit-mask-image:var(--md-admonition-icon--question);mask-image:var(--md-admonition-icon--question)}.md-typeset .admonition.warning,.md-typeset details.warning,.md-typeset .admonition.attention,.md-typeset details.attention,.md-typeset .admonition.caution,.md-typeset details.caution{border-color:#ff9100}.md-typeset .warning>.admonition-title,.md-typeset .warning>summary,.md-typeset .attention>.admonition-title,.md-typeset .attention>summary,.md-typeset .caution>.admonition-title,.md-typeset .caution>summary{background-color:rgba(255,145,0,.1)}.md-typeset .warning>.admonition-title::before,.md-typeset .warning>summary::before,.md-typeset .attention>.admonition-title::before,.md-typeset .attention>summary::before,.md-typeset .caution>.admonition-title::before,.md-typeset .caution>summary::before{background-color:#ff9100;-webkit-mask-image:var(--md-admonition-icon--warning);mask-image:var(--md-admonition-icon--warning)}.md-typeset .admonition.failure,.md-typeset details.failure,.md-typeset .admonition.missing,.md-typeset 
details.missing,.md-typeset .admonition.fail,.md-typeset details.fail{border-color:#ff5252}.md-typeset .failure>.admonition-title,.md-typeset .failure>summary,.md-typeset .missing>.admonition-title,.md-typeset .missing>summary,.md-typeset .fail>.admonition-title,.md-typeset .fail>summary{background-color:rgba(255,82,82,.1)}.md-typeset .failure>.admonition-title::before,.md-typeset .failure>summary::before,.md-typeset .missing>.admonition-title::before,.md-typeset .missing>summary::before,.md-typeset .fail>.admonition-title::before,.md-typeset .fail>summary::before{background-color:#ff5252;-webkit-mask-image:var(--md-admonition-icon--failure);mask-image:var(--md-admonition-icon--failure)}.md-typeset .admonition.danger,.md-typeset details.danger,.md-typeset .admonition.error,.md-typeset details.error{border-color:#ff1744}.md-typeset .danger>.admonition-title,.md-typeset .danger>summary,.md-typeset .error>.admonition-title,.md-typeset .error>summary{background-color:rgba(255,23,68,.1)}.md-typeset .danger>.admonition-title::before,.md-typeset .danger>summary::before,.md-typeset .error>.admonition-title::before,.md-typeset .error>summary::before{background-color:#ff1744;-webkit-mask-image:var(--md-admonition-icon--danger);mask-image:var(--md-admonition-icon--danger)}.md-typeset .admonition.bug,.md-typeset details.bug{border-color:#f50057}.md-typeset .bug>.admonition-title,.md-typeset .bug>summary{background-color:rgba(245,0,87,.1)}.md-typeset .bug>.admonition-title::before,.md-typeset .bug>summary::before{background-color:#f50057;-webkit-mask-image:var(--md-admonition-icon--bug);mask-image:var(--md-admonition-icon--bug)}.md-typeset .admonition.example,.md-typeset details.example{border-color:#651fff}.md-typeset .example>.admonition-title,.md-typeset .example>summary{background-color:rgba(101,31,255,.1)}.md-typeset .example>.admonition-title::before,.md-typeset .example>summary::before{background-color:#651fff;-webkit-mask-image:var(--md-admonition-icon--example);mask-image:var(--md-admonition-icon--example)}.md-typeset .admonition.quote,.md-typeset details.quote,.md-typeset .admonition.cite,.md-typeset details.cite{border-color:#9e9e9e}.md-typeset .quote>.admonition-title,.md-typeset .quote>summary,.md-typeset .cite>.admonition-title,.md-typeset .cite>summary{background-color:rgba(158,158,158,.1)}.md-typeset .quote>.admonition-title::before,.md-typeset .quote>summary::before,.md-typeset .cite>.admonition-title::before,.md-typeset .cite>summary::before{background-color:#9e9e9e;-webkit-mask-image:var(--md-admonition-icon--quote);mask-image:var(--md-admonition-icon--quote)}.codehilite .o,.highlight .o{color:inherit}.codehilite .ow,.highlight .ow{color:inherit}.codehilite .ge,.highlight .ge{color:#000}.codehilite .gr,.highlight .gr{color:#a00}.codehilite .gh,.highlight .gh{color:#999}.codehilite .go,.highlight .go{color:#888}.codehilite .gp,.highlight .gp{color:#555}.codehilite .gs,.highlight .gs{color:inherit}.codehilite .gu,.highlight .gu{color:#aaa}.codehilite .gt,.highlight .gt{color:#a00}.codehilite .gd,.highlight .gd{background-color:#fdd}.codehilite .gi,.highlight .gi{background-color:#dfd}.codehilite .k,.highlight .k{color:#3b78e7}.codehilite .kc,.highlight .kc{color:#a71d5d}.codehilite .kd,.highlight .kd{color:#3b78e7}.codehilite .kn,.highlight .kn{color:#3b78e7}.codehilite .kp,.highlight .kp{color:#a71d5d}.codehilite .kr,.highlight .kr{color:#3e61a2}.codehilite .kt,.highlight .kt{color:#3e61a2}.codehilite .c,.highlight .c{color:#999}.codehilite .cm,.highlight .cm{color:#999}.codehilite 
.cp,.highlight .cp{color:#666}.codehilite .c1,.highlight .c1{color:#999}.codehilite .ch,.highlight .ch{color:#999}.codehilite .cs,.highlight .cs{color:#999}.codehilite .na,.highlight .na{color:#c2185b}.codehilite .nb,.highlight .nb{color:#c2185b}.codehilite .bp,.highlight .bp{color:#3e61a2}.codehilite .nc,.highlight .nc{color:#c2185b}.codehilite .no,.highlight .no{color:#3e61a2}.codehilite .nd,.highlight .nd{color:#666}.codehilite .ni,.highlight .ni{color:#666}.codehilite .ne,.highlight .ne{color:#c2185b}.codehilite .nf,.highlight .nf{color:#c2185b}.codehilite .nl,.highlight .nl{color:#3b5179}.codehilite .nn,.highlight .nn{color:#ec407a}.codehilite .nt,.highlight .nt{color:#3b78e7}.codehilite .nv,.highlight .nv{color:#3e61a2}.codehilite .vc,.highlight .vc{color:#3e61a2}.codehilite .vg,.highlight .vg{color:#3e61a2}.codehilite .vi,.highlight .vi{color:#3e61a2}.codehilite .nx,.highlight .nx{color:#ec407a}.codehilite .m,.highlight .m{color:#e74c3c}.codehilite .mf,.highlight .mf{color:#e74c3c}.codehilite .mh,.highlight .mh{color:#e74c3c}.codehilite .mi,.highlight .mi{color:#e74c3c}.codehilite .il,.highlight .il{color:#e74c3c}.codehilite .mo,.highlight .mo{color:#e74c3c}.codehilite .s,.highlight .s{color:#0d904f}.codehilite .sb,.highlight .sb{color:#0d904f}.codehilite .sc,.highlight .sc{color:#0d904f}.codehilite .sd,.highlight .sd{color:#999}.codehilite .s2,.highlight .s2{color:#0d904f}.codehilite .se,.highlight .se{color:#183691}.codehilite .sh,.highlight .sh{color:#183691}.codehilite .si,.highlight .si{color:#183691}.codehilite .sx,.highlight .sx{color:#183691}.codehilite .sr,.highlight .sr{color:#009926}.codehilite .s1,.highlight .s1{color:#0d904f}.codehilite .ss,.highlight .ss{color:#0d904f}.codehilite .err,.highlight .err{color:#a61717}.codehilite .w,.highlight .w{color:transparent}.codehilite .hll,.highlight .hll{display:block;margin:0 -1.1764705882em;padding:0 1.1764705882em;background-color:rgba(255,235,59,.5)}.codehilitetable,.highlighttable{display:block;overflow:hidden}.codehilitetable tbody,.highlighttable tbody,.codehilitetable td,.highlighttable td{display:block;padding:0}.codehilitetable tr,.highlighttable tr{display:flex}.codehilitetable pre,.highlighttable pre{margin:0}.codehilitetable .linenos,.highlighttable .linenos{padding:.525rem 1.1764705882em;padding-right:0;font-size:.85em;background-color:var(--md-code-bg-color);-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.codehilitetable .linenodiv,.highlighttable .linenodiv{padding-right:.5882352941em;box-shadow:inset -0.05rem 0 var(--md-default-fg-color--lightest)}.codehilitetable .linenodiv pre,.highlighttable .linenodiv pre{color:var(--md-default-fg-color--lighter);text-align:right}.codehilitetable .code,.highlighttable .code{flex:1;overflow:hidden}.md-typeset .codehilitetable,.md-typeset .highlighttable{margin:1em 0;direction:ltr;border-radius:.1rem}.md-typeset .codehilitetable code,.md-typeset .highlighttable code{border-radius:0}@media screen and (max-width: 44.9375em){.md-typeset>.codehilite,.md-typeset>.highlight{margin:1em -0.8rem}.md-typeset>.codehilite .hll,.md-typeset>.highlight .hll{margin:0 -0.8rem;padding:0 .8rem}.md-typeset>.codehilite code,.md-typeset>.highlight code{border-radius:0}.md-typeset>.codehilitetable,.md-typeset>.highlighttable{margin:1em -0.8rem;border-radius:0}.md-typeset>.codehilitetable .hll,.md-typeset>.highlighttable .hll{margin:0 -0.8rem;padding:0 .8rem}}:root{--md-footnotes-icon: url("data:image/svg+xml;utf8,")}.md-typeset 
[id^="fnref:"]{display:inline-block}.md-typeset [id^="fnref:"]:target{margin-top:-3.8rem;padding-top:3.8rem;pointer-events:none}.md-typeset [id^="fn:"]::before{display:none;height:0;content:""}.md-typeset [id^="fn:"]:target::before{display:block;margin-top:-3.5rem;padding-top:3.5rem;pointer-events:none}.md-typeset .footnote{color:var(--md-default-fg-color--light);font-size:.64rem}.md-typeset .footnote ol{margin-left:0}.md-typeset .footnote li{transition:color 125ms}.md-typeset .footnote li:target{color:var(--md-default-fg-color)}.md-typeset .footnote li :first-child{margin-top:0}.md-typeset .footnote li:hover .footnote-backref,.md-typeset .footnote li:target .footnote-backref{transform:translateX(0);opacity:1}.md-typeset .footnote li:hover .footnote-backref:hover{color:var(--md-accent-fg-color)}.md-typeset .footnote-ref{display:inline-block;pointer-events:initial}.md-typeset .footnote-backref{display:inline-block;color:var(--md-primary-fg-color);font-size:0;vertical-align:text-bottom;transform:translateX(0.25rem);opacity:0;transition:color 250ms,transform 250ms 250ms,opacity 125ms 250ms}[dir=rtl] .md-typeset .footnote-backref{transform:translateX(-0.25rem)}.md-typeset .footnote-backref::before{display:inline-block;width:.8rem;height:.8rem;background-color:currentColor;-webkit-mask-image:var(--md-footnotes-icon);mask-image:var(--md-footnotes-icon);content:""}[dir=rtl] .md-typeset .footnote-backref::before svg{transform:scaleX(-1)}@media print{.md-typeset .footnote-backref{color:var(--md-primary-fg-color);transform:translateX(0);opacity:1}}.md-typeset .headerlink{display:inline-block;margin-left:.5rem;visibility:hidden;opacity:0;transition:color 250ms,visibility 0ms 500ms,opacity 125ms}[dir=rtl] .md-typeset .headerlink{margin-right:.5rem;margin-left:initial}html body .md-typeset .headerlink{color:var(--md-default-fg-color--lighter)}@media print{.md-typeset .headerlink{display:none}}.md-typeset :hover>.headerlink,.md-typeset :target>.headerlink,.md-typeset .headerlink:focus{visibility:visible;opacity:1;transition:color 250ms,visibility 0ms,opacity 125ms}.md-typeset :target>.headerlink,.md-typeset .headerlink:focus,.md-typeset .headerlink:hover{color:var(--md-accent-fg-color)}.md-typeset h3[id]::before,.md-typeset h2[id]::before,.md-typeset h1[id]::before{display:block;margin-top:-0.4rem;padding-top:.4rem;content:""}.md-typeset h3[id]:target::before,.md-typeset h2[id]:target::before,.md-typeset h1[id]:target::before{margin-top:-3.4rem;padding-top:3.4rem}.md-typeset h4[id]::before{display:block;margin-top:-0.45rem;padding-top:.45rem;content:""}.md-typeset h4[id]:target::before{margin-top:-3.45rem;padding-top:3.45rem}.md-typeset h6[id]::before,.md-typeset h5[id]::before{display:block;margin-top:-0.6rem;padding-top:.6rem;content:""}.md-typeset h6[id]:target::before,.md-typeset h5[id]:target::before{margin-top:-3.6rem;padding-top:3.6rem}.md-typeset .MJXc-display{margin:.75em 0;padding:.75em 0;overflow:auto;touch-action:auto}@media screen and (max-width: 44.9375em){.md-typeset>p>.MJXc-display{margin:.75em -0.8rem;padding:.25em .8rem}}.md-typeset .MathJax_CHTML{outline:0}.md-typeset del.critic,.md-typeset ins.critic,.md-typeset .critic.comment{padding:0 .25em;border-radius:.1rem;-webkit-box-decoration-break:clone;box-decoration-break:clone}.md-typeset del.critic{background-color:#fdd}.md-typeset ins.critic{background-color:#dfd}.md-typeset .critic.comment{color:#999}.md-typeset .critic.comment::before{content:"/* "}.md-typeset .critic.comment::after{content:" */"}.md-typeset 
.critic.block{display:block;margin:1em 0;padding-right:.8rem;padding-left:.8rem;overflow:auto;box-shadow:none}.md-typeset .critic.block :first-child{margin-top:.5em}.md-typeset .critic.block :last-child{margin-bottom:.5em}:root{--md-details-icon: url("data:image/svg+xml;utf8,")}.md-typeset details{display:block;padding-top:0;overflow:visible}.md-typeset details[open]>summary::after{transform:rotate(90deg)}.md-typeset details:not([open]){padding-bottom:0}.md-typeset details:not([open])>summary{border-bottom-right-radius:.1rem}.md-typeset details::after{display:table;content:""}.md-typeset summary{display:block;min-height:1rem;padding:.4rem 1.8rem .4rem 2rem;border-top-right-radius:.1rem;cursor:pointer}[dir=rtl] .md-typeset summary{padding:.4rem 2rem .4rem 1.8rem}.md-typeset summary::-webkit-details-marker{display:none}.md-typeset summary::after{position:absolute;top:.4rem;right:.4rem;width:1rem;height:1rem;background-color:currentColor;-webkit-mask-image:var(--md-details-icon);mask-image:var(--md-details-icon);transform:rotate(0deg);transition:transform 250ms;content:""}[dir=rtl] .md-typeset summary::after{right:initial;left:.4rem;transform:rotate(180deg)}.md-typeset img.emojione,.md-typeset img.twemoji,.md-typeset img.gemoji{width:1.125em;vertical-align:-15%}.md-typeset span.twemoji{display:inline-block;height:1.125em;vertical-align:text-top}.md-typeset span.twemoji svg{width:1.125em;fill:currentColor}.highlight [data-linenos]::before{position:-webkit-sticky;position:sticky;left:-1.1764705882em;float:left;margin-right:1.1764705882em;margin-left:-1.1764705882em;padding-left:1.1764705882em;color:var(--md-default-fg-color--lighter);background-color:var(--md-code-bg-color);box-shadow:inset -0.05rem 0 var(--md-default-fg-color--lightest);content:attr(data-linenos);-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.md-typeset .tabbed-content{display:none;order:99;width:100%;box-shadow:0 -0.05rem var(--md-default-fg-color--lightest)}.md-typeset .tabbed-content>.codehilite:only-child pre,.md-typeset .tabbed-content>.codehilitetable:only-child,.md-typeset .tabbed-content>.highlight:only-child pre,.md-typeset .tabbed-content>.highlighttable:only-child{margin:0}.md-typeset .tabbed-content>.codehilite:only-child pre>code,.md-typeset .tabbed-content>.codehilitetable:only-child>code,.md-typeset .tabbed-content>.highlight:only-child pre>code,.md-typeset .tabbed-content>.highlighttable:only-child>code{border-top-left-radius:0;border-top-right-radius:0}.md-typeset .tabbed-content>.tabbed-set{margin:0}.md-typeset .tabbed-set{position:relative;display:flex;flex-wrap:wrap;margin:1em 0;border-radius:.1rem}.md-typeset .tabbed-set>input{display:none}.md-typeset .tabbed-set>input:checked+label{color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color)}.md-typeset .tabbed-set>input:checked+label+.tabbed-content{display:block}.md-typeset .tabbed-set>label{z-index:1;width:auto;padding:.6rem 1.25em .5rem;color:var(--md-default-fg-color--light);font-weight:700;font-size:.64rem;border-bottom:.1rem solid transparent;cursor:pointer;transition:color 125ms}html .md-typeset .tabbed-set>label:hover{color:var(--md-accent-fg-color)}:root{--md-tasklist-icon: url("data:image/svg+xml;utf8,");--md-tasklist-icon--checked: url("data:image/svg+xml;utf8,")}.md-typeset .task-list-item{position:relative;list-style-type:none}.md-typeset .task-list-item [type=checkbox]{position:absolute;top:.45em;left:-2em}[dir=rtl] .md-typeset .task-list-item 
[type=checkbox]{right:-2em;left:initial}.md-typeset .task-list-control .task-list-indicator::before{position:absolute;top:.15em;left:-1.5em;width:1.25em;height:1.25em;background-color:var(--md-default-fg-color--lightest);-webkit-mask-image:var(--md-tasklist-icon);mask-image:var(--md-tasklist-icon);content:""}[dir=rtl] .md-typeset .task-list-control .task-list-indicator::before{right:-1.5em;left:initial}.md-typeset .task-list-control [type=checkbox]:checked+.task-list-indicator::before{background-color:#00e676;-webkit-mask-image:var(--md-tasklist-icon--checked);mask-image:var(--md-tasklist-icon--checked)}.md-typeset .task-list-control [type=checkbox]{z-index:-1;opacity:0} + +/*# sourceMappingURL=main.62d34fff.min.css.map*/ \ No newline at end of file diff --git a/assets/stylesheets/main.62d34fff.min.css.map b/assets/stylesheets/main.62d34fff.min.css.map new file mode 100644 index 00000000..ebcf2a38 --- /dev/null +++ b/assets/stylesheets/main.62d34fff.min.css.map @@ -0,0 +1 @@ +{"version":3,"sources":["webpack:///./src/assets/stylesheets/main.scss","webpack:///./src/assets/stylesheets/base/_reset.scss","webpack:///./src/assets/stylesheets/base/_colors.scss","webpack:///./src/assets/stylesheets/base/_icons.scss","webpack:///./src/assets/stylesheets/base/_typeset.scss","webpack:///./src/assets/stylesheets/utilities/_break.scss","webpack:///./src/assets/stylesheets/layout/_base.scss","webpack:///./src/assets/stylesheets/layout/_announce.scss","webpack:///./src/assets/stylesheets/layout/_button.scss","webpack:///./src/assets/stylesheets/layout/_clipboard.scss","webpack:///./src/assets/stylesheets/layout/_content.scss","webpack:///./src/assets/stylesheets/layout/_dialog.scss","webpack:///./node_modules/material-shadows/material-shadows.scss","webpack:///./src/assets/stylesheets/layout/_header.scss","webpack:///./src/assets/stylesheets/layout/_hero.scss","webpack:///./src/assets/stylesheets/layout/_footer.scss","webpack:///./src/assets/stylesheets/layout/_nav.scss","webpack:///./src/assets/stylesheets/layout/_search.scss","webpack:///./src/assets/stylesheets/layout/_sidebar.scss","webpack:///./src/assets/stylesheets/layout/_source.scss","webpack:///./src/assets/stylesheets/layout/_tabs.scss","webpack:///./src/assets/stylesheets/extensions/_admonition.scss","webpack:///./src/assets/stylesheets/extensions/_codehilite.scss","webpack:///./src/assets/stylesheets/extensions/_footnotes.scss","webpack:///./src/assets/stylesheets/extensions/_permalinks.scss","webpack:///./src/assets/stylesheets/extensions/pymdown/_arithmatex.scss","webpack:///./src/assets/stylesheets/extensions/pymdown/_critic.scss","webpack:///./src/assets/stylesheets/extensions/pymdown/_details.scss","webpack:///./src/assets/stylesheets/extensions/pymdown/_emoji.scss","webpack:///./src/assets/stylesheets/extensions/pymdown/_highlight.scss","webpack:///./src/assets/stylesheets/extensions/pymdown/_tabbed.scss","webpack:///./src/assets/stylesheets/extensions/pymdown/_tasklist.scss","webpack:///./node_modules/material-design-color/material-color.scss"],"names":[],"mappings":"AAAA,KC6BA,qBACE,sBAIF,kBAGE,MAIF,6BACE,CADF,0BACE,CADF,yBACE,CADF,qBACE,MAIF,QACE,IAIF,sBACE,iBACA,sBAIF,uCAIE,GAIF,aACE,qBACA,OAIF,aACE,SAIF,iBAEE,cACA,cACA,wBACA,KAIF,cACE,KAIF,UACE,KAIF,iBACE,OAIF,wBACE,iBACA,OAIF,kBAEE,mBACA,QAIF,QACE,UACA,kBACA,uBACA,SACA,OAIF,QACE,UACA,OCjGF,4CAGE,oDACA,sDACA,uDACA,4CACA,qDACA,uDACA,yDACA,iDAGA,wDACA,uDACA,kDACA,gEACA,gDAGA,+DACA,iDACA,+DACA,wCAGA,2CACA,cCxBA,aACE,aACA,cACA,cACA,kBACA,MCRJ,kCACE,kCACA,YAIF,gCAEE,oCACA,wEACA,cAIF,gC
AGE,6BACA,oDACA,aAWF,eACE,gBACA,iCACA,CADA,kBACA,oEAGA,YAIE,gBAIF,eACE,wCACA,gBACA,oBACA,gBACA,uBACA,gBAIF,mBACE,gBACA,kBACA,gBACA,uBACA,gBAIF,qBACE,gBACA,eACA,gBACA,uBACA,mBAIF,gBACE,gBAIF,cACE,gBACA,gBACA,uBACA,+BAIF,cAEE,wCACA,gBACA,iBACA,uBACA,gBAIF,wBACE,gBAIF,cACE,gEACA,eAIF,gCACE,sBACA,qCAGA,sBAEE,yCAIF,+BAEE,kDAKJ,6BAGE,cACA,cAGA,iDAPF,oBAQI,mBAKJ,uBACE,gBACA,sBACA,yCACA,oBACA,mCACA,CADA,0BACA,yHAIF,cAME,gBACA,6BACA,gBACA,oBAIF,kBACE,iBAIF,iBACE,aACA,gBACA,sBAGA,aACE,SACA,+BACA,cACA,kBACA,gBACA,mCACA,CADA,0BACA,kBACA,yCAGA,WACE,aACA,+CAIF,oDACE,qDAGA,0CACE,0CCfN,gBDyBA,kBACE,sBAGA,eACE,kBAMN,oBACE,wBACA,gBACA,gBACA,wBACA,sBACA,oBACA,+JAEE,kBAMJ,eACE,sBACA,qCACA,oBACA,mCACA,CADA,0BACA,kBAIF,oBACE,8DACA,YACA,mBAIF,WACE,iCAIF,qBAEE,qDAGA,sBACE,oBACA,wBAKJ,kBACE,wCACA,4DACA,kCAGA,mBACE,qBACA,6DACA,oBACA,gBAKJ,oBACE,+BAIF,kBAEE,UACA,mDAGA,mBACE,oBACA,qCAIF,2BACE,2CAGA,2BACE,qCAKJ,kBACE,mBACA,yDAGA,mBACE,oBACA,mGAIF,aAEE,2DAIF,eACE,qFAIF,yBAEE,6HAGA,mBACE,oBACA,gBAOR,wBACE,0BAGA,oBACE,oBACA,oDAKJ,cAGE,gCAIF,oBACE,eACA,cACA,iBACA,sCACA,oBACA,mEAEE,kBAEF,kCAKA,gBACE,+FAIF,eAEE,mHAGA,gBACE,mCAKJ,cACE,oBACA,iCACA,mBACA,mDACA,mCAIF,mBACE,mBACA,6DACA,mCAIF,iCACE,yCAGA,iCACE,uDACA,kDAIF,YACE,kCAMJ,iBACE,yBAKJ,kBACE,gBACA,kBACA,oBAIF,oBACE,mBACA,gBACA,0BAGA,aACE,WACA,SACA,gBACA,MEpbN,WACE,kBAKA,eAOA,4CACA,sCDyIE,KCvJJ,gBAkBI,uCDqIA,KCvJJ,cAuBI,OAKJ,iBACE,aACA,sBACA,WACA,gBACA,gBAGA,0CDqIE,yBC/HA,cACE,eAMJ,KArBF,aAsBI,KAKJ,aACE,cACA,UACA,SACA,UAIF,eACE,kBACA,iBACA,eAIF,YACE,sBACA,YACA,cAIA,cAPF,aAQI,WAKJ,WACE,iBAGA,YACE,YACA,kBACA,cAKJ,aACE,gBACA,mBACA,uBACA,YAQF,YACE,aAIF,cACE,MACA,UACA,QACA,SACA,mDACA,UACA,0DAEE,0CDgDA,4CCxCA,UACE,YACA,UACA,8CAEE,WAYR,cACE,WAGA,aACA,oBACA,iCACA,iBACA,4CACA,oBACA,6BACA,UACA,gBAGA,UACE,wBACA,UACA,2EAEE,OAUN,WACE,cC1LF,aACE,4CACA,qBAGA,iBACE,gBACA,iCACA,gBACA,cAIF,aAbF,YAcI,yBCXF,oBACE,mBACA,iCACA,gBACA,gCACA,oBACA,iEAEE,iCAKF,gCACE,4CACA,wCACA,2DAIF,+BAEE,2CACA,uCACA,eC3BN,iBACE,UACA,WACA,UACA,YACA,aACA,2CACA,oBACA,eACA,uBACA,cAGA,cAbF,YAcI,oBAIF,aACE,eACA,yBAIF,uCACE,iDAIF,+BAEE,aC/BJ,MACE,eACA,+DLyII,YK3IN,8BAMI,yCL0JA,YKhKJ,kCAWI,qBAIF,qBACE,kBACA,wCL+IA,mBKjJF,mBAMI,mBACA,6BAKF,aACE,aACA,WACA,gCAIF,eACE,qBAKJ,WACE,eACA,kBACA,UACA,+BAGA,UACE,mBACA,oBACA,mCAGA,oBACE,iCAKJ,yCACE,yBAIF,cACE,mBACA,cAIF,oBA9BF,YA+BI,aCvEN,gGCFE,eDKA,YACA,aACA,aACA,UACA,cACA,kBACA,oBACA,iCACA,gBACA,sCACA,YACA,oBACA,2BACA,UACA,6CAEE,sBAIF,aACE,WACA,gCAIF,uBACE,UACA,6EAEE,cAKJ,WAtCF,YAuCI,aEvCJ,uBACE,CADF,eACE,MACA,QACA,OACA,UACA,cACA,iCACA,4CACA,+DAIE,8CAGA,mBAIF,eACE,gBACA,kCAIF,gEAEI,+DAGA,cAMJ,WApCF,YAqCI,iBAKJ,YACE,gBACA,wBAGA,iBACE,UACA,aACA,cACA,eACA,yBACA,sCAME,oBACE,2DAKJ,UAEE,gCAIF,YACE,cACA,uEAGA,aAEE,aACA,cACA,kBACA,6CAKJ,YACE,qCRyEF,qCQlEE,YACE,2CRmFJ,+BQ3EE,YACE,yCRwDJ,qCQhDE,YACE,wBAMN,iBACE,WACA,wEAEE,6CAIF,UACE,8BACA,UACA,wEAEE,oBAEF,uDAGA,8BACE,8BAKJ,gBACE,oDAIF,YACE,uBAKJ,WACE,eACA,gBACA,mBACA,mEAGA,UACE,+BACA,UACA,wEAEE,oBAEF,6EAGA,6BACE,yFAIF,SACE,wBACA,UACA,wEAEE,uBAEF,gDAKJ,iBACE,WACA,YACA,wBAKJ,YACE,qCRrCA,uBQoCF,aAKI,cACA,kBACA,iBACA,kCAGA,iBACE,oBACA,yCRjDJ,uBQoCF,kBAmBI,kCAGA,mBACE,WC3NR,eACE,iCACA,eACA,4CACA,4BACA,iBAGA,eACE,0BACA,wEAEE,uBAEF,0CToKA,gBS1KF,iBAUI,qBACA,yCAIF,8BACE,UACA,iDAEE,oBAEF,kCAIF,oBACE,YClCN,gCACE,4CACA,cAGA,WALF,YAMI,wBAQF,aACE,cACA,sBAIF,YACE,mBACA,qBACA,yBACA,qCVwIA,qBU5IF,SAQI,wDAIF,UAEE,4BAIF,UACE,UACA,sCAGA,WACE,0CAGA,oBACE,0CVkIN,iDU7HE,YAII,6BAMN,WACE,UACA,iBACA,sCAGA,UACE,gBACA,0CAGA,oBACE,uBAOR,iBACE,YACA,8BACA,eACA,gBACA,mBACA,wBAIF,YACE,cACA,2BAIF,iBACE,QACA,OACA,iBACA,eACA,wCACA,iBACA,iBAKJ,oDACE,wBAGA,YACE,eACA,8BACA,cACA
,mCAIF,uCACE,iFAGA,gCAEE,sBAMN,UACE,kBACA,gBACA,0CACA,iBACA,qCVqBE,qBU1BJ,UASI,kCAIF,uCACE,mBAKJ,cACE,sBACA,qCVKE,kBUPJ,eAMI,0BAIF,oBACE,aACA,cACA,kBACA,iCAGA,eACE,6BAIF,gBACE,oBACA,kBACA,SClLN,eACE,gBACA,gBAGA,aACE,gBACA,gBACA,gBACA,uBACA,gCAGA,YACE,oCAGA,UACE,YACA,uFAOA,aAEE,aACA,cACA,4CAIF,iBACE,eAOR,QACE,UACA,gBACA,eAIF,eACE,0BAGA,oBACE,6BAIF,eACE,uCAGA,mBACE,eACA,wCAIF,gBACE,eAMN,aACE,kBACA,gBACA,uBACA,eACA,uBACA,wBACA,+BAIA,YACE,uCAGA,YACE,mCAKJ,uCACE,qCAIF,gCACE,qCAIF,aACE,yCAIF,+BAEE,iBAKJ,YACE,0CX2DA,QWlLJ,2CA4HI,2CAGA,iBAEE,MACA,QACA,OACA,UACA,aACA,sBACA,YACA,gEAOA,eAEE,gBACA,iCAIF,iBACE,cACA,yBACA,wCACA,gBACA,mBACA,mBACA,sDACA,eACA,+CAGA,iBACE,UACA,WACA,cACA,aACA,cACA,aACA,yDAGA,WACE,aACA,+CAKJ,eACE,4CACA,iEAEE,qCACF,CADE,gCACF,CADE,4BACF,mBACA,yEAGA,YACE,+CAKJ,iBACE,iCACA,4CACA,+DAGA,iBACE,UACA,WACA,cACA,aACA,cACA,iBACA,8EASJ,WACE,aACA,gCAKJ,MACE,gCAIF,SACE,6DACA,0CAGA,SACE,sDAIF,oBACE,gEAGA,mBACE,oBACA,sDAKJ,gCACE,uHAGA,+BAEE,gCAMN,iBACE,aACA,oBACA,8CAGA,iBACE,QACA,YACA,mBACA,cACA,iBACA,wDAGA,aACE,WACA,8CAYF,mBACE,mDASJ,eACE,6CAIF,eACE,6BACA,2DAGA,mBACE,qEAGA,oBACE,qBACA,mEAKJ,iBACE,6EAGA,kBACE,qBACA,2EAKJ,mBACE,qFAGA,oBACE,qBACA,mFAKJ,mBACE,6FAGA,oBACE,qBACA,yBAQV,YACE,2BACA,UACA,2EAEE,mCAIF,2BACE,iCAKJ,uBACE,UACA,4EAEE,+CAIF,kCACE,CADF,0BACE,2CX3MJ,8BWqNA,aACE,qBACA,6CAGA,YACE,uCAIF,YACE,8BAKJ,mBACE,oBACA,iBAIF,aACE,gBACA,iCACA,kDACA,sCXjQF,6CW4QE,uBACE,iDAIF,YACE,yCXlRJ,QWhKJ,0DAybI,+CAME,uBACE,+CAIF,YACE,yBAKJ,YACE,iCAIF,aACE,8CAIF,YACE,eAIF,WACE,aACA,2BACA,yBAGA,UACE,yBACA,mBAIF,oBACE,YACA,aACA,uBACA,2EAIF,uBACE,aCteR,iBACE,mBAGA,YACE,qCZmJA,WYxJJ,eAUI,sBAIF,SACE,UACA,0CZ0JA,oBY5JF,iBAMI,UACA,aACA,WACA,YACA,gBACA,4CACA,mBACA,wBACA,qDAEE,oBAEF,+BAGA,aACE,aACA,gEAIF,SACE,yCAEE,2CZ8HN,+DYxHA,mBAII,gEZ6EF,+DYjFF,mBASI,gEZwEF,+DYjFF,mBAcI,sCZwFJ,oBY1IF,cAwDI,MACA,OACA,QACA,SACA,mDACA,eACA,0DAEE,+BAKF,OACE,aACA,gEAIF,UACE,YACA,UACA,8CAEE,oBAQR,kCAEE,CAFF,0BAEE,0CZkEA,kBYpEF,cAMI,MACA,UACA,UACA,WACA,YACA,yBACA,UACA,iHAEE,8DAMF,MACE,wBACA,UACA,+GAEE,wEAMF,OACE,aACA,kCAKJ,UACE,aACA,0BACA,sCZQJ,kBYlDF,iBAgDI,YACA,cACA,gBACA,sDACA,6BAGA,UACE,gEZ3BF,6DYgCF,aAII,yCZfJ,6DYWA,aASI,mBAMN,iBACE,qCZ3BA,iBY0BF,mBAKI,oBAKJ,iBACE,UACA,0BACA,uBACA,6BAGA,yBACE,8CAIF,8BACE,CADF,sBACE,CALA,oCAIF,2BACE,CADF,sBACE,CALA,yCAIF,0BACE,CADF,sBACE,CALA,+BAIF,sBACE,8CAIF,uCAEE,CANA,oCAIF,uCAEE,CANA,yCAIF,uCAEE,CANA,kEAIF,uCAEE,8BAIF,YACE,0CZ1CF,kBYkBF,UA6BI,cACA,gBACA,sCZnEF,kBYoCF,UAoCI,cACA,oBACA,cACA,gBACA,qDACA,oBACA,8CAEE,6BAIF,oBACE,oCAIF,gCACE,8CAIF,uCACE,CALA,oCAIF,uCACE,CALA,yCAIF,uCACE,CALA,+BAIF,uCACE,yBAIF,qDACE,8DAIF,gCACE,mBACA,4CACA,8BACA,yFAGA,uCAEE,CALF,+EAGA,uCAEE,CALF,oFAGA,uCAEE,CALF,wJAGA,uCAEE,mBAOR,iBACE,UACA,aACA,cACA,eACA,qCAEE,wBAIF,UACE,gCAIF,SACE,WACA,0CAGA,WACE,aACA,8CAGA,oBACE,0CZjIN,+BYsHA,SAiBI,WACA,0CAGA,WACE,aACA,gDAIF,YACE,sCZpKN,+BYwIA,mBAkCI,+CAGA,YACE,+BAMN,SACE,YACA,sBACA,UACA,wEAEE,oBAEF,wCAGA,aACE,WACA,0CZ/KJ,6BYkKA,SAkBI,YACA,wCAGA,aACE,WACA,oHAKJ,kBAEE,UACA,uBACA,yHAGA,UACE,oBAOR,iBACE,UACA,WACA,gBACA,8BACA,0CZnNA,mBY8MF,UASI,SACA,sCZ1OF,mBYgOF,UAeI,UACA,yBACA,+DAGA,kGLpYJ,UKuYM,yBAMN,WACE,gBACA,4CACA,iEACA,mCAEA,CAFA,0BAEA,qCACA,CADA,gCACA,CADA,4BACA,mBACA,oEAGA,uBAXF,uBAYI,gEZ9RA,uBYkRJ,aAiBI,yCZ9QF,uBY6PF,aAsBI,sCZnRF,uBY6PF,YA2BI,mEAGA,eACE,2CAIF,WACE,aACA,iDAIF,oDACE,uDAGA,0CACE,oBAQV,gCACE,sBACA,yBAGA,eACE,wCACA,iBACA,mBACA,sDACA,wBACA,qCZ9TA,wBYwTF,mBAUI,mCAGA,oBACE,qBACA,0BAMN,QACE,UACA,gBACA,6DACA,yBAIF,4DACE,yBAIF,aACE,UACA,4BACA,wBACA,6DAGA,uDAEE,mIAGA,UACE,8DAKJ,mBACE,4BAKJ,iBACE,gBACA,cACA,qCZrXA,2BYkXF,mBAOI,sCAGA,oBACE,mBACA,gEAQF,eACE,gBACA,gBACA,g
BACA,yBAMN,iBACE,OACA,aACA,cACA,wCACA,mCAGA,OACE,aACA,uCAGA,oBACE,0CZ5YJ,wBY8XF,YAoBI,2BAKJ,aACE,gBACA,iBACA,gBACA,2BAMF,mBACE,mBACA,cACA,gBACA,wCACA,iBACA,gBACA,uBACA,4BACA,qBACA,0CZ3aA,0BYiaF,iBAcI,qBACA,gEZvdA,0BYwcJ,iBAoBI,qBACA,uBAOJ,eACE,kBACA,0BACA,aC1mBJ,uBACE,CADF,eACE,WACA,cACA,iBACA,gBACA,cAGA,YARF,YASI,2CbiKA,qBa1JA,cACE,MACA,cACA,UACA,cACA,YACA,4CACA,wBACA,yEAEE,gCAIF,cACE,aACA,oEAIF,sGNtBJ,8BMyBM,8EAGA,8BACE,8CAKJ,eACE,yBAMN,YACE,QACA,qCb+FA,uBajGF,aAMI,gDAGA,kBACE,0BAMN,eACE,eACA,gBACA,mCAEA,CAFA,0BAEA,qCACA,CADA,gCACA,CADA,4BACA,0Cb6FA,6CavFE,iBACE,MACA,QACA,SACA,OACA,SACA,8BACA,CADA,yBACA,CADA,qBACA,6CAKJ,WACE,aACA,kDAIF,oDACE,wDAGA,0CACE,2CClHR,GACE,QACE,MAGF,aACE,ED4GI,kCClHR,GACE,QACE,MAGF,aACE,2CAKJ,GACE,0BACE,UACA,KAGF,SACE,MAGF,wBACE,UACA,EAjBA,iCAKJ,GACE,0BACE,UACA,KAGF,SACE,MAGF,wBACE,UACA,aASJ,aACE,iBACA,gBACA,mBACA,mCAEA,CAFA,0BAEA,yBACA,kBAGA,UACE,kBAIF,oBACE,aACA,cACA,sBACA,sBAGA,gBACE,kBACA,gCAGA,kBACE,oBACA,yCAKJ,iBACE,kBACA,mDAGA,kBACE,oBACA,mBACA,qBACA,wBAMN,oBACE,8BACA,kBACA,gBACA,gBACA,uBACA,sBACA,mBAIF,QACE,UACA,gBACA,gBACA,iBACA,qBACA,YACA,wCAGA,sDACE,CADF,8CACE,kBAKJ,UACE,4BAGA,WACE,uCAIF,sDACE,CADF,8CACE,0BAIF,cACE,YACA,sCAIF,YACE,UCjIN,UACE,cACA,iCACA,4CACA,4BACA,iBAGA,eACE,0CfyKA,SelLJ,YAcI,eAIF,SAlBF,YAmBI,iBAIF,QACE,kBACA,UACA,mBACA,gBACA,gBACA,0BAGA,kBACE,oBACA,gBAKJ,oBACE,cACA,oBACA,mBACA,gBAKF,aACE,iBACA,gBACA,WACA,wEAEE,uBAIF,eACE,6CAIF,aAEE,UACA,4CAKA,qBACE,4CADF,qBACE,4CADF,qBACE,4CADF,qBACE,4CADF,sBACE,4CADF,sBACE,4CADF,sBACE,4CADF,sBACE,6CADF,sBACE,6CADF,sBACE,6CADF,sBACE,6CADF,sBACE,6CADF,sBACE,6CADF,sBACE,6CADF,sBACE,gCAMN,mBACE,+CAIA,yBACE,UACA,yDAEE,wCfyEJ,uEe/DA,YACE,2DAUE,aACE,gBACA,oBACA,wBACA,yEAGA,YACE,wEAKJ,YACE,gFAGA,aACE,UACA,8FAGA,YACE,kFAUN,eACE,6EAIF,YACE,QC7HV,6RAMI,8bAYA,igCAgDA,uRAiBE,gdAiCJ,+LAME,sVATK,sXASL,g5BAKE,kMAdG,8DAQP,iDACE,gPAGA,iBAZK,yJAcH,oJAdG,8MASL,+PAGA,8BAZK,mFAcH,iNAdG,kOASL,8PATK,mHAcH,uSAdG,6NASL,4RATK,wBAcH,mNAdG,oLAQP,6JACE,sHAGA,2KAEE,uNANJ,mCACE,mPATK,0EAcH,0CAdG,2KAYL,uNAZK,6SAcH,6QANJ,qBACE,gMATK,qRClIkB,wBACE,uDACD,+CACA,yLAMF,oBAwFxB,iNAjF6B,mCACH,iQAQA,wBAwF1B,sDAtF2B,8CAGD,+KAID,2NAKH,mCAwFtB,uPAnFkC,wBAyFlC,sDApF0B,8CACM,mHAEE,oBACK,mIAKH,mCAwFpC,mKApFiC,wBACC,qDACC,6CACH,qDACA,oBAwFhC,6DAtFiC,kCA0FjC,6EAtFsB,wBA2FpB,kDAEA,0CACA,6DAWF,yFAIA,oCAGE,qFAYA,8EAMA,8CAEA,+GAEA,mJAKA,qCACA,+JAaA,4EASA,4CAEA,2EAIE,6CjBnEF,0CiB2EA,UACE,0EAIE,0CACA,0CAIF,aACE,0EAMF,UACA,qFAIE,qBCnRR,2EACE,aASA,gCACE,aAGA,gCACE,aACA,gCACA,aAQF,gCAGE,6CAIF,aACE,8BACA,UACA,gCACA,UAKJ,gCACE,0CAIA,0CAKA,0CACE,UAGA,gCACE,6CAIF,aACE,0HAMA,aACA,0EAIF,UACE,gCAMN,aACE,gCACA,6CAKA,6CAGA,6CACA,aACA,gCACA,6CAEE,6CAKF,6CACE,aAIF,gCAEE,aACA,8BACA,0FAEA,0DAME,gCAOJ,6CAtCF,aAuCI,8BACA,aACA,gCC3HJ,aACE,gCAIA,aACA,0EAEE,aAKF,gCACE,aACA,gCAIF,6CACE,aAIF,gCACE,0FAKJ,6CAIE,+CAEE,8FAMJ,sCAGE,8DAYE,kCAEE,6BAEA,4GAIF,wCAEE,sDARA,4DAEA,8BAKA,gBACA,yDAVF,yBAEE,sBACA,qBACA,yEAKA,2BACA,iEC9DJ,gEpB0KA,yCoBrKF,CAII,8DAOF,gFCrBF,YAGE,cACA,wFAKF,yDAKA,+CJUmB,4EIAf,gBAIF,gBACE,0DAOF,eACA,0DAGA,kCAGA,oEC9CJ,4OAyBM,4BAIA,uDACE,kBAMF,mBACA,oBAKJ,iCAIE,qBACA,mDAEA,gCAGA,uCACE,8DAKA,iBAIF,0BAEE,aACA,CACA,yBACA,sBACA,wGAEA,gHAOE,uBACA,kECxEJ,+BACA,2BAIF,oBACE,uBACA,+BACA,oBAIE,iCACA,uCCZF,CADF,6BACE,UACA,iEAGA,yCACA,8BACA,uCACA,2EACA,4CACA,oGCZA,oBAEA,4EACA,2NAOE,mQAIE,4BACA,oDAMF,6FASF,+BACA,kFAOE,aACE,qCACA,kHAWF,kBACA,mBACA,yCACA,oBACA,mBACA,8CAEA,uCACA,uDAGA,aClEN,iDACE,oEACA,kBASA,mBACE,2BACA,4CAIA,kBAEE,0CACA,4BAGA,oBAEE,2DASJ,0EAKE,sEACA,wEAEA,0EAGA,UACE,qCACA,8DAKJ,2BCsWa,+FDpWX,wCAIF,eACE,uCACA,04I","file":"assets/stylesheets/main.62d34fff.min.css","
sourcesContent":["html{box-sizing:border-box}*,*::before,*::after{box-sizing:inherit}html{text-size-adjust:none}body{margin:0}hr{box-sizing:content-box;overflow:visible}a,button,label,input{-webkit-tap-highlight-color:transparent}a{color:inherit;text-decoration:none}small{font-size:80%}sub,sup{position:relative;font-size:80%;line-height:0;vertical-align:baseline}sub{bottom:-0.25em}sup{top:-0.5em}img{border-style:none}table{border-collapse:separate;border-spacing:0}td,th{font-weight:normal;vertical-align:top}button{margin:0;padding:0;font-size:inherit;background:transparent;border:0}input{border:0;outline:0}:root{--md-default-fg-color: hsla(0, 0%, 0%, 0.87);--md-default-fg-color--light: hsla(0, 0%, 0%, 0.54);--md-default-fg-color--lighter: hsla(0, 0%, 0%, 0.26);--md-default-fg-color--lightest: hsla(0, 0%, 0%, 0.07);--md-default-bg-color: hsla(0, 0%, 100%, 1);--md-default-bg-color--light: hsla(0, 0%, 100%, 0.7);--md-default-bg-color--lighter: hsla(0, 0%, 100%, 0.3);--md-default-bg-color--lightest: hsla(0, 0%, 100%, 0.12);--md-primary-fg-color: hsla(231deg, 48%, 48%, 1);--md-primary-fg-color--light: hsla(230deg, 44%, 64%, 1);--md-primary-fg-color--dark: hsla(232deg, 54%, 41%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light);--md-accent-fg-color: hsla(231deg, 99%, 66%, 1);--md-accent-fg-color--transparent: hsla(231deg, 99%, 66%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light);--md-code-bg-color: hsla(0, 0%, 96%, 1);--md-code-fg-color: hsla(200, 18%, 26%, 1)}.md-icon svg{display:block;width:1.2rem;height:1.2rem;margin:0 auto;fill:currentColor}body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}body,input{color:var(--md-default-fg-color);font-feature-settings:\"kern\",\"liga\";font-family:-apple-system,BlinkMacSystemFont,Helvetica,Arial,sans-serif}code,pre,kbd{color:var(--md-default-fg-color);font-feature-settings:\"kern\";font-family:SFMono-Regular,Consolas,Menlo,monospace}.md-typeset{font-size:.8rem;line-height:1.6;color-adjust:exact}.md-typeset p,.md-typeset ul,.md-typeset ol,.md-typeset blockquote{margin:1em 0}.md-typeset h1{margin:0 0 2rem;color:var(--md-default-fg-color--light);font-weight:300;font-size:1.5625rem;line-height:1.3;letter-spacing:-0.01em}.md-typeset h2{margin:2rem 0 .8rem;font-weight:300;font-size:1.25rem;line-height:1.4;letter-spacing:-0.01em}.md-typeset h3{margin:1.6rem 0 .8rem;font-weight:400;font-size:1rem;line-height:1.5;letter-spacing:-0.01em}.md-typeset h2+h3{margin-top:.8rem}.md-typeset h4{margin:.8rem 0;font-weight:700;font-size:.8rem;letter-spacing:-0.01em}.md-typeset h5,.md-typeset h6{margin:.8rem 0;color:var(--md-default-fg-color--light);font-weight:700;font-size:.64rem;letter-spacing:-0.01em}.md-typeset h5{text-transform:uppercase}.md-typeset hr{margin:1.5em 0;border-bottom:.05rem dotted var(--md-default-fg-color--lighter)}.md-typeset a{color:var(--md-primary-fg-color);word-break:break-word}.md-typeset a,.md-typeset a::before{transition:color 125ms}.md-typeset a:focus,.md-typeset a:hover{color:var(--md-accent-fg-color)}.md-typeset code,.md-typeset pre,.md-typeset kbd{color:var(--md-code-fg-color);direction:ltr}@media print{.md-typeset code,.md-typeset pre,.md-typeset kbd{white-space:pre-wrap}}.md-typeset code{padding:0 .2941176471em;font-size:.85em;word-break:break-word;background-color:var(--md-code-bg-color);border-radius:.1rem;box-decoration-break:clone}.md-typeset h1 code,.md-typeset h2 code,.md-typeset 
h3 code,.md-typeset h4 code,.md-typeset h5 code,.md-typeset h6 code{margin:initial;padding:initial;background-color:transparent;box-shadow:none}.md-typeset a>code{color:currentColor}.md-typeset pre{position:relative;margin:1em 0;line-height:1.4}.md-typeset pre>code{display:block;margin:0;padding:.525rem 1.1764705882em;overflow:auto;word-break:normal;box-shadow:none;box-decoration-break:slice;touch-action:auto}.md-typeset pre>code::-webkit-scrollbar{width:.2rem;height:.2rem}.md-typeset pre>code::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-typeset pre>code::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}@media screen and (max-width: 44.9375em){.md-typeset>pre{margin:1em -0.8rem}.md-typeset>pre code{border-radius:0}}.md-typeset kbd{display:inline-block;padding:0 .6666666667em;font-size:.75em;line-height:1.5;vertical-align:text-top;word-break:break-word;border-radius:.1rem;box-shadow:0 .1rem 0 .05rem var(--md-default-fg-color--lighter),0 .1rem 0 var(--md-default-fg-color--lighter),inset 0 -0.1rem .2rem var(--md-default-bg-color)}.md-typeset mark{padding:0 .25em;word-break:break-word;background-color:rgba(255,235,59,.5);border-radius:.1rem;box-decoration-break:clone}.md-typeset abbr{text-decoration:none;border-bottom:.05rem dotted var(--md-default-fg-color--light);cursor:help}.md-typeset small{opacity:.75}.md-typeset sup,.md-typeset sub{margin-left:.078125em}[dir=rtl] .md-typeset sup,[dir=rtl] .md-typeset sub{margin-right:.078125em;margin-left:initial}.md-typeset blockquote{padding-left:.6rem;color:var(--md-default-fg-color--light);border-left:.2rem solid var(--md-default-fg-color--lighter)}[dir=rtl] .md-typeset blockquote{padding-right:.6rem;padding-left:initial;border-right:.2rem solid var(--md-default-fg-color--lighter);border-left:initial}.md-typeset ul{list-style-type:disc}.md-typeset ul,.md-typeset ol{margin-left:.625em;padding:0}[dir=rtl] .md-typeset ul,[dir=rtl] .md-typeset ol{margin-right:.625em;margin-left:initial}.md-typeset ul ol,.md-typeset ol ol{list-style-type:lower-alpha}.md-typeset ul ol ol,.md-typeset ol ol ol{list-style-type:lower-roman}.md-typeset ul li,.md-typeset ol li{margin-bottom:.5em;margin-left:1.25em}[dir=rtl] .md-typeset ul li,[dir=rtl] .md-typeset ol li{margin-right:1.25em;margin-left:initial}.md-typeset ul li p,.md-typeset ul li blockquote,.md-typeset ol li p,.md-typeset ol li blockquote{margin:.5em 0}.md-typeset ul li:last-child,.md-typeset ol li:last-child{margin-bottom:0}.md-typeset ul li ul,.md-typeset ul li ol,.md-typeset ol li ul,.md-typeset ol li ol{margin:.5em 0 .5em .625em}[dir=rtl] .md-typeset ul li ul,[dir=rtl] .md-typeset ul li ol,[dir=rtl] .md-typeset ol li ul,[dir=rtl] .md-typeset ol li ol{margin-right:.625em;margin-left:initial}.md-typeset dd{margin:1em 0 1em 1.875em}[dir=rtl] .md-typeset dd{margin-right:1.875em;margin-left:initial}.md-typeset iframe,.md-typeset img,.md-typeset svg{max-width:100%}.md-typeset table:not([class]){display:inline-block;max-width:100%;overflow:auto;font-size:.64rem;background:var(--md-default-bg-color);border-radius:.1rem;box-shadow:0 .2rem .5rem rgba(0,0,0,.05),0 0 .05rem rgba(0,0,0,.1);touch-action:auto}.md-typeset table:not([class])+*{margin-top:1.5em}.md-typeset table:not([class]) th:not([align]),.md-typeset table:not([class]) td:not([align]){text-align:left}[dir=rtl] .md-typeset table:not([class]) th:not([align]),[dir=rtl] .md-typeset table:not([class]) td:not([align]){text-align:right}.md-typeset table:not([class]) th{min-width:5rem;padding:.6rem 
.8rem;color:var(--md-default-bg-color);vertical-align:top;background-color:var(--md-default-fg-color--light)}.md-typeset table:not([class]) td{padding:.6rem .8rem;vertical-align:top;border-top:.05rem solid var(--md-default-fg-color--lightest)}.md-typeset table:not([class]) tr{transition:background-color 125ms}.md-typeset table:not([class]) tr:hover{background-color:rgba(0,0,0,.035);box-shadow:0 .05rem 0 var(--md-default-bg-color) inset}.md-typeset table:not([class]) tr:first-child td{border-top:0}.md-typeset table:not([class]) a{word-break:normal}.md-typeset__scrollwrap{margin:1em -0.8rem;overflow-x:auto;touch-action:auto}.md-typeset__table{display:inline-block;margin-bottom:.5em;padding:0 .8rem}.md-typeset__table table{display:table;width:100%;margin:0;overflow:hidden}html{height:100%;overflow-x:hidden;font-size:125%;background-color:var(--md-default-bg-color)}@media screen and (min-width: 100em){html{font-size:137.5%}}@media screen and (min-width: 125em){html{font-size:150%}}body{position:relative;display:flex;flex-direction:column;width:100%;min-height:100%;font-size:.5rem}@media screen and (max-width: 59.9375em){body[data-md-state=lock]{position:fixed}}@media print{body{display:block}}hr{display:block;height:.05rem;padding:0;border:0}.md-grid{max-width:61rem;margin-right:auto;margin-left:auto}.md-container{display:flex;flex-direction:column;flex-grow:1}@media print{.md-container{display:block}}.md-main{flex-grow:1}.md-main__inner{display:flex;height:100%;margin-top:1.5rem}.md-ellipsis{display:block;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.md-toggle{display:none}.md-overlay{position:fixed;top:0;z-index:3;width:0;height:0;background-color:var(--md-default-fg-color--light);opacity:0;transition:width 0ms 250ms,height 0ms 250ms,opacity 250ms}@media screen and (max-width: 76.1875em){[data-md-toggle=drawer]:checked~.md-overlay{width:100%;height:100%;opacity:1;transition:width 0ms,height 0ms,opacity 250ms}}.md-skip{position:fixed;z-index:-1;margin:.5rem;padding:.3rem .5rem;color:var(--md-default-bg-color);font-size:.64rem;background-color:var(--md-default-fg-color);border-radius:.1rem;transform:translateY(0.4rem);opacity:0}.md-skip:focus{z-index:10;transform:translateY(0);opacity:1;transition:transform 250ms cubic-bezier(0.4, 0, 0.2, 1),opacity 175ms 75ms}@page{margin:25mm}.md-announce{overflow:auto;background-color:var(--md-default-fg-color)}.md-announce__inner{margin:.6rem auto;padding:0 .8rem;color:var(--md-default-bg-color);font-size:.7rem}@media print{.md-announce{display:none}}.md-typeset .md-button{display:inline-block;padding:.625em 2em;color:var(--md-primary-fg-color);font-weight:700;border:.1rem solid currentColor;border-radius:.1rem;transition:color 125ms,background-color 125ms,border-color 125ms}.md-typeset .md-button--primary{color:var(--md-primary-bg-color);background-color:var(--md-primary-fg-color);border-color:var(--md-primary-fg-color)}.md-typeset .md-button:focus,.md-typeset .md-button:hover{color:var(--md-accent-bg-color);background-color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color)}.md-clipboard{position:absolute;top:.4rem;right:.5em;z-index:1;width:1.5em;height:1.5em;color:var(--md-default-fg-color--lightest);border-radius:.1rem;cursor:pointer;transition:color 125ms}@media print{.md-clipboard{display:none}}.md-clipboard svg{width:1.125em;height:1.125em}pre:hover .md-clipboard{color:var(--md-default-fg-color--light)}pre .md-clipboard:focus,pre .md-clipboard:hover{color:var(--md-accent-fg-color)}.md-content{flex:1;max-width:100%}@media 
screen and (min-width: 60em)and (max-width: 76.1875em){.md-content{max-width:calc(100% - 12.1rem)}}@media screen and (min-width: 76.25em){.md-content{max-width:calc(100% - 12.1rem * 2)}}.md-content__inner{margin:0 .8rem 1.2rem;padding-top:.6rem}@media screen and (min-width: 76.25em){.md-content__inner{margin-right:1.2rem;margin-left:1.2rem}}.md-content__inner::before{display:block;height:.4rem;content:\"\"}.md-content__inner>:last-child{margin-bottom:0}.md-content__button{float:right;margin:.4rem 0;margin-left:.4rem;padding:0}[dir=rtl] .md-content__button{float:left;margin-right:.4rem;margin-left:initial}[dir=rtl] .md-content__button svg{transform:scaleX(-1)}.md-typeset .md-content__button{color:var(--md-default-fg-color--lighter)}.md-content__button svg{display:inline;vertical-align:top}@media print{.md-content__button{display:none}}.md-dialog{box-shadow:0 2px 2px 0 rgba(0,0,0,.14),0 1px 5px 0 rgba(0,0,0,.12),0 3px 1px -2px rgba(0,0,0,.2);position:fixed;right:.8rem;bottom:.8rem;left:initial;z-index:2;display:block;min-width:11.1rem;padding:.4rem .6rem;color:var(--md-default-bg-color);font-size:.7rem;background:var(--md-default-fg-color);border:none;border-radius:.1rem;transform:translateY(100%);opacity:0;transition:transform 0ms 400ms,opacity 400ms}[dir=rtl] .md-dialog{right:initial;left:.8rem}.md-dialog[data-md-state=open]{transform:translateY(0);opacity:1;transition:transform 400ms cubic-bezier(0.075, 0.85, 0.175, 1),opacity 400ms}@media print{.md-dialog{display:none}}.md-header{position:sticky;top:0;right:0;left:0;z-index:2;height:2.4rem;color:var(--md-primary-bg-color);background-color:var(--md-primary-fg-color);box-shadow:0 0 .2rem rgba(0,0,0,0),0 .2rem .4rem rgba(0,0,0,0);transition:color 250ms,background-color 250ms}.no-js .md-header{box-shadow:none;transition:none}.md-header[data-md-state=shadow]{box-shadow:0 0 .2rem rgba(0,0,0,.1),0 .2rem .4rem rgba(0,0,0,.2);transition:color 250ms,background-color 250ms,box-shadow 250ms}@media print{.md-header{display:none}}.md-header-nav{display:flex;padding:0 .2rem}.md-header-nav__button{position:relative;z-index:1;margin:.2rem;padding:.4rem;cursor:pointer;transition:opacity 250ms}[dir=rtl] .md-header-nav__button svg{transform:scaleX(-1)}.md-header-nav__button:focus,.md-header-nav__button:hover{opacity:.7}.md-header-nav__button.md-logo{margin:.2rem;padding:.4rem}.md-header-nav__button.md-logo img,.md-header-nav__button.md-logo svg{display:block;width:1.2rem;height:1.2rem;fill:currentColor}.no-js .md-header-nav__button[for=__search]{display:none}@media screen and (min-width: 60em){.md-header-nav__button[for=__search]{display:none}}@media screen and (max-width: 76.1875em){.md-header-nav__button.md-logo{display:none}}@media screen and (min-width: 76.25em){.md-header-nav__button[for=__drawer]{display:none}}.md-header-nav__topic{position:absolute;width:100%;transition:transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 150ms}.md-header-nav__topic+.md-header-nav__topic{z-index:-1;transform:translateX(1.25rem);opacity:0;transition:transform 400ms cubic-bezier(1, 0.7, 0.1, 0.1),opacity 150ms;pointer-events:none}[dir=rtl] .md-header-nav__topic+.md-header-nav__topic{transform:translateX(-1.25rem)}.no-js .md-header-nav__topic{position:initial}.no-js .md-header-nav__topic+.md-header-nav__topic{display:none}.md-header-nav__title{flex-grow:1;padding:0 1rem;font-size:.9rem;line-height:2.4rem}.md-header-nav__title[data-md-state=active] .md-header-nav__topic{z-index:-1;transform:translateX(-1.25rem);opacity:0;transition:transform 400ms cubic-bezier(1, 0.7, 
0.1, 0.1),opacity 150ms;pointer-events:none}[dir=rtl] .md-header-nav__title[data-md-state=active] .md-header-nav__topic{transform:translateX(1.25rem)}.md-header-nav__title[data-md-state=active] .md-header-nav__topic+.md-header-nav__topic{z-index:0;transform:translateX(0);opacity:1;transition:transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 150ms;pointer-events:initial}.md-header-nav__title>.md-header-nav__ellipsis{position:relative;width:100%;height:100%}.md-header-nav__source{display:none}@media screen and (min-width: 60em){.md-header-nav__source{display:block;width:11.7rem;max-width:11.7rem;margin-left:1rem}[dir=rtl] .md-header-nav__source{margin-right:1rem;margin-left:initial}}@media screen and (min-width: 76.25em){.md-header-nav__source{margin-left:1.4rem}[dir=rtl] .md-header-nav__source{margin-right:1.4rem}}.md-hero{overflow:hidden;color:var(--md-primary-bg-color);font-size:1rem;background-color:var(--md-primary-fg-color);transition:background 250ms}.md-hero__inner{margin-top:1rem;padding:.8rem .8rem .4rem;transition:transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 250ms;transition-delay:100ms}@media screen and (max-width: 76.1875em){.md-hero__inner{margin-top:2.4rem;margin-bottom:1.2rem}}[data-md-state=hidden] .md-hero__inner{transform:translateY(0.625rem);opacity:0;transition:transform 0ms 400ms,opacity 100ms 0ms;pointer-events:none}.md-hero--expand .md-hero__inner{margin-bottom:1.2rem}.md-footer{color:var(--md-default-bg-color);background-color:var(--md-default-fg-color)}@media print{.md-footer{display:none}}.md-footer-nav__inner{padding:.2rem;overflow:auto}.md-footer-nav__link{display:flex;padding-top:1.4rem;padding-bottom:.4rem;transition:opacity 250ms}@media screen and (min-width: 45em){.md-footer-nav__link{width:50%}}.md-footer-nav__link:focus,.md-footer-nav__link:hover{opacity:.7}.md-footer-nav__link--prev{float:left;width:25%}[dir=rtl] .md-footer-nav__link--prev{float:right}[dir=rtl] .md-footer-nav__link--prev svg{transform:scaleX(-1)}@media screen and (max-width: 44.9375em){.md-footer-nav__link--prev .md-footer-nav__title{display:none}}.md-footer-nav__link--next{float:right;width:75%;text-align:right}[dir=rtl] .md-footer-nav__link--next{float:left;text-align:left}[dir=rtl] .md-footer-nav__link--next svg{transform:scaleX(-1)}.md-footer-nav__title{position:relative;flex-grow:1;max-width:calc(100% - 2.4rem);padding:0 1rem;font-size:.9rem;line-height:2.4rem}.md-footer-nav__button{margin:.2rem;padding:.4rem}.md-footer-nav__direction{position:absolute;right:0;left:0;margin-top:-1rem;padding:0 1rem;color:var(--md-default-bg-color--light);font-size:.64rem}.md-footer-meta{background-color:var(--md-default-fg-color--lighter)}.md-footer-meta__inner{display:flex;flex-wrap:wrap;justify-content:space-between;padding:.2rem}html .md-footer-meta.md-typeset a{color:var(--md-default-bg-color--light)}html .md-footer-meta.md-typeset a:focus,html .md-footer-meta.md-typeset a:hover{color:var(--md-default-bg-color)}.md-footer-copyright{width:100%;margin:auto .6rem;padding:.4rem 0;color:var(--md-default-bg-color--lighter);font-size:.64rem}@media screen and (min-width: 45em){.md-footer-copyright{width:auto}}.md-footer-copyright__highlight{color:var(--md-default-bg-color--light)}.md-footer-social{margin:0 .4rem;padding:.2rem 0 .6rem}@media screen and (min-width: 45em){.md-footer-social{padding:.6rem 0}}.md-footer-social__link{display:inline-block;width:1.6rem;height:1.6rem;text-align:center}.md-footer-social__link::before{line-height:1.9}.md-footer-social__link 
svg{max-height:.8rem;vertical-align:-25%;fill:currentColor}.md-nav{font-size:.7rem;line-height:1.3}.md-nav__title{display:block;padding:0 .6rem;overflow:hidden;font-weight:700;text-overflow:ellipsis}.md-nav__title .md-nav__button{display:none}.md-nav__title .md-nav__button img{width:100%;height:auto}.md-nav__title .md-nav__button.md-logo img,.md-nav__title .md-nav__button.md-logo svg{display:block;width:2.4rem;height:2.4rem}.md-nav__title .md-nav__button.md-logo svg{fill:currentColor}.md-nav__list{margin:0;padding:0;list-style:none}.md-nav__item{padding:0 .6rem}.md-nav__item:last-child{padding-bottom:.6rem}.md-nav__item .md-nav__item{padding-right:0}[dir=rtl] .md-nav__item .md-nav__item{padding-right:.6rem;padding-left:0}.md-nav__item .md-nav__item:last-child{padding-bottom:0}.md-nav__link{display:block;margin-top:.625em;overflow:hidden;text-overflow:ellipsis;cursor:pointer;transition:color 125ms;scroll-snap-align:start}html .md-nav__link[for=__toc]{display:none}html .md-nav__link[for=__toc]~.md-nav{display:none}.md-nav__link[data-md-state=blur]{color:var(--md-default-fg-color--light)}.md-nav__item .md-nav__link--active{color:var(--md-primary-fg-color)}.md-nav__item--nested>.md-nav__link{color:inherit}.md-nav__link:focus,.md-nav__link:hover{color:var(--md-accent-fg-color)}.md-nav__source{display:none}@media screen and (max-width: 76.1875em){.md-nav{background-color:var(--md-default-bg-color)}.md-nav--primary,.md-nav--primary .md-nav{position:absolute;top:0;right:0;left:0;z-index:1;display:flex;flex-direction:column;height:100%}.md-nav--primary .md-nav__title,.md-nav--primary .md-nav__item{font-size:.8rem;line-height:1.5}.md-nav--primary .md-nav__title{position:relative;height:5.6rem;padding:3rem .8rem .2rem;color:var(--md-default-fg-color--light);font-weight:400;line-height:2.4rem;white-space:nowrap;background-color:var(--md-default-fg-color--lightest);cursor:pointer}.md-nav--primary .md-nav__title .md-nav__icon{position:absolute;top:.4rem;left:.4rem;display:block;width:1.2rem;height:1.2rem;margin:.2rem}[dir=rtl] .md-nav--primary .md-nav__title .md-nav__icon{right:.4rem;left:initial}.md-nav--primary .md-nav__title~.md-nav__list{overflow-y:auto;background-color:var(--md-default-bg-color);box-shadow:inset 0 .05rem 0 var(--md-default-fg-color--lightest);scroll-snap-type:y mandatory;touch-action:pan-y}.md-nav--primary .md-nav__title~.md-nav__list>.md-nav__item:first-child{border-top:0}.md-nav--primary .md-nav__title[for=__drawer]{position:relative;color:var(--md-primary-bg-color);background-color:var(--md-primary-fg-color)}.md-nav--primary .md-nav__title[for=__drawer] .md-nav__button{position:absolute;top:.2rem;left:.2rem;display:block;margin:.2rem;padding:.4rem;font-size:2.4rem}html [dir=rtl] .md-nav--primary .md-nav__title[for=__drawer] .md-nav__button{right:.2rem;left:initial}.md-nav--primary .md-nav__list{flex:1}.md-nav--primary .md-nav__item{padding:0;border-top:.05rem solid var(--md-default-fg-color--lightest)}[dir=rtl] .md-nav--primary .md-nav__item{padding:0}.md-nav--primary .md-nav__item--nested>.md-nav__link{padding-right:2.4rem}[dir=rtl] .md-nav--primary .md-nav__item--nested>.md-nav__link{padding-right:.8rem;padding-left:2.4rem}.md-nav--primary .md-nav__item--active>.md-nav__link{color:var(--md-primary-fg-color)}.md-nav--primary .md-nav__item--active>.md-nav__link:focus,.md-nav--primary .md-nav__item--active>.md-nav__link:hover{color:var(--md-accent-fg-color)}.md-nav--primary .md-nav__link{position:relative;margin-top:0;padding:.6rem .8rem}.md-nav--primary .md-nav__link 
.md-nav__icon{position:absolute;top:50%;right:.6rem;margin-top:-0.6rem;color:inherit;font-size:1.2rem}[dir=rtl] .md-nav--primary .md-nav__link .md-nav__icon{right:initial;left:.6rem}[dir=rtl] .md-nav--primary .md-nav__icon svg{transform:scale(-1)}.md-nav--primary .md-nav--secondary .md-nav__link{position:static}.md-nav--primary .md-nav--secondary .md-nav{position:static;background-color:transparent}.md-nav--primary .md-nav--secondary .md-nav .md-nav__link{padding-left:1.4rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav__link{padding-right:1.4rem;padding-left:initial}.md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav__link{padding-left:2rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav__link{padding-right:2rem;padding-left:initial}.md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav__link{padding-left:2.6rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav__link{padding-right:2.6rem;padding-left:initial}.md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav .md-nav__link{padding-left:3.2rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav .md-nav__link{padding-right:3.2rem;padding-left:initial}.md-nav__toggle~.md-nav{display:flex;transform:translateX(100%);opacity:0;transition:transform 250ms cubic-bezier(0.8, 0, 0.6, 1),opacity 125ms 50ms}[dir=rtl] .md-nav__toggle~.md-nav{transform:translateX(-100%)}.md-nav__toggle:checked~.md-nav{transform:translateX(0);opacity:1;transition:transform 250ms cubic-bezier(0.4, 0, 0.2, 1),opacity 125ms 125ms}.md-nav__toggle:checked~.md-nav>.md-nav__list{backface-visibility:hidden}}@media screen and (max-width: 59.9375em){html .md-nav__link[for=__toc]{display:block;padding-right:2.4rem}html .md-nav__link[for=__toc]+.md-nav__link{display:none}html .md-nav__link[for=__toc]~.md-nav{display:flex}html [dir=rtl] .md-nav__link{padding-right:.8rem;padding-left:2.4rem}.md-nav__source{display:block;padding:0 .2rem;color:var(--md-primary-bg-color);background-color:var(--md-primary-fg-color--dark)}}@media screen and (min-width: 60em){.md-nav--secondary .md-nav__title[for=__toc]{scroll-snap-align:start}.md-nav--secondary .md-nav__title .md-nav__icon{display:none}}@media screen and (min-width: 76.25em){.md-nav{transition:max-height 250ms cubic-bezier(0.86, 0, 0.07, 1)}.md-nav--primary .md-nav__title[for=__drawer]{scroll-snap-align:start}.md-nav--primary .md-nav__title .md-nav__icon{display:none}.md-nav__toggle~.md-nav{display:none}.md-nav__toggle:checked~.md-nav{display:block}.md-nav__item--nested>.md-nav>.md-nav__title{display:none}.md-nav__icon{float:right;height:.9rem;transition:transform 250ms}[dir=rtl] .md-nav__icon{float:left;transform:rotate(180deg)}.md-nav__icon svg{display:inline-block;width:.9rem;height:.9rem;vertical-align:-0.1rem}.md-nav__item--nested .md-nav__toggle:checked~.md-nav__link .md-nav__icon{transform:rotate(90deg)}}.md-search{position:relative}.no-js .md-search{display:none}@media screen and (min-width: 60em){.md-search{padding:.2rem 0}}.md-search__overlay{z-index:1;opacity:0}@media screen and (max-width: 59.9375em){.md-search__overlay{position:absolute;top:.2rem;left:-2.2rem;width:2rem;height:2rem;overflow:hidden;background-color:var(--md-default-bg-color);border-radius:1rem;transform-origin:center;transition:transform 300ms 100ms,opacity 200ms 200ms;pointer-events:none}[dir=rtl] .md-search__overlay{right:-2.2rem;left:initial}[data-md-toggle=search]:checked~.md-header 
.md-search__overlay{opacity:1;transition:transform 400ms,opacity 100ms}}@media screen and (max-width: 29.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(45)}}@media screen and (min-width: 30em)and (max-width: 44.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(60)}}@media screen and (min-width: 45em)and (max-width: 59.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(75)}}@media screen and (min-width: 60em){.md-search__overlay{position:fixed;top:0;left:0;width:0;height:0;background-color:var(--md-default-fg-color--light);cursor:pointer;transition:width 0ms 250ms,height 0ms 250ms,opacity 250ms}[dir=rtl] .md-search__overlay{right:0;left:initial}[data-md-toggle=search]:checked~.md-header .md-search__overlay{width:100%;height:100%;opacity:1;transition:width 0ms,height 0ms,opacity 250ms}}.md-search__inner{backface-visibility:hidden}@media screen and (max-width: 59.9375em){.md-search__inner{position:fixed;top:0;left:100%;z-index:2;width:100%;height:100%;transform:translateX(5%);opacity:0;transition:right 0ms 300ms,left 0ms 300ms,transform 150ms 150ms cubic-bezier(0.4, 0, 0.2, 1),opacity 150ms 150ms}[data-md-toggle=search]:checked~.md-header .md-search__inner{left:0;transform:translateX(0);opacity:1;transition:right 0ms 0ms,left 0ms 0ms,transform 150ms 150ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 150ms 150ms}[dir=rtl] [data-md-toggle=search]:checked~.md-header .md-search__inner{right:0;left:initial}html [dir=rtl] .md-search__inner{right:100%;left:initial;transform:translateX(-5%)}}@media screen and (min-width: 60em){.md-search__inner{position:relative;float:right;width:11.7rem;padding:.1rem 0;transition:width 250ms cubic-bezier(0.1, 0.7, 0.1, 1)}[dir=rtl] .md-search__inner{float:left}}@media screen and (min-width: 60em)and (max-width: 76.1875em){[data-md-toggle=search]:checked~.md-header .md-search__inner{width:23.4rem}}@media screen and (min-width: 76.25em){[data-md-toggle=search]:checked~.md-header .md-search__inner{width:34.4rem}}.md-search__form{position:relative}@media screen and (min-width: 60em){.md-search__form{border-radius:.1rem}}.md-search__input{position:relative;z-index:2;padding:0 2.2rem 0 3.6rem;text-overflow:ellipsis}[dir=rtl] .md-search__input{padding:0 3.6rem 0 2.2rem}.md-search__input::placeholder{transition:color 250ms}.md-search__input~.md-search__icon,.md-search__input::placeholder{color:var(--md-default-fg-color--light)}.md-search__input::-ms-clear{display:none}@media screen and (max-width: 59.9375em){.md-search__input{width:100%;height:2.4rem;font-size:.9rem}}@media screen and (min-width: 60em){.md-search__input{width:100%;height:1.8rem;padding-left:2.2rem;color:inherit;font-size:.8rem;background-color:var(--md-default-fg-color--lighter);border-radius:.1rem;transition:color 250ms,background-color 250ms}[dir=rtl] .md-search__input{padding-right:2.2rem}.md-search__input+.md-search__icon{color:var(--md-primary-bg-color)}.md-search__input::placeholder{color:var(--md-primary-bg-color--light)}.md-search__input:hover{background-color:var(--md-default-bg-color--lightest)}[data-md-toggle=search]:checked~.md-header .md-search__input{color:var(--md-default-fg-color);text-overflow:clip;background-color:var(--md-default-bg-color);border-radius:.1rem .1rem 0 0}[data-md-toggle=search]:checked~.md-header .md-search__input+.md-search__icon,[data-md-toggle=search]:checked~.md-header 
.md-search__input::placeholder{color:var(--md-default-fg-color--light)}}.md-search__icon{position:absolute;z-index:2;width:1.2rem;height:1.2rem;cursor:pointer;transition:color 250ms,opacity 250ms}.md-search__icon:hover{opacity:.7}.md-search__icon[for=__search]{top:.3rem;left:.5rem}[dir=rtl] .md-search__icon[for=__search]{right:.5rem;left:initial}[dir=rtl] .md-search__icon[for=__search] svg{transform:scaleX(-1)}@media screen and (max-width: 59.9375em){.md-search__icon[for=__search]{top:.6rem;left:.8rem}[dir=rtl] .md-search__icon[for=__search]{right:.8rem;left:initial}.md-search__icon[for=__search] svg:first-child{display:none}}@media screen and (min-width: 60em){.md-search__icon[for=__search]{pointer-events:none}.md-search__icon[for=__search] svg:last-child{display:none}}.md-search__icon[type=reset]{top:.3rem;right:.5rem;transform:scale(0.75);opacity:0;transition:transform 150ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 150ms;pointer-events:none}[dir=rtl] .md-search__icon[type=reset]{right:initial;left:.5rem}@media screen and (max-width: 59.9375em){.md-search__icon[type=reset]{top:.6rem;right:.8rem}[dir=rtl] .md-search__icon[type=reset]{right:initial;left:.8rem}}[data-md-toggle=search]:checked~.md-header .md-search__input:not(:placeholder-shown)~.md-search__icon[type=reset]{transform:scale(1);opacity:1;pointer-events:initial}[data-md-toggle=search]:checked~.md-header .md-search__input:not(:placeholder-shown)~.md-search__icon[type=reset]:hover{opacity:.7}.md-search__output{position:absolute;z-index:1;width:100%;overflow:hidden;border-radius:0 0 .1rem .1rem}@media screen and (max-width: 59.9375em){.md-search__output{top:2.4rem;bottom:0}}@media screen and (min-width: 60em){.md-search__output{top:1.9rem;opacity:0;transition:opacity 400ms}[data-md-toggle=search]:checked~.md-header .md-search__output{box-shadow:0 6px 10px 0 rgba(0,0,0,.14),0 1px 18px 0 rgba(0,0,0,.12),0 3px 5px -1px rgba(0,0,0,.4);opacity:1}}.md-search__scrollwrap{height:100%;overflow-y:auto;background-color:var(--md-default-bg-color);box-shadow:inset 0 .05rem 0 var(--md-default-fg-color--lightest);backface-visibility:hidden;scroll-snap-type:y mandatory;touch-action:pan-y}@media(max-resolution: 1dppx){.md-search__scrollwrap{transform:translateZ(0)}}@media screen and (min-width: 60em)and (max-width: 76.1875em){.md-search__scrollwrap{width:23.4rem}}@media screen and (min-width: 76.25em){.md-search__scrollwrap{width:34.4rem}}@media screen and (min-width: 60em){.md-search__scrollwrap{max-height:0}[data-md-toggle=search]:checked~.md-header .md-search__scrollwrap{max-height:75vh}.md-search__scrollwrap::-webkit-scrollbar{width:.2rem;height:.2rem}.md-search__scrollwrap::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-search__scrollwrap::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}}.md-search-result{color:var(--md-default-fg-color);word-break:break-word}.md-search-result__meta{padding:0 .8rem;color:var(--md-default-fg-color--light);font-size:.64rem;line-height:1.8rem;background-color:var(--md-default-fg-color--lightest);scroll-snap-align:start}@media screen and (min-width: 60em){.md-search-result__meta{padding-left:2.2rem}[dir=rtl] .md-search-result__meta{padding-right:2.2rem;padding-left:initial}}.md-search-result__list{margin:0;padding:0;list-style:none;border-top:.05rem solid var(--md-default-fg-color--lightest)}.md-search-result__item{box-shadow:0 -0.05rem 0 var(--md-default-fg-color--lightest)}.md-search-result__link{display:block;outline:0;transition:background 
250ms;scroll-snap-align:start}.md-search-result__link:focus,.md-search-result__link:hover{background-color:var(--md-accent-fg-color--transparent)}.md-search-result__link:focus .md-search-result__article::before,.md-search-result__link:hover .md-search-result__article::before{opacity:.7}.md-search-result__link:last-child .md-search-result__teaser{margin-bottom:.6rem}.md-search-result__article{position:relative;padding:0 .8rem;overflow:auto}@media screen and (min-width: 60em){.md-search-result__article{padding-left:2.2rem}[dir=rtl] .md-search-result__article{padding-right:2.2rem;padding-left:.8rem}}.md-search-result__article--document .md-search-result__title{margin:.55rem 0;font-weight:400;font-size:.8rem;line-height:1.4}.md-search-result__icon{position:absolute;left:0;margin:.1rem;padding:.4rem;color:var(--md-default-fg-color--light)}[dir=rtl] .md-search-result__icon{right:0;left:initial}[dir=rtl] .md-search-result__icon svg{transform:scaleX(-1)}@media screen and (max-width: 59.9375em){.md-search-result__icon{display:none}}.md-search-result__title{margin:.5em 0;font-weight:700;font-size:.64rem;line-height:1.4}.md-search-result__teaser{display:-webkit-box;max-height:1.65rem;margin:.5em 0;overflow:hidden;color:var(--md-default-fg-color--light);font-size:.64rem;line-height:1.4;text-overflow:ellipsis;-webkit-box-orient:vertical;-webkit-line-clamp:2}@media screen and (max-width: 44.9375em){.md-search-result__teaser{max-height:2.5rem;-webkit-line-clamp:3}}@media screen and (min-width: 60em)and (max-width: 76.1875em){.md-search-result__teaser{max-height:2.5rem;-webkit-line-clamp:3}}.md-search-result em{font-weight:700;font-style:normal;text-decoration:underline}.md-sidebar{position:sticky;top:2.4rem;width:12.1rem;padding:1.2rem 0;overflow:hidden}@media print{.md-sidebar{display:none}}@media screen and (max-width: 76.1875em){.md-sidebar--primary{position:fixed;top:0;left:-12.1rem;z-index:3;width:12.1rem;height:100%;background-color:var(--md-default-bg-color);transform:translateX(0);transition:transform 250ms cubic-bezier(0.4, 0, 0.2, 1),box-shadow 250ms}[dir=rtl] .md-sidebar--primary{right:-12.1rem;left:initial}[data-md-toggle=drawer]:checked~.md-container .md-sidebar--primary{box-shadow:0 8px 10px 1px rgba(0,0,0,.14),0 3px 14px 2px rgba(0,0,0,.12),0 5px 5px -3px rgba(0,0,0,.4);transform:translateX(12.1rem)}[dir=rtl] [data-md-toggle=drawer]:checked~.md-container .md-sidebar--primary{transform:translateX(-12.1rem)}.md-sidebar--primary .md-sidebar__scrollwrap{overflow:hidden}}.md-sidebar--secondary{display:none;order:2}@media screen and (min-width: 60em){.md-sidebar--secondary{display:block}.md-sidebar--secondary .md-sidebar__scrollwrap{touch-action:pan-y}}.md-sidebar__scrollwrap{max-height:100%;margin:0 .2rem;overflow-y:auto;backface-visibility:hidden;scroll-snap-type:y mandatory}@media screen and (max-width: 76.1875em){.md-sidebar--primary .md-sidebar__scrollwrap{position:absolute;top:0;right:0;bottom:0;left:0;margin:0;scroll-snap-type:none}}.md-sidebar__scrollwrap::-webkit-scrollbar{width:.2rem;height:.2rem}.md-sidebar__scrollwrap::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-sidebar__scrollwrap::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}@keyframes md-source__facts--done{0%{height:0}100%{height:.65rem}}@keyframes 
md-source__fact--done{0%{transform:translateY(100%);opacity:0}50%{opacity:0}100%{transform:translateY(0%);opacity:1}}.md-source{display:block;font-size:.65rem;line-height:1.2;white-space:nowrap;backface-visibility:hidden;transition:opacity 250ms}.md-source:hover{opacity:.7}.md-source__icon{display:inline-block;width:2.4rem;height:2.4rem;vertical-align:middle}.md-source__icon svg{margin-top:.6rem;margin-left:.6rem}[dir=rtl] .md-source__icon svg{margin-right:.6rem;margin-left:initial}.md-source__icon+.md-source__repository{margin-left:-2rem;padding-left:2rem}[dir=rtl] .md-source__icon+.md-source__repository{margin-right:-2rem;margin-left:initial;padding-right:2rem;padding-left:initial}.md-source__repository{display:inline-block;max-width:calc(100% - 1.2rem);margin-left:.6rem;overflow:hidden;font-weight:700;text-overflow:ellipsis;vertical-align:middle}.md-source__facts{margin:0;padding:0;overflow:hidden;font-weight:700;font-size:.55rem;list-style-type:none;opacity:.75}[data-md-state=done] .md-source__facts{animation:md-source__facts--done 250ms ease-in}.md-source__fact{float:left}[dir=rtl] .md-source__fact{float:right}[data-md-state=done] .md-source__fact{animation:md-source__fact--done 400ms ease-out}.md-source__fact::before{margin:0 .1rem;content:\"·\"}.md-source__fact:first-child::before{display:none}.md-tabs{width:100%;overflow:auto;color:var(--md-primary-bg-color);background-color:var(--md-primary-fg-color);transition:background 250ms}.no-js .md-tabs{transition:none}@media screen and (max-width: 76.1875em){.md-tabs{display:none}}@media print{.md-tabs{display:none}}.md-tabs__list{margin:0;margin-left:.2rem;padding:0;white-space:nowrap;list-style:none;contain:content}[dir=rtl] .md-tabs__list{margin-right:.2rem;margin-left:initial}.md-tabs__item{display:inline-block;height:2.4rem;padding-right:.6rem;padding-left:.6rem}.md-tabs__link{display:block;margin-top:.8rem;font-size:.7rem;opacity:.7;transition:transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),opacity 250ms}.no-js .md-tabs__link{transition:none}.md-tabs__link--active,.md-tabs__link:hover{color:inherit;opacity:1}.md-tabs__item:nth-child(2) .md-tabs__link{transition-delay:20ms}.md-tabs__item:nth-child(3) .md-tabs__link{transition-delay:40ms}.md-tabs__item:nth-child(4) .md-tabs__link{transition-delay:60ms}.md-tabs__item:nth-child(5) .md-tabs__link{transition-delay:80ms}.md-tabs__item:nth-child(6) .md-tabs__link{transition-delay:100ms}.md-tabs__item:nth-child(7) .md-tabs__link{transition-delay:120ms}.md-tabs__item:nth-child(8) .md-tabs__link{transition-delay:140ms}.md-tabs__item:nth-child(9) .md-tabs__link{transition-delay:160ms}.md-tabs__item:nth-child(10) .md-tabs__link{transition-delay:180ms}.md-tabs__item:nth-child(11) .md-tabs__link{transition-delay:200ms}.md-tabs__item:nth-child(12) .md-tabs__link{transition-delay:220ms}.md-tabs__item:nth-child(13) .md-tabs__link{transition-delay:240ms}.md-tabs__item:nth-child(14) .md-tabs__link{transition-delay:260ms}.md-tabs__item:nth-child(15) .md-tabs__link{transition-delay:280ms}.md-tabs__item:nth-child(16) .md-tabs__link{transition-delay:300ms}.md-tabs[data-md-state=hidden]{pointer-events:none}.md-tabs[data-md-state=hidden] .md-tabs__link{transform:translateY(50%);opacity:0;transition:color 250ms,transform 0ms 400ms,opacity 100ms}@media screen and (min-width: 76.25em){.md-tabs~.md-main .md-nav--primary>.md-nav__list>.md-nav__item--nested{display:none}.md-tabs--active~.md-main .md-nav--primary .md-nav__title{display:block;padding:0 
.6rem;pointer-events:none;scroll-snap-align:start}.md-tabs--active~.md-main .md-nav--primary .md-nav__title[for=__drawer]{display:none}.md-tabs--active~.md-main .md-nav--primary>.md-nav__list>.md-nav__item{display:none}.md-tabs--active~.md-main .md-nav--primary>.md-nav__list>.md-nav__item--active{display:block;padding:0}.md-tabs--active~.md-main .md-nav--primary>.md-nav__list>.md-nav__item--active>.md-nav__link{display:none}.md-tabs--active~.md-main .md-nav[data-md-level=\"1\"]>.md-nav__list>.md-nav__item{padding:0 .6rem}.md-tabs--active~.md-main .md-nav[data-md-level=\"1\"] .md-nav .md-nav__title{display:none}}:root{--md-admonition-icon--note: url(\"{{ pencil }}\");--md-admonition-icon--abstract: url(\"{{ text-subject }}\");--md-admonition-icon--info: url(\"{{ information }}\");--md-admonition-icon--tip: url(\"{{ fire }}\");--md-admonition-icon--success: url(\"{{ check-circle }}\");--md-admonition-icon--question: url(\"{{ help-circle }}\");--md-admonition-icon--warning: url(\"{{ alert }}\");--md-admonition-icon--failure: url(\"{{ close-circle }}\");--md-admonition-icon--danger: url(\"{{ flash-circle }}\");--md-admonition-icon--bug: url(\"{{ bug }}\");--md-admonition-icon--example: url(\"{{ format-list-numbered }}\");--md-admonition-icon--quote: url(\"{{ format-quote-close }}\")}.md-typeset .admonition,.md-typeset details{margin:1.5625em 0;padding:0 .6rem;overflow:hidden;font-size:.64rem;page-break-inside:avoid;border-left:.2rem solid #448aff;border-radius:.1rem;box-shadow:0 .2rem .5rem rgba(0,0,0,.05),0 0 .05rem rgba(0,0,0,.1)}[dir=rtl] .md-typeset .admonition,[dir=rtl] .md-typeset details{border-right:.2rem solid #448aff;border-left:none}@media print{.md-typeset .admonition,.md-typeset details{box-shadow:none}}html .md-typeset .admonition>:last-child,html .md-typeset details>:last-child{margin-bottom:.6rem}.md-typeset .admonition .admonition,.md-typeset details .admonition,.md-typeset .admonition details,.md-typeset details details{margin:1em 0}.md-typeset .admonition .md-typeset__scrollwrap,.md-typeset details .md-typeset__scrollwrap{margin:1em -0.6rem}.md-typeset .admonition .md-typeset__table,.md-typeset details .md-typeset__table{padding:0 .6rem}.md-typeset .admonition-title,.md-typeset summary{position:relative;margin:0 -0.6rem;padding:.4rem .6rem .4rem 2rem;font-weight:700;background-color:rgba(68,138,255,.1)}[dir=rtl] .md-typeset .admonition-title,[dir=rtl] .md-typeset summary{padding:.4rem 2rem .4rem .6rem}html .md-typeset .admonition-title:last-child,html .md-typeset summary:last-child{margin-bottom:0}.md-typeset .admonition-title::before,.md-typeset summary::before{position:absolute;left:.6rem;width:1rem;height:1rem;background-color:#448aff;mask-image:var(--md-admonition-icon--note);content:\"\"}[dir=rtl] .md-typeset .admonition-title::before,[dir=rtl] .md-typeset summary::before{right:.6rem;left:initial}.md-typeset .admonition-title code,.md-typeset summary code{margin:initial;padding:initial;color:currentColor;background-color:transparent;border-radius:initial;box-shadow:none}.md-typeset .admonition.note,.md-typeset details.note{border-color:#448aff}.md-typeset .note>.admonition-title,.md-typeset .note>summary{background-color:rgba(68,138,255,.1)}.md-typeset .note>.admonition-title::before,.md-typeset .note>summary::before{background-color:#448aff;mask-image:var(--md-admonition-icon--note)}.md-typeset .admonition.abstract,.md-typeset details.abstract,.md-typeset .admonition.tldr,.md-typeset details.tldr,.md-typeset .admonition.summary,.md-typeset 
details.summary{border-color:#00b0ff}.md-typeset .abstract>.admonition-title,.md-typeset .abstract>summary,.md-typeset .tldr>.admonition-title,.md-typeset .tldr>summary,.md-typeset .summary>.admonition-title,.md-typeset .summary>summary{background-color:rgba(0,176,255,.1)}.md-typeset .abstract>.admonition-title::before,.md-typeset .abstract>summary::before,.md-typeset .tldr>.admonition-title::before,.md-typeset .tldr>summary::before,.md-typeset .summary>.admonition-title::before,.md-typeset .summary>summary::before{background-color:#00b0ff;mask-image:var(--md-admonition-icon--abstract)}.md-typeset .admonition.info,.md-typeset details.info,.md-typeset .admonition.todo,.md-typeset details.todo{border-color:#00b8d4}.md-typeset .info>.admonition-title,.md-typeset .info>summary,.md-typeset .todo>.admonition-title,.md-typeset .todo>summary{background-color:rgba(0,184,212,.1)}.md-typeset .info>.admonition-title::before,.md-typeset .info>summary::before,.md-typeset .todo>.admonition-title::before,.md-typeset .todo>summary::before{background-color:#00b8d4;mask-image:var(--md-admonition-icon--info)}.md-typeset .admonition.tip,.md-typeset details.tip,.md-typeset .admonition.important,.md-typeset details.important,.md-typeset .admonition.hint,.md-typeset details.hint{border-color:#00bfa5}.md-typeset .tip>.admonition-title,.md-typeset .tip>summary,.md-typeset .important>.admonition-title,.md-typeset .important>summary,.md-typeset .hint>.admonition-title,.md-typeset .hint>summary{background-color:rgba(0,191,165,.1)}.md-typeset .tip>.admonition-title::before,.md-typeset .tip>summary::before,.md-typeset .important>.admonition-title::before,.md-typeset .important>summary::before,.md-typeset .hint>.admonition-title::before,.md-typeset .hint>summary::before{background-color:#00bfa5;mask-image:var(--md-admonition-icon--tip)}.md-typeset .admonition.success,.md-typeset details.success,.md-typeset .admonition.done,.md-typeset details.done,.md-typeset .admonition.check,.md-typeset details.check{border-color:#00c853}.md-typeset .success>.admonition-title,.md-typeset .success>summary,.md-typeset .done>.admonition-title,.md-typeset .done>summary,.md-typeset .check>.admonition-title,.md-typeset .check>summary{background-color:rgba(0,200,83,.1)}.md-typeset .success>.admonition-title::before,.md-typeset .success>summary::before,.md-typeset .done>.admonition-title::before,.md-typeset .done>summary::before,.md-typeset .check>.admonition-title::before,.md-typeset .check>summary::before{background-color:#00c853;mask-image:var(--md-admonition-icon--success)}.md-typeset .admonition.question,.md-typeset details.question,.md-typeset .admonition.faq,.md-typeset details.faq,.md-typeset .admonition.help,.md-typeset details.help{border-color:#64dd17}.md-typeset .question>.admonition-title,.md-typeset .question>summary,.md-typeset .faq>.admonition-title,.md-typeset .faq>summary,.md-typeset .help>.admonition-title,.md-typeset .help>summary{background-color:rgba(100,221,23,.1)}.md-typeset .question>.admonition-title::before,.md-typeset .question>summary::before,.md-typeset .faq>.admonition-title::before,.md-typeset .faq>summary::before,.md-typeset .help>.admonition-title::before,.md-typeset .help>summary::before{background-color:#64dd17;mask-image:var(--md-admonition-icon--question)}.md-typeset .admonition.warning,.md-typeset details.warning,.md-typeset .admonition.attention,.md-typeset details.attention,.md-typeset .admonition.caution,.md-typeset details.caution{border-color:#ff9100}.md-typeset .warning>.admonition-title,.md-typeset 
.warning>summary,.md-typeset .attention>.admonition-title,.md-typeset .attention>summary,.md-typeset .caution>.admonition-title,.md-typeset .caution>summary{background-color:rgba(255,145,0,.1)}.md-typeset .warning>.admonition-title::before,.md-typeset .warning>summary::before,.md-typeset .attention>.admonition-title::before,.md-typeset .attention>summary::before,.md-typeset .caution>.admonition-title::before,.md-typeset .caution>summary::before{background-color:#ff9100;mask-image:var(--md-admonition-icon--warning)}.md-typeset .admonition.failure,.md-typeset details.failure,.md-typeset .admonition.missing,.md-typeset details.missing,.md-typeset .admonition.fail,.md-typeset details.fail{border-color:#ff5252}.md-typeset .failure>.admonition-title,.md-typeset .failure>summary,.md-typeset .missing>.admonition-title,.md-typeset .missing>summary,.md-typeset .fail>.admonition-title,.md-typeset .fail>summary{background-color:rgba(255,82,82,.1)}.md-typeset .failure>.admonition-title::before,.md-typeset .failure>summary::before,.md-typeset .missing>.admonition-title::before,.md-typeset .missing>summary::before,.md-typeset .fail>.admonition-title::before,.md-typeset .fail>summary::before{background-color:#ff5252;mask-image:var(--md-admonition-icon--failure)}.md-typeset .admonition.danger,.md-typeset details.danger,.md-typeset .admonition.error,.md-typeset details.error{border-color:#ff1744}.md-typeset .danger>.admonition-title,.md-typeset .danger>summary,.md-typeset .error>.admonition-title,.md-typeset .error>summary{background-color:rgba(255,23,68,.1)}.md-typeset .danger>.admonition-title::before,.md-typeset .danger>summary::before,.md-typeset .error>.admonition-title::before,.md-typeset .error>summary::before{background-color:#ff1744;mask-image:var(--md-admonition-icon--danger)}.md-typeset .admonition.bug,.md-typeset details.bug{border-color:#f50057}.md-typeset .bug>.admonition-title,.md-typeset .bug>summary{background-color:rgba(245,0,87,.1)}.md-typeset .bug>.admonition-title::before,.md-typeset .bug>summary::before{background-color:#f50057;mask-image:var(--md-admonition-icon--bug)}.md-typeset .admonition.example,.md-typeset details.example{border-color:#651fff}.md-typeset .example>.admonition-title,.md-typeset .example>summary{background-color:rgba(101,31,255,.1)}.md-typeset .example>.admonition-title::before,.md-typeset .example>summary::before{background-color:#651fff;mask-image:var(--md-admonition-icon--example)}.md-typeset .admonition.quote,.md-typeset details.quote,.md-typeset .admonition.cite,.md-typeset details.cite{border-color:#9e9e9e}.md-typeset .quote>.admonition-title,.md-typeset .quote>summary,.md-typeset .cite>.admonition-title,.md-typeset .cite>summary{background-color:rgba(158,158,158,.1)}.md-typeset .quote>.admonition-title::before,.md-typeset .quote>summary::before,.md-typeset .cite>.admonition-title::before,.md-typeset .cite>summary::before{background-color:#9e9e9e;mask-image:var(--md-admonition-icon--quote)}.codehilite .o,.highlight .o{color:inherit}.codehilite .ow,.highlight .ow{color:inherit}.codehilite .ge,.highlight .ge{color:#000}.codehilite .gr,.highlight .gr{color:#a00}.codehilite .gh,.highlight .gh{color:#999}.codehilite .go,.highlight .go{color:#888}.codehilite .gp,.highlight .gp{color:#555}.codehilite .gs,.highlight .gs{color:inherit}.codehilite .gu,.highlight .gu{color:#aaa}.codehilite .gt,.highlight .gt{color:#a00}.codehilite .gd,.highlight .gd{background-color:#fdd}.codehilite .gi,.highlight .gi{background-color:#dfd}.codehilite .k,.highlight 
.k{color:#3b78e7}.codehilite .kc,.highlight .kc{color:#a71d5d}.codehilite .kd,.highlight .kd{color:#3b78e7}.codehilite .kn,.highlight .kn{color:#3b78e7}.codehilite .kp,.highlight .kp{color:#a71d5d}.codehilite .kr,.highlight .kr{color:#3e61a2}.codehilite .kt,.highlight .kt{color:#3e61a2}.codehilite .c,.highlight .c{color:#999}.codehilite .cm,.highlight .cm{color:#999}.codehilite .cp,.highlight .cp{color:#666}.codehilite .c1,.highlight .c1{color:#999}.codehilite .ch,.highlight .ch{color:#999}.codehilite .cs,.highlight .cs{color:#999}.codehilite .na,.highlight .na{color:#c2185b}.codehilite .nb,.highlight .nb{color:#c2185b}.codehilite .bp,.highlight .bp{color:#3e61a2}.codehilite .nc,.highlight .nc{color:#c2185b}.codehilite .no,.highlight .no{color:#3e61a2}.codehilite .nd,.highlight .nd{color:#666}.codehilite .ni,.highlight .ni{color:#666}.codehilite .ne,.highlight .ne{color:#c2185b}.codehilite .nf,.highlight .nf{color:#c2185b}.codehilite .nl,.highlight .nl{color:#3b5179}.codehilite .nn,.highlight .nn{color:#ec407a}.codehilite .nt,.highlight .nt{color:#3b78e7}.codehilite .nv,.highlight .nv{color:#3e61a2}.codehilite .vc,.highlight .vc{color:#3e61a2}.codehilite .vg,.highlight .vg{color:#3e61a2}.codehilite .vi,.highlight .vi{color:#3e61a2}.codehilite .nx,.highlight .nx{color:#ec407a}.codehilite .m,.highlight .m{color:#e74c3c}.codehilite .mf,.highlight .mf{color:#e74c3c}.codehilite .mh,.highlight .mh{color:#e74c3c}.codehilite .mi,.highlight .mi{color:#e74c3c}.codehilite .il,.highlight .il{color:#e74c3c}.codehilite .mo,.highlight .mo{color:#e74c3c}.codehilite .s,.highlight .s{color:#0d904f}.codehilite .sb,.highlight .sb{color:#0d904f}.codehilite .sc,.highlight .sc{color:#0d904f}.codehilite .sd,.highlight .sd{color:#999}.codehilite .s2,.highlight .s2{color:#0d904f}.codehilite .se,.highlight .se{color:#183691}.codehilite .sh,.highlight .sh{color:#183691}.codehilite .si,.highlight .si{color:#183691}.codehilite .sx,.highlight .sx{color:#183691}.codehilite .sr,.highlight .sr{color:#009926}.codehilite .s1,.highlight .s1{color:#0d904f}.codehilite .ss,.highlight .ss{color:#0d904f}.codehilite .err,.highlight .err{color:#a61717}.codehilite .w,.highlight .w{color:transparent}.codehilite .hll,.highlight .hll{display:block;margin:0 -1.1764705882em;padding:0 1.1764705882em;background-color:rgba(255,235,59,.5)}.codehilitetable,.highlighttable{display:block;overflow:hidden}.codehilitetable tbody,.highlighttable tbody,.codehilitetable td,.highlighttable td{display:block;padding:0}.codehilitetable tr,.highlighttable tr{display:flex}.codehilitetable pre,.highlighttable pre{margin:0}.codehilitetable .linenos,.highlighttable .linenos{padding:.525rem 1.1764705882em;padding-right:0;font-size:.85em;background-color:var(--md-code-bg-color);user-select:none}.codehilitetable .linenodiv,.highlighttable .linenodiv{padding-right:.5882352941em;box-shadow:inset -0.05rem 0 var(--md-default-fg-color--lightest)}.codehilitetable .linenodiv pre,.highlighttable .linenodiv pre{color:var(--md-default-fg-color--lighter);text-align:right}.codehilitetable .code,.highlighttable .code{flex:1;overflow:hidden}.md-typeset .codehilitetable,.md-typeset .highlighttable{margin:1em 0;direction:ltr;border-radius:.1rem}.md-typeset .codehilitetable code,.md-typeset .highlighttable code{border-radius:0}@media screen and (max-width: 44.9375em){.md-typeset>.codehilite,.md-typeset>.highlight{margin:1em -0.8rem}.md-typeset>.codehilite .hll,.md-typeset>.highlight .hll{margin:0 -0.8rem;padding:0 .8rem}.md-typeset>.codehilite code,.md-typeset>.highlight 
code{border-radius:0}.md-typeset>.codehilitetable,.md-typeset>.highlighttable{margin:1em -0.8rem;border-radius:0}.md-typeset>.codehilitetable .hll,.md-typeset>.highlighttable .hll{margin:0 -0.8rem;padding:0 .8rem}}:root{--md-footnotes-icon: url(\"{{ keyboard-return }}\")}.md-typeset [id^=\"fnref:\"]{display:inline-block}.md-typeset [id^=\"fnref:\"]:target{margin-top:-3.8rem;padding-top:3.8rem;pointer-events:none}.md-typeset [id^=\"fn:\"]::before{display:none;height:0;content:\"\"}.md-typeset [id^=\"fn:\"]:target::before{display:block;margin-top:-3.5rem;padding-top:3.5rem;pointer-events:none}.md-typeset .footnote{color:var(--md-default-fg-color--light);font-size:.64rem}.md-typeset .footnote ol{margin-left:0}.md-typeset .footnote li{transition:color 125ms}.md-typeset .footnote li:target{color:var(--md-default-fg-color)}.md-typeset .footnote li :first-child{margin-top:0}.md-typeset .footnote li:hover .footnote-backref,.md-typeset .footnote li:target .footnote-backref{transform:translateX(0);opacity:1}.md-typeset .footnote li:hover .footnote-backref:hover{color:var(--md-accent-fg-color)}.md-typeset .footnote-ref{display:inline-block;pointer-events:initial}.md-typeset .footnote-backref{display:inline-block;color:var(--md-primary-fg-color);font-size:0;vertical-align:text-bottom;transform:translateX(0.25rem);opacity:0;transition:color 250ms,transform 250ms 250ms,opacity 125ms 250ms}[dir=rtl] .md-typeset .footnote-backref{transform:translateX(-0.25rem)}.md-typeset .footnote-backref::before{display:inline-block;width:.8rem;height:.8rem;background-color:currentColor;mask-image:var(--md-footnotes-icon);content:\"\"}[dir=rtl] .md-typeset .footnote-backref::before svg{transform:scaleX(-1)}@media print{.md-typeset .footnote-backref{color:var(--md-primary-fg-color);transform:translateX(0);opacity:1}}.md-typeset .headerlink{display:inline-block;margin-left:.5rem;visibility:hidden;opacity:0;transition:color 250ms,visibility 0ms 500ms,opacity 125ms}[dir=rtl] .md-typeset .headerlink{margin-right:.5rem;margin-left:initial}html body .md-typeset .headerlink{color:var(--md-default-fg-color--lighter)}@media print{.md-typeset .headerlink{display:none}}.md-typeset :hover>.headerlink,.md-typeset :target>.headerlink,.md-typeset .headerlink:focus{visibility:visible;opacity:1;transition:color 250ms,visibility 0ms,opacity 125ms}.md-typeset :target>.headerlink,.md-typeset .headerlink:focus,.md-typeset .headerlink:hover{color:var(--md-accent-fg-color)}.md-typeset h3[id]::before,.md-typeset h2[id]::before,.md-typeset h1[id]::before{display:block;margin-top:-0.4rem;padding-top:.4rem;content:\"\"}.md-typeset h3[id]:target::before,.md-typeset h2[id]:target::before,.md-typeset h1[id]:target::before{margin-top:-3.4rem;padding-top:3.4rem}.md-typeset h4[id]::before{display:block;margin-top:-0.45rem;padding-top:.45rem;content:\"\"}.md-typeset h4[id]:target::before{margin-top:-3.45rem;padding-top:3.45rem}.md-typeset h6[id]::before,.md-typeset h5[id]::before{display:block;margin-top:-0.6rem;padding-top:.6rem;content:\"\"}.md-typeset h6[id]:target::before,.md-typeset h5[id]:target::before{margin-top:-3.6rem;padding-top:3.6rem}.md-typeset .MJXc-display{margin:.75em 0;padding:.75em 0;overflow:auto;touch-action:auto}@media screen and (max-width: 44.9375em){.md-typeset>p>.MJXc-display{margin:.75em -0.8rem;padding:.25em .8rem}}.md-typeset .MathJax_CHTML{outline:0}.md-typeset del.critic,.md-typeset ins.critic,.md-typeset .critic.comment{padding:0 .25em;border-radius:.1rem;box-decoration-break:clone}.md-typeset 
del.critic{background-color:#fdd}.md-typeset ins.critic{background-color:#dfd}.md-typeset .critic.comment{color:#999}.md-typeset .critic.comment::before{content:\"/* \"}.md-typeset .critic.comment::after{content:\" */\"}.md-typeset .critic.block{display:block;margin:1em 0;padding-right:.8rem;padding-left:.8rem;overflow:auto;box-shadow:none}.md-typeset .critic.block :first-child{margin-top:.5em}.md-typeset .critic.block :last-child{margin-bottom:.5em}:root{--md-details-icon: url(\"{{ chevron-right }}\")}.md-typeset details{display:block;padding-top:0;overflow:visible}.md-typeset details[open]>summary::after{transform:rotate(90deg)}.md-typeset details:not([open]){padding-bottom:0}.md-typeset details:not([open])>summary{border-bottom-right-radius:.1rem}.md-typeset details::after{display:table;content:\"\"}.md-typeset summary{display:block;min-height:1rem;padding:.4rem 1.8rem .4rem 2rem;border-top-right-radius:.1rem;cursor:pointer}[dir=rtl] .md-typeset summary{padding:.4rem 2rem .4rem 1.8rem}.md-typeset summary::-webkit-details-marker{display:none}.md-typeset summary::after{position:absolute;top:.4rem;right:.4rem;width:1rem;height:1rem;background-color:currentColor;mask-image:var(--md-details-icon);transform:rotate(0deg);transition:transform 250ms;content:\"\"}[dir=rtl] .md-typeset summary::after{right:initial;left:.4rem;transform:rotate(180deg)}.md-typeset img.emojione,.md-typeset img.twemoji,.md-typeset img.gemoji{width:1.125em;vertical-align:-15%}.md-typeset span.twemoji{display:inline-block;height:1.125em;vertical-align:text-top}.md-typeset span.twemoji svg{width:1.125em;fill:currentColor}.highlight [data-linenos]::before{position:sticky;left:-1.1764705882em;float:left;margin-right:1.1764705882em;margin-left:-1.1764705882em;padding-left:1.1764705882em;color:var(--md-default-fg-color--lighter);background-color:var(--md-code-bg-color);box-shadow:inset -0.05rem 0 var(--md-default-fg-color--lightest);content:attr(data-linenos);user-select:none}.md-typeset .tabbed-content{display:none;order:99;width:100%;box-shadow:0 -0.05rem var(--md-default-fg-color--lightest)}.md-typeset .tabbed-content>.codehilite:only-child pre,.md-typeset .tabbed-content>.codehilitetable:only-child,.md-typeset .tabbed-content>.highlight:only-child pre,.md-typeset .tabbed-content>.highlighttable:only-child{margin:0}.md-typeset .tabbed-content>.codehilite:only-child pre>code,.md-typeset .tabbed-content>.codehilitetable:only-child>code,.md-typeset .tabbed-content>.highlight:only-child pre>code,.md-typeset .tabbed-content>.highlighttable:only-child>code{border-top-left-radius:0;border-top-right-radius:0}.md-typeset .tabbed-content>.tabbed-set{margin:0}.md-typeset .tabbed-set{position:relative;display:flex;flex-wrap:wrap;margin:1em 0;border-radius:.1rem}.md-typeset .tabbed-set>input{display:none}.md-typeset .tabbed-set>input:checked+label{color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color)}.md-typeset .tabbed-set>input:checked+label+.tabbed-content{display:block}.md-typeset .tabbed-set>label{z-index:1;width:auto;padding:.6rem 1.25em .5rem;color:var(--md-default-fg-color--light);font-weight:700;font-size:.64rem;border-bottom:.1rem solid transparent;cursor:pointer;transition:color 125ms}html .md-typeset .tabbed-set>label:hover{color:var(--md-accent-fg-color)}:root{--md-tasklist-icon: url(\"{{ checkbox-blank-circle }}\");--md-tasklist-icon--checked: url(\"{{ check-circle }}\")}.md-typeset .task-list-item{position:relative;list-style-type:none}.md-typeset .task-list-item 
[type=checkbox]{position:absolute;top:.45em;left:-2em}[dir=rtl] .md-typeset .task-list-item [type=checkbox]{right:-2em;left:initial}.md-typeset .task-list-control .task-list-indicator::before{position:absolute;top:.15em;left:-1.5em;width:1.25em;height:1.25em;background-color:var(--md-default-fg-color--lightest);mask-image:var(--md-tasklist-icon);content:\"\"}[dir=rtl] .md-typeset .task-list-control .task-list-indicator::before{right:-1.5em;left:initial}.md-typeset .task-list-control [type=checkbox]:checked+.task-list-indicator::before{background-color:#00e676;mask-image:var(--md-tasklist-icon--checked)}.md-typeset .task-list-control [type=checkbox]{z-index:-1;opacity:0}","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// stylelint-disable no-duplicate-selectors\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Enforce correct box model\nhtml {\n box-sizing: border-box;\n}\n\n// All elements shall inherit the document default\n*,\n*::before,\n*::after {\n box-sizing: inherit;\n}\n\n// Prevent adjustments of font size after orientation changes in IE and iOS\nhtml {\n text-size-adjust: none;\n}\n\n// Remove margin in all browsers\nbody {\n margin: 0;\n}\n\n// Reset horizontal rules in FF\nhr {\n box-sizing: content-box;\n overflow: visible;\n}\n\n// Reset tap outlines on iOS and Android\na,\nbutton,\nlabel,\ninput {\n -webkit-tap-highlight-color: transparent;\n}\n\n// Reset link styles\na {\n color: inherit;\n text-decoration: none;\n}\n\n// Normalize font-size in all browsers\nsmall {\n font-size: 80%;\n}\n\n// Prevent subscript and superscript from affecting line-height\nsub,\nsup {\n position: relative;\n font-size: 80%;\n line-height: 0;\n vertical-align: baseline;\n}\n\n// Correct subscript offset\nsub {\n bottom: -0.25em;\n}\n\n// Correct superscript offset\nsup {\n top: -0.5em;\n}\n\n// Remove borders on images\nimg {\n border-style: none;\n}\n\n// Reset table styles\ntable {\n border-collapse: separate;\n border-spacing: 0;\n}\n\n// Reset table cell styles\ntd,\nth {\n font-weight: normal; // stylelint-disable-line\n vertical-align: top;\n}\n\n// Reset button styles\nbutton {\n margin: 0;\n padding: 0;\n font-size: inherit;\n background: transparent;\n border: 0;\n}\n\n// Reset input styles\ninput {\n border: 0;\n outline: 0;\n}\n","////\n/// Copyright (c) 2016-2020 
Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Color definitions\n:root {\n\n // Default color shades\n --md-default-fg-color: hsla(0, 0%, 0%, 0.87);\n --md-default-fg-color--light: hsla(0, 0%, 0%, 0.54);\n --md-default-fg-color--lighter: hsla(0, 0%, 0%, 0.26);\n --md-default-fg-color--lightest: hsla(0, 0%, 0%, 0.07);\n --md-default-bg-color: hsla(0, 0%, 100%, 1);\n --md-default-bg-color--light: hsla(0, 0%, 100%, 0.7);\n --md-default-bg-color--lighter: hsla(0, 0%, 100%, 0.3);\n --md-default-bg-color--lightest: hsla(0, 0%, 100%, 0.12);\n\n // Primary color shades\n --md-primary-fg-color: hsla(#{hex2hsl($clr-indigo-500)}, 1);\n --md-primary-fg-color--light: hsla(#{hex2hsl($clr-indigo-300)}, 1);\n --md-primary-fg-color--dark: hsla(#{hex2hsl($clr-indigo-700)}, 1);\n --md-primary-bg-color: var(--md-default-bg-color);\n --md-primary-bg-color--light: var(--md-default-bg-color--light);\n\n // Accent color shades\n --md-accent-fg-color: hsla(#{hex2hsl($clr-indigo-a200)}, 1);\n --md-accent-fg-color--transparent: hsla(#{hex2hsl($clr-indigo-a200)}, 0.1);\n --md-accent-bg-color: var(--md-default-bg-color);\n --md-accent-bg-color--light: var(--md-default-bg-color--light);\n\n // Code block color shades\n --md-code-bg-color: hsla(0, 0%, 96%, 1);\n --md-code-fg-color: hsla(200, 18%, 26%, 1);\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Icon\n.md-icon {\n\n // SVG defaults\n svg {\n display: block;\n width: px2rem(24px);\n height: px2rem(24px);\n margin: 0 auto;\n fill: currentColor;\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules: font definitions\n// ----------------------------------------------------------------------------\n\n// Enable font-smoothing in Webkit and FF\nbody {\n -webkit-font-smoothing: antialiased;\n -moz-osx-font-smoothing: grayscale;\n}\n\n// Default fonts\nbody,\ninput {\n color: var(--md-default-fg-color);\n font-feature-settings: \"kern\", \"liga\";\n font-family: -apple-system, BlinkMacSystemFont, Helvetica, Arial, sans-serif;\n}\n\n// Proportionally spaced fonts\ncode,\npre,\nkbd {\n color: var(--md-default-fg-color);\n font-feature-settings: \"kern\";\n font-family: SFMono-Regular, Consolas, Menlo, monospace;\n}\n\n// ----------------------------------------------------------------------------\n// Rules: typesetted content\n// ----------------------------------------------------------------------------\n\n// Content that is typeset - if possible, all margins, paddings and font sizes\n// should be set in ems, so nested blocks (e.g. 
Admonition) render correctly,\n// except headlines that should only appear on the top level and need to have\n// consistent spacing due to layout constraints.\n.md-typeset {\n font-size: ms(0);\n line-height: 1.6;\n color-adjust: exact;\n\n // Default spacing\n p,\n ul,\n ol,\n blockquote {\n margin: 1em 0;\n }\n\n // 1st level headline\n h1 {\n margin: 0 0 px2rem(40px);\n color: var(--md-default-fg-color--light);\n font-weight: 300;\n font-size: ms(3);\n line-height: 1.3;\n letter-spacing: -0.01em;\n }\n\n // 2nd level headline\n h2 {\n margin: px2rem(40px) 0 px2rem(16px);\n font-weight: 300;\n font-size: ms(2);\n line-height: 1.4;\n letter-spacing: -0.01em;\n }\n\n // 3rd level headline\n h3 {\n margin: px2rem(32px) 0 px2rem(16px);\n font-weight: 400;\n font-size: ms(1);\n line-height: 1.5;\n letter-spacing: -0.01em;\n }\n\n // 3rd level headline following an 2nd level headline\n h2 + h3 {\n margin-top: px2rem(16px);\n }\n\n // 4th level headline\n h4 {\n margin: px2rem(16px) 0;\n font-weight: 700;\n font-size: ms(0);\n letter-spacing: -0.01em;\n }\n\n // 5th and 6th level headline\n h5,\n h6 {\n margin: px2rem(16px) 0;\n color: var(--md-default-fg-color--light);\n font-weight: 700;\n font-size: ms(-1);\n letter-spacing: -0.01em;\n }\n\n // Overrides for 5th level headline\n h5 {\n text-transform: uppercase;\n }\n\n // Horizontal separators\n hr {\n margin: 1.5em 0;\n border-bottom: px2rem(1px) dotted var(--md-default-fg-color--lighter);\n }\n\n // Links\n a {\n color: var(--md-primary-fg-color);\n word-break: break-word;\n\n // Also enable color transition on pseudo elements\n &,\n &::before {\n transition: color 125ms;\n }\n\n // Focused or hover links\n &:focus,\n &:hover {\n color: var(--md-accent-fg-color);\n }\n }\n\n // Code blocks\n code,\n pre,\n kbd {\n color: var(--md-code-fg-color);\n direction: ltr;\n\n // Wrap text and hide scollbars\n @media print {\n white-space: pre-wrap;\n }\n }\n\n // Inline code blocks\n code {\n padding: 0 px2em(4px, 13.6px);\n font-size: px2em(13.6px);\n word-break: break-word;\n background-color: var(--md-code-bg-color);\n border-radius: px2rem(2px);\n box-decoration-break: clone;\n }\n\n // Disable containing block inside headlines\n h1 code,\n h2 code,\n h3 code,\n h4 code,\n h5 code,\n h6 code {\n margin: initial;\n padding: initial;\n background-color: transparent;\n box-shadow: none;\n }\n\n // Ensure link color in code blocks\n a > code {\n color: currentColor;\n }\n\n // Unformatted code blocks\n pre {\n position: relative;\n margin: 1em 0;\n line-height: 1.4;\n\n // Actual container with code, overflowing\n > code {\n display: block;\n margin: 0;\n padding: px2rem(10.5px) px2em(16px, 13.6px);\n overflow: auto;\n word-break: normal;\n box-shadow: none;\n box-decoration-break: slice;\n touch-action: auto;\n\n // Override native scrollbar styles\n &::-webkit-scrollbar {\n width: px2rem(4px);\n height: px2rem(4px);\n }\n\n // Scrollbar thumb\n &::-webkit-scrollbar-thumb {\n background-color: var(--md-default-fg-color--lighter);\n\n // Hovered scrollbar thumb\n &:hover {\n background-color: var(--md-accent-fg-color);\n }\n }\n }\n }\n\n // [mobile -]: Stretch to whole width\n @include break-to-device(mobile) {\n\n // Stretch top-level containers\n > pre {\n margin: 1em px2rem(-16px);\n\n // Remove rounded borders\n code {\n border-radius: 0;\n }\n }\n }\n\n // Keystrokes\n kbd {\n display: inline-block;\n padding: 0 px2em(8px, 12px);\n font-size: px2em(12px);\n line-height: 1.5;\n vertical-align: text-top;\n word-break: break-word;\n 
border-radius: px2rem(2px);\n box-shadow:\n 0 px2rem(2px) 0 px2rem(1px) var(--md-default-fg-color--lighter),\n 0 px2rem(2px) 0 var(--md-default-fg-color--lighter),\n inset 0 px2rem(-2px) px2rem(4px) var(--md-default-bg-color);\n }\n\n // Text highlighting marker\n mark {\n padding: 0 px2em(4px, 16px);\n word-break: break-word;\n background-color: transparentize($clr-yellow-500, 0.5);\n border-radius: px2rem(2px);\n box-decoration-break: clone;\n }\n\n // Abbreviations\n abbr {\n text-decoration: none;\n border-bottom: px2rem(1px) dotted var(--md-default-fg-color--light);\n cursor: help;\n }\n\n // Small text\n small {\n opacity: 0.75;\n }\n\n // Superscript and subscript\n sup,\n sub {\n margin-left: px2em(1px, 12.8px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n margin-right: px2em(1px, 12.8px);\n margin-left: initial;\n }\n }\n\n // Blockquotes, possibly nested\n blockquote {\n padding-left: px2rem(12px);\n color: var(--md-default-fg-color--light);\n border-left: px2rem(4px) solid var(--md-default-fg-color--lighter);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding-right: px2rem(12px);\n padding-left: initial;\n border-right: px2rem(4px) solid var(--md-default-fg-color--lighter);\n border-left: initial;\n }\n }\n\n // Unordered lists\n ul {\n list-style-type: disc;\n }\n\n // Unordered and ordered lists\n ul,\n ol {\n margin-left: px2em(10px, 16px);\n padding: 0;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n margin-right: px2em(10px, 16px);\n margin-left: initial;\n }\n\n // Nested ordered lists\n ol {\n list-style-type: lower-alpha;\n\n // Triply nested ordered list\n ol {\n list-style-type: lower-roman;\n }\n }\n\n // List elements\n li {\n margin-bottom: 0.5em;\n margin-left: px2em(20px, 16px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n margin-right: px2em(20px, 16px);\n margin-left: initial;\n }\n\n // Decrease vertical spacing\n p,\n blockquote {\n margin: 0.5em 0;\n }\n\n // Remove margin on last element\n &:last-child {\n margin-bottom: 0;\n }\n\n // Nested lists\n ul,\n ol {\n margin: 0.5em 0 0.5em px2em(10px, 16px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n margin-right: px2em(10px, 16px);\n margin-left: initial;\n }\n }\n }\n }\n\n // Definition lists\n dd {\n margin: 1em 0 1em px2em(30px, 16px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n margin-right: px2em(30px, 16px);\n margin-left: initial;\n }\n }\n\n // Limit width to container\n iframe,\n img,\n svg {\n max-width: 100%;\n }\n\n // Data tables\n table:not([class]) {\n display: inline-block;\n max-width: 100%;\n overflow: auto;\n font-size: ms(-1);\n background: var(--md-default-bg-color);\n border-radius: px2rem(2px);\n box-shadow:\n 0 px2rem(4px) px2rem(10px) hsla(0, 0%, 0%, 0.05),\n 0 0 px2rem(1px) hsla(0, 0%, 0%, 0.1);\n touch-action: auto;\n\n // Due to margin collapse because of the necessary inline-block hack, we\n // cannot increase the bottom margin on the table, so we just increase the\n // top margin on the following element\n & + * {\n margin-top: 1.5em;\n }\n\n // Table headings and cells\n th:not([align]),\n td:not([align]) {\n text-align: left;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n text-align: right;\n }\n }\n\n // Table headings\n th {\n min-width: px2rem(100px);\n padding: px2rem(12px) px2rem(16px);\n color: var(--md-default-bg-color);\n vertical-align: top;\n background-color: var(--md-default-fg-color--light);\n }\n\n // Table cells\n td {\n padding: 
px2rem(12px) px2rem(16px);\n vertical-align: top;\n border-top: px2rem(1px) solid var(--md-default-fg-color--lightest);\n }\n\n // Table rows\n tr {\n transition: background-color 125ms;\n\n // Add background on hover\n &:hover {\n background-color: rgba(0, 0, 0, 0.035);\n box-shadow: 0 px2rem(1px) 0 var(--md-default-bg-color) inset;\n }\n\n // Remove top border on first row\n &:first-child td {\n border-top: 0;\n }\n }\n\n\n // Do not wrap links in tables\n a {\n word-break: normal;\n }\n }\n\n // Wrapper for scrolling on overflow\n &__scrollwrap {\n margin: 1em px2rem(-16px);\n overflow-x: auto;\n touch-action: auto;\n }\n\n // Data table wrapper, in case JavaScript is available\n &__table {\n display: inline-block;\n margin-bottom: 0.5em;\n padding: 0 px2rem(16px);\n\n // Data tables\n table {\n display: table;\n width: 100%;\n margin: 0;\n overflow: hidden;\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Variables\n// ----------------------------------------------------------------------------\n\n///\n/// Device-specific breakpoints\n///\n/// @example\n/// $break-devices: (\n/// mobile: (\n/// portrait: 220px 479px,\n/// landscape: 480px 719px\n/// ),\n/// tablet: (\n/// portrait: 720px 959px,\n/// landscape: 960px 1219px\n/// ),\n/// screen: (\n/// small: 1220px 1599px,\n/// medium: 1600px 1999px,\n/// large: 2000px\n/// )\n/// );\n///\n$break-devices: () !default;\n\n// ----------------------------------------------------------------------------\n// Helpers\n// ----------------------------------------------------------------------------\n\n///\n/// Choose minimum and maximum device widths\n///\n@function break-select-min-max($devices) {\n $min: 1000000;\n $max: 0;\n @each $key, $value in $devices {\n @while type-of($value) == map {\n $value: break-select-min-max($value);\n }\n @if type-of($value) == list {\n @each $number in $value {\n @if type-of($number) == number {\n $min: min($number, $min);\n @if $max != null {\n $max: max($number, $max);\n }\n } @else {\n @error \"Invalid number: #{$number}\";\n }\n }\n } @else if type-of($value) == number {\n $min: min($value, $min);\n $max: null;\n } @else {\n @error \"Invalid value: #{$value}\";\n }\n }\n @return $min, $max;\n}\n\n///\n/// Select minimum and maximum widths for a device breakpoint\n///\n@function break-select-device($device) {\n $current: 
$break-devices;\n @for $n from 1 through length($device) {\n @if type-of($current) == map {\n $current: map-get($current, nth($device, $n));\n } @else {\n @error \"Invalid device map: #{$devices}\";\n }\n }\n @if type-of($current) == list or type-of($current) == number {\n $current: (default: $current);\n }\n @return break-select-min-max($current);\n}\n\n// ----------------------------------------------------------------------------\n// Mixins\n// ----------------------------------------------------------------------------\n\n///\n/// A minimum-maximum media query breakpoint\n///\n@mixin break-at($breakpoint) {\n @if type-of($breakpoint) == number {\n @media screen and (min-width: $breakpoint) {\n @content;\n }\n } @else if type-of($breakpoint) == list {\n $min: nth($breakpoint, 1);\n $max: nth($breakpoint, 2);\n @if type-of($min) == number and type-of($max) == number {\n @media screen and (min-width: $min) and (max-width: $max) {\n @content;\n }\n } @else {\n @error \"Invalid breakpoint: #{$breakpoint}\";\n }\n } @else {\n @error \"Invalid breakpoint: #{$breakpoint}\";\n }\n}\n\n///\n/// An orientation media query breakpoint\n///\n@mixin break-at-orientation($breakpoint) {\n @if type-of($breakpoint) == string {\n @media screen and (orientation: $breakpoint) {\n @content;\n }\n } @else {\n @error \"Invalid breakpoint: #{$breakpoint}\";\n }\n}\n\n///\n/// A maximum-aspect-ratio media query breakpoint\n///\n@mixin break-at-ratio($breakpoint) {\n @if type-of($breakpoint) == number {\n @media screen and (max-aspect-ratio: $breakpoint) {\n @content;\n }\n } @else {\n @error \"Invalid breakpoint: #{$breakpoint}\";\n }\n}\n\n///\n/// A minimum-maximum media query device breakpoint\n///\n@mixin break-at-device($device) {\n @if type-of($device) == string {\n $device: $device,;\n }\n @if type-of($device) == list {\n $breakpoint: break-select-device($device);\n @if nth($breakpoint, 2) != null {\n $min: nth($breakpoint, 1);\n $max: nth($breakpoint, 2);\n @media screen and (min-width: $min) and (max-width: $max) {\n @content;\n }\n } @else {\n @error \"Invalid device: #{$device}\";\n }\n } @else {\n @error \"Invalid device: #{$device}\";\n }\n}\n\n///\n/// A minimum media query device breakpoint\n///\n@mixin break-from-device($device) {\n @if type-of($device) == string {\n $device: $device,;\n }\n @if type-of($device) == list {\n $breakpoint: break-select-device($device);\n $min: nth($breakpoint, 1);\n @media screen and (min-width: $min) {\n @content;\n }\n } @else {\n @error \"Invalid device: #{$device}\";\n }\n}\n\n///\n/// A maximum media query device breakpoint\n///\n@mixin break-to-device($device) {\n @if type-of($device) == string {\n $device: $device,;\n }\n @if type-of($device) == list {\n $breakpoint: break-select-device($device);\n $max: nth($breakpoint, 2);\n @media screen and (max-width: $max) {\n @content;\n }\n } @else {\n @error \"Invalid device: #{$device}\";\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or 
substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Variables\n// ----------------------------------------------------------------------------\n\n// Active (toggled) drawer\n$md-toggle__drawer--checked:\n \"[data-md-toggle=\\\"drawer\\\"]:checked ~\";\n\n// ----------------------------------------------------------------------------\n// Rules: base grid and containers\n// ----------------------------------------------------------------------------\n\n// Stretch container to viewport and set base font-size for simple calculations\n// based on relative ems (rems)\nhtml {\n height: 100%;\n // Hack: some browsers on some operating systems don't account for scroll\n // bars when firing media queries, so we need to do this for safety. This\n // currently impacts the table of contents component between 1220 and 1234px\n // and is to current knowledge not fixable.\n overflow-x: hidden;\n // Hack: normally, we would set the base font-size to 62.5%, so we can base\n // all calculations on 10px, but Chromium and Chrome define a minimal font\n // size of 12 if the system language is set to Chinese. For this reason we\n // just double the font-size, set it to 20px which seems to do the trick.\n //\n // See https://github.com/squidfunk/mkdocs-material/issues/911\n font-size: 125%;\n background-color: var(--md-default-bg-color);\n\n // [screen medium +]: Set base font-size to 11px\n @include break-from-device(screen medium) {\n font-size: 137.50%;\n }\n\n // [screen large +]: Set base font-size to 12px\n @include break-from-device(screen large) {\n font-size: 150%;\n }\n}\n\n// Stretch body to container and leave room for footer\nbody {\n position: relative;\n display: flex;\n flex-direction: column;\n width: 100%;\n min-height: 100%;\n // Hack: reset font-size to 10px, so the spacing for all inline elements is\n // correct again. Otherwise the spacing would be based on 20px.\n font-size: 0.5rem; // stylelint-disable-line unit-whitelist\n\n // [tablet portrait -]: Lock body to disable scroll bubbling\n @include break-to-device(tablet portrait) {\n\n // Lock body to viewport height (e.g. 
in search mode)\n &[data-md-state=\"lock\"] {\n position: fixed;\n }\n }\n\n // Hack: we must not use flex, or Firefox will only print the first page\n // see https://mzl.la/39DgR3m\n @media print {\n display: block;\n }\n}\n\n// Horizontal separators\nhr {\n display: block;\n height: px2rem(1px);\n padding: 0;\n border: 0;\n}\n\n// Template-wide grid\n.md-grid {\n max-width: px2rem(1220px);\n margin-right: auto;\n margin-left: auto;\n}\n\n// Content wrapper\n.md-container {\n display: flex;\n flex-direction: column;\n flex-grow: 1;\n\n // Hack: we must not use flex, or Firefox will only print the first page\n // see https://mzl.la/39DgR3m\n @media print {\n display: block;\n }\n}\n\n// The main content should stretch to maximum height in the table\n.md-main {\n flex-grow: 1;\n\n // Increase top spacing of content area to give typography more room\n &__inner {\n display: flex;\n height: 100%;\n margin-top: px2rem(24px + 6px);\n }\n}\n\n// Apply ellipsis in case of overflowing text\n.md-ellipsis {\n display: block;\n overflow: hidden;\n white-space: nowrap;\n text-overflow: ellipsis;\n}\n\n// ----------------------------------------------------------------------------\n// Rules: navigational elements\n// ----------------------------------------------------------------------------\n\n// Toggle checkbox\n.md-toggle {\n display: none;\n}\n\n// Overlay below expanded drawer\n.md-overlay {\n position: fixed;\n top: 0;\n z-index: 3;\n width: 0;\n height: 0;\n background-color: var(--md-default-fg-color--light);\n opacity: 0;\n transition:\n width 0ms 250ms,\n height 0ms 250ms,\n opacity 250ms;\n\n // [tablet -]: Trigger overlay\n @include break-to-device(tablet) {\n\n // Expanded drawer\n #{$md-toggle__drawer--checked} & {\n width: 100%;\n height: 100%;\n opacity: 1;\n transition:\n width 0ms,\n height 0ms,\n opacity 250ms;\n }\n }\n}\n\n// ----------------------------------------------------------------------------\n// Rules: skip link\n// ----------------------------------------------------------------------------\n\n// Skip link\n.md-skip {\n position: fixed;\n // Hack: if we don't set the negative z-index, the skip link will induce the\n // creation of new layers when code blocks are near the header on scrolling\n z-index: -1;\n margin: px2rem(10px);\n padding: px2rem(6px) px2rem(10px);\n color: var(--md-default-bg-color);\n font-size: ms(-1);\n background-color: var(--md-default-fg-color);\n border-radius: px2rem(2px);\n transform: translateY(px2rem(8px));\n opacity: 0;\n\n // Show skip link on focus\n &:focus {\n z-index: 10;\n transform: translateY(0);\n opacity: 1;\n transition:\n transform 250ms cubic-bezier(0.4, 0, 0.2, 1),\n opacity 175ms 75ms;\n }\n}\n\n// ----------------------------------------------------------------------------\n// Rules: print styles\n// ----------------------------------------------------------------------------\n\n// Add margins to page\n@page {\n margin: 25mm;\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included 
in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Announcement bar\n.md-announce {\n overflow: auto;\n background-color: var(--md-default-fg-color);\n\n // Actual content\n &__inner {\n margin: px2rem(12px) auto;\n padding: 0 px2rem(16px);\n color: var(--md-default-bg-color);\n font-size: px2rem(14px);\n }\n\n // Hide for print\n @media print {\n display: none;\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Scoped in typesetted content to match specificity of regular content\n.md-typeset {\n\n // Button\n .md-button {\n display: inline-block;\n padding: px2em(10px, 16px) px2em(32px, 16px);\n color: var(--md-primary-fg-color);\n font-weight: 700;\n border: px2rem(2px) solid currentColor;\n border-radius: px2rem(2px);\n transition:\n color 125ms,\n background-color 125ms,\n border-color 125ms;\n\n // Primary button\n &--primary {\n color: var(--md-primary-bg-color);\n background-color: var(--md-primary-fg-color);\n border-color: var(--md-primary-fg-color);\n }\n\n // Focused or hovered button\n &:focus,\n &:hover {\n color: var(--md-accent-bg-color);\n background-color: var(--md-accent-fg-color);\n border-color: var(--md-accent-fg-color);\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Copy to clipboard\n.md-clipboard {\n position: absolute;\n top: px2rem(8px);\n right: px2em(8px, 16px);\n z-index: 1;\n width: px2em(24px, 16px);\n height: px2em(24px, 16px);\n color: var(--md-default-fg-color--lightest);\n border-radius: px2rem(2px);\n cursor: pointer;\n transition: color 125ms;\n\n // Hide for print\n @media print {\n display: none;\n }\n\n // Slightly smaller icon\n svg {\n width: px2em(18px, 16px);\n height: px2em(18px, 16px);\n }\n\n // Show on container hover\n pre:hover & {\n color: var(--md-default-fg-color--light);\n }\n\n // Focused or hovered icon\n pre &:focus,\n pre &:hover {\n color: var(--md-accent-fg-color);\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Content container\n.md-content {\n flex: 1;\n max-width: 100%;\n\n // [tablet landscape]: Decrease horizontal width\n @include break-at-device(tablet landscape) {\n max-width: calc(100% - #{px2rem(242px)});\n }\n\n // [screen +]: Decrease horizontal width\n @include break-from-device(screen) {\n max-width: calc(100% - #{px2rem(242px)} * 2);\n }\n\n // Define spacing\n &__inner {\n margin: 0 px2rem(16px) px2rem(24px);\n padding-top: px2rem(12px);\n\n // [screen +]: Increase horizontal spacing\n @include break-from-device(screen) {\n margin-right: px2rem(24px);\n margin-left: px2rem(24px);\n }\n\n // Hack: add pseudo element for spacing, as the overflow of the content\n // container may not be hidden due to an imminent offset error on targets\n &::before {\n display: block;\n height: px2rem(8px);\n content: \"\";\n }\n\n // Hack: remove bottom spacing of last element, due to margin collapse\n > :last-child {\n margin-bottom: 0;\n }\n }\n\n // Button next to the title\n &__button {\n float: right;\n margin: px2rem(8px) 0;\n margin-left: px2rem(8px);\n padding: 0;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n float: left;\n margin-right: px2rem(8px);\n margin-left: initial;\n\n // Flip icon vertically\n svg {\n transform: scaleX(-1);\n }\n }\n\n // Override default link color for icons\n .md-typeset & {\n color: var(--md-default-fg-color--lighter);\n }\n\n // Align text with icon\n svg {\n display: inline;\n vertical-align: top;\n }\n\n // Hide for print\n @media print {\n display: none;\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Dialog rendered as snackbar\n.md-dialog {\n @include z-depth(2);\n\n position: fixed;\n right: px2rem(16px);\n bottom: px2rem(16px);\n left: initial;\n z-index: 2;\n display: block;\n min-width: px2rem(222px);\n padding: px2rem(8px) px2rem(12px);\n color: var(--md-default-bg-color);\n font-size: px2rem(14px);\n background: var(--md-default-fg-color);\n border: none;\n border-radius: px2rem(2px);\n transform: translateY(100%);\n opacity: 0;\n transition:\n transform 0ms 400ms,\n opacity 400ms;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: initial;\n left: px2rem(16px);\n }\n\n // Show open dialog\n &[data-md-state=\"open\"] {\n transform: translateY(0);\n opacity: 1;\n transition:\n transform 400ms cubic-bezier(0.075, 0.85, 0.175, 1),\n opacity 400ms;\n }\n\n // Hide for print\n @media print {\n display: none;\n }\n}\n","//\n// Name: Material Shadows\n// Description: Mixins for Material Design Shadows.\n// Version: 3.0.1\n//\n// Author: Denis Malinochkin\n// Git: https://github.com/mrmlnc/material-shadows\n//\n// twitter: @mrmlnc\n//\n// ------------------------------------\n\n\n// Mixins\n// ------------------------------------\n\n@mixin z-depth-transition() {\n transition: box-shadow .28s cubic-bezier(.4, 0, .2, 1);\n}\n\n@mixin z-depth-focus() {\n box-shadow: 0 0 8px rgba(0, 0, 0, .18), 0 8px 16px rgba(0, 0, 0, .36);\n}\n\n@mixin z-depth-2dp() {\n box-shadow: 0 2px 2px 0 rgba(0, 0, 0, .14),\n 0 1px 5px 0 rgba(0, 0, 0, .12),\n 0 3px 1px -2px rgba(0, 0, 0, .2);\n}\n\n@mixin z-depth-3dp() {\n box-shadow: 0 3px 4px 0 rgba(0, 0, 0, .14),\n 0 1px 8px 0 rgba(0, 0, 0, .12),\n 0 3px 3px -2px rgba(0, 0, 0, .4);\n}\n\n@mixin z-depth-4dp() {\n box-shadow: 0 4px 5px 0 rgba(0, 0, 0, .14),\n 0 1px 10px 0 rgba(0, 0, 0, .12),\n 0 2px 4px -1px rgba(0, 0, 0, .4);\n}\n\n@mixin z-depth-6dp() {\n box-shadow: 0 6px 10px 0 rgba(0, 0, 0, .14),\n 0 1px 18px 0 rgba(0, 0, 0, .12),\n 0 3px 5px -1px rgba(0, 0, 0, .4);\n}\n\n@mixin z-depth-8dp() {\n box-shadow: 0 8px 10px 1px rgba(0, 0, 0, .14),\n 0 3px 14px 2px rgba(0, 0, 0, .12),\n 0 5px 5px -3px rgba(0, 0, 0, .4);\n}\n\n@mixin z-depth-16dp() {\n box-shadow: 0 16px 24px 2px rgba(0, 0, 0, .14),\n 0 6px 30px 5px rgba(0, 0, 0, .12),\n 0 8px 10px -5px rgba(0, 0, 0, .4);\n}\n\n@mixin z-depth-24dp() {\n box-shadow: 0 9px 46px 8px rgba(0, 0, 0, .14),\n 0 24px 38px 3px rgba(0, 0, 0, .12),\n 0 11px 15px -7px rgba(0, 0, 0, .4);\n}\n\n@mixin z-depth($dp: 2) {\n @if $dp == 2 {\n @include z-depth-2dp();\n } @else if $dp == 3 {\n @include z-depth-3dp();\n } @else if $dp == 4 {\n @include z-depth-4dp();\n } @else if $dp == 6 {\n @include z-depth-6dp();\n } @else if $dp == 8 {\n @include z-depth-8dp();\n } @else if $dp == 16 {\n @include z-depth-16dp();\n } @else if $dp == 24 {\n @include z-depth-24dp();\n }\n}\n\n\n// Class generator\n// ------------------------------------\n\n@mixin z-depth-classes($transition: false, $focus: false) {\n @if $transition == true {\n &-transition {\n @include z-depth-transition();\n }\n }\n\n @if $focus == true {\n &-focus {\n @include z-depth-focus();\n }\n }\n\n // The available 
values for the shadow depth\n @each $depth in 2, 3, 4, 6, 8, 16, 24 {\n &-#{$depth}dp {\n @include z-depth($depth);\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Application header (stays always on top)\n.md-header {\n position: sticky;\n top: 0;\n right: 0;\n left: 0;\n z-index: 2;\n height: px2rem(48px);\n color: var(--md-primary-bg-color);\n background-color: var(--md-primary-fg-color);\n // Hack: reduce jitter by adding a transparent box shadow of the same size\n // so the size of the layer doesn't change during animation\n box-shadow:\n 0 0 px2rem(4px) rgba(0, 0, 0, 0),\n 0 px2rem(4px) px2rem(8px) rgba(0, 0, 0, 0);\n transition:\n color 250ms,\n background-color 250ms;\n\n // Always hide shadow, in case JavaScript is not available\n .no-js & {\n box-shadow: none;\n transition: none;\n }\n\n // Show and animate shadow\n &[data-md-state=\"shadow\"] {\n box-shadow:\n 0 0 px2rem(4px) rgba(0, 0, 0, 0.1),\n 0 px2rem(4px) px2rem(8px) rgba(0, 0, 0, 0.2);\n transition:\n color 250ms,\n background-color 250ms,\n box-shadow 250ms;\n }\n\n // Hide for print\n @media print {\n display: none;\n }\n}\n\n// Navigation within header\n.md-header-nav {\n display: flex;\n padding: 0 px2rem(4px);\n\n // Icon buttons\n &__button {\n position: relative;\n z-index: 1;\n margin: px2rem(4px);\n padding: px2rem(8px);\n cursor: pointer;\n transition: opacity 250ms;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n\n // Flip icon vertically\n svg {\n transform: scaleX(-1);\n }\n }\n\n // Focused or hovered icon\n &:focus,\n &:hover {\n opacity: 0.7;\n }\n\n // Logo\n &.md-logo {\n margin: px2rem(4px);\n padding: px2rem(8px);\n\n // Image or icon\n img,\n svg {\n display: block;\n width: px2rem(24px);\n height: px2rem(24px);\n fill: currentColor;\n }\n }\n\n // Hide search icon, if JavaScript is not available.\n .no-js &[for=\"__search\"] {\n display: none;\n }\n\n // [tablet landscape +]: Hide the search button\n @include break-from-device(tablet landscape) {\n\n // Search button\n &[for=\"__search\"] {\n display: none;\n }\n }\n\n // [tablet -]: Hide the logo\n @include break-to-device(tablet) {\n\n // Logo\n &.md-logo {\n display: none;\n }\n }\n\n // [screen +]: Hide the menu button\n @include break-from-device(screen) 
{\n\n // Menu button\n &[for=\"__drawer\"] {\n display: none;\n }\n }\n }\n\n // Header topics\n &__topic {\n position: absolute;\n width: 100%;\n transition:\n transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),\n opacity 150ms;\n\n // Page title\n & + & {\n z-index: -1;\n transform: translateX(px2rem(25px));\n opacity: 0;\n transition:\n transform 400ms cubic-bezier(1, 0.7, 0.1, 0.1),\n opacity 150ms;\n pointer-events: none;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n transform: translateX(px2rem(-25px));\n }\n }\n\n // Induce ellipsis, if no JavaScript is available\n .no-js & {\n position: initial;\n }\n\n // Hide page title as it is invisible anyway and will overflow the header\n .no-js & + & {\n display: none;\n }\n }\n\n // Header title - set line height to match icon for correct alignment\n &__title {\n flex-grow: 1;\n padding: 0 px2rem(20px);\n font-size: px2rem(18px);\n line-height: px2rem(48px);\n\n // Show page title\n &[data-md-state=\"active\"] .md-header-nav__topic {\n z-index: -1;\n transform: translateX(px2rem(-25px));\n opacity: 0;\n transition:\n transform 400ms cubic-bezier(1, 0.7, 0.1, 0.1),\n opacity 150ms;\n pointer-events: none;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n transform: translateX(px2rem(25px));\n }\n\n // Page title\n & + .md-header-nav__topic {\n z-index: 0;\n transform: translateX(0);\n opacity: 1;\n transition:\n transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),\n opacity 150ms;\n pointer-events: initial;\n }\n }\n\n // Patch ellipsis\n > .md-header-nav__ellipsis {\n position: relative;\n width: 100%;\n height: 100%;\n }\n }\n\n // Repository containing source\n &__source {\n display: none;\n\n // [tablet landscape +]: Show the reposistory from tablet\n @include break-from-device(tablet landscape) {\n display: block;\n width: px2rem(234px);\n max-width: px2rem(234px);\n margin-left: px2rem(20px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n margin-right: px2rem(20px);\n margin-left: initial;\n }\n }\n\n // [screen +]: Increase spacing of search bar\n @include break-from-device(screen) {\n margin-left: px2rem(28px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n margin-right: px2rem(28px);\n }\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Hero teaser\n.md-hero {\n overflow: hidden;\n color: var(--md-primary-bg-color);\n font-size: ms(1);\n background-color: var(--md-primary-fg-color);\n transition: background 250ms;\n\n // Inner wrapper\n &__inner {\n margin-top: px2rem(20px);\n padding: px2rem(16px) px2rem(16px) px2rem(8px);\n transition:\n transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),\n opacity 250ms;\n transition-delay: 100ms;\n\n // [tablet -]: Compensate for missing tabs\n @include break-to-device(tablet) {\n margin-top: px2rem(48px);\n margin-bottom: px2rem(24px);\n }\n\n // Fade-out tabs background upon scrolling\n [data-md-state=\"hidden\"] & {\n transform: translateY(px2rem(12.5px));\n opacity: 0;\n transition:\n transform 0ms 400ms,\n opacity 100ms 0ms;\n pointer-events: none;\n }\n\n // Adjust bottom spacing if there are no tabs\n .md-hero--expand & {\n margin-bottom: px2rem(24px);\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Application footer\n.md-footer {\n color: var(--md-default-bg-color);\n background-color: var(--md-default-fg-color);\n\n // Hide for print\n @media print {\n display: none;\n }\n}\n\n// Navigation within footer\n.md-footer-nav {\n\n // Set spacing\n &__inner {\n padding: px2rem(4px);\n overflow: auto;\n }\n\n // Links to previous and next page\n &__link {\n display: flex;\n padding-top: px2rem(28px);\n padding-bottom: px2rem(8px);\n transition: opacity 250ms;\n\n // [tablet +]: Set proportional width\n @include break-from-device(tablet) {\n width: 50%;\n }\n\n // Focused or hovered links\n &:focus,\n &:hover {\n opacity: 0.7;\n }\n\n // Link to previous page\n &--prev {\n float: left;\n width: 25%;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n float: right;\n\n // Flip icon vertically\n svg {\n transform: scaleX(-1);\n }\n }\n\n // Title\n .md-footer-nav__title {\n\n // [mobile -]: Hide title for previous page\n @include break-to-device(mobile) {\n display: none;\n }\n }\n }\n\n // Link to next page\n &--next {\n float: right;\n width: 75%;\n text-align: right;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n float: left;\n text-align: left;\n\n // Flip icon vertically\n svg {\n transform: scaleX(-1);\n }\n }\n }\n }\n\n // Link title - set line height to match icon for correct alignment\n &__title {\n position: relative;\n flex-grow: 1;\n max-width: calc(100% - #{px2rem(48px)});\n padding: 0 px2rem(20px);\n font-size: px2rem(18px);\n line-height: px2rem(48px);\n }\n\n // Link button\n &__button {\n margin: px2rem(4px);\n padding: px2rem(8px);\n }\n\n // Link direction\n &__direction {\n position: absolute;\n right: 0;\n left: 0;\n margin-top: px2rem(-20px);\n padding: 0 px2rem(20px);\n color: var(--md-default-bg-color--light);\n font-size: ms(-1);\n }\n}\n\n// Non-navigational information\n.md-footer-meta {\n background-color: var(--md-default-fg-color--lighter);\n\n // Set spacing\n &__inner {\n display: flex;\n flex-wrap: wrap;\n justify-content: space-between;\n padding: px2rem(4px);\n }\n\n // Use a decent color for non-hovered links and ensure specificity\n html &.md-typeset a {\n color: var(--md-default-bg-color--light);\n\n // Focused or hovered link\n &:focus,\n &:hover {\n color: var(--md-default-bg-color);\n }\n }\n}\n\n// Copyright and theme information\n.md-footer-copyright {\n width: 100%;\n margin: auto px2rem(12px);\n padding: px2rem(8px) 0;\n color: var(--md-default-bg-color--lighter);\n font-size: ms(-1);\n\n // [tablet portrait +]: Show next to social media links\n @include break-from-device(tablet portrait) {\n width: auto;\n }\n\n // Highlight copyright information\n &__highlight {\n color: var(--md-default-bg-color--light);\n }\n}\n\n// Social links\n.md-footer-social {\n margin: 0 px2rem(8px);\n padding: px2rem(4px) 0 px2rem(12px);\n\n // [tablet portrait +]: Show next to copyright information\n @include break-from-device(tablet portrait) {\n padding: px2rem(12px) 0;\n }\n\n // Link with icon\n &__link {\n display: inline-block;\n width: px2rem(32px);\n height: px2rem(32px);\n text-align: 
center;\n\n // Adjust line-height to match height for correct alignment\n &::before {\n line-height: 1.9;\n }\n\n // Social icon\n svg {\n max-height: px2rem(16px);\n vertical-align: -25%;\n fill: currentColor;\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Navigation container\n.md-nav {\n font-size: px2rem(14px);\n line-height: 1.3;\n\n // List title\n &__title {\n display: block;\n padding: 0 px2rem(12px);\n overflow: hidden;\n font-weight: 700;\n text-overflow: ellipsis;\n\n // Hide buttons by default\n .md-nav__button {\n display: none;\n\n // Stretch images\n img {\n width: 100%;\n height: auto;\n }\n\n // Logo\n &.md-logo {\n\n // Image or icon\n img,\n svg {\n display: block;\n width: px2rem(48px);\n height: px2rem(48px);\n }\n\n // Icon\n svg {\n fill: currentColor;\n }\n }\n }\n }\n\n // List of items\n &__list {\n margin: 0;\n padding: 0;\n list-style: none;\n }\n\n // List item\n &__item {\n padding: 0 px2rem(12px);\n\n // Add bottom spacing to last item\n &:last-child {\n padding-bottom: px2rem(12px);\n }\n\n // 2nd+ level items\n & & {\n padding-right: 0;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding-right: px2rem(12px);\n padding-left: 0;\n }\n\n // Remove bottom spacing for nested items\n &:last-child {\n padding-bottom: 0;\n }\n }\n }\n\n // Link inside item\n &__link {\n display: block;\n margin-top: 0.625em;\n overflow: hidden;\n text-overflow: ellipsis;\n cursor: pointer;\n transition: color 125ms;\n scroll-snap-align: start;\n\n // Hide link to table of contents by default - this will only match the\n // table of contents inside the drawer below and including tablet portrait\n html &[for=\"__toc\"] {\n display: none;\n\n // Hide table of contents by default\n & ~ .md-nav {\n display: none;\n }\n }\n\n // Blurred link\n &[data-md-state=\"blur\"] {\n color: var(--md-default-fg-color--light);\n }\n\n // Active link\n .md-nav__item &--active {\n color: var(--md-primary-fg-color);\n }\n\n // Reset active color for nested list titles\n .md-nav__item--nested > & {\n color: inherit;\n }\n\n // Focused or hovered link\n &:focus,\n &:hover {\n color: var(--md-accent-fg-color);\n }\n }\n\n // Repository containing source\n &__source {\n display: none;\n }\n\n // [tablet -]: 
Layered navigation\n @include break-to-device(tablet) {\n background-color: var(--md-default-bg-color);\n\n // Stretch primary navigation to drawer\n &--primary,\n &--primary .md-nav {\n position: absolute;\n top: 0;\n right: 0;\n left: 0;\n z-index: 1;\n display: flex;\n flex-direction: column;\n height: 100%;\n }\n\n // Adjust styles for primary navigation\n &--primary {\n\n // List title and item\n .md-nav__title,\n .md-nav__item {\n font-size: px2rem(16px);\n line-height: 1.5;\n }\n\n // List title\n .md-nav__title {\n position: relative;\n height: px2rem(112px);\n padding: px2rem(60px) px2rem(16px) px2rem(4px);\n color: var(--md-default-fg-color--light);\n font-weight: 400;\n line-height: px2rem(48px);\n white-space: nowrap;\n background-color: var(--md-default-fg-color--lightest);\n cursor: pointer;\n\n // Icon\n .md-nav__icon {\n position: absolute;\n top: px2rem(8px);\n left: px2rem(8px);\n display: block;\n width: px2rem(24px);\n height: px2rem(24px);\n margin: px2rem(4px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: px2rem(8px);\n left: initial;\n }\n }\n\n // Main lists\n ~ .md-nav__list {\n overflow-y: auto;\n background-color: var(--md-default-bg-color);\n box-shadow:\n inset 0 px2rem(1px) 0 var(--md-default-fg-color--lightest);\n scroll-snap-type: y mandatory;\n touch-action: pan-y;\n\n // Remove border for first list item\n > .md-nav__item:first-child {\n border-top: 0;\n }\n }\n\n // Site title in main navigation\n &[for=\"__drawer\"] {\n position: relative;\n color: var(--md-primary-bg-color);\n background-color: var(--md-primary-fg-color);\n\n // Site logo\n .md-nav__button {\n position: absolute;\n top: px2rem(4px);\n left: px2rem(4px);\n display: block;\n margin: px2rem(4px);\n padding: px2rem(8px);\n font-size: px2rem(48px);\n }\n }\n }\n\n // Adjust for right-to-left languages\n html [dir=\"rtl\"] & .md-nav__title {\n\n // Site title in main navigation\n &[for=\"__drawer\"] .md-nav__button {\n right: px2rem(4px);\n left: initial;\n }\n }\n\n // List of items\n .md-nav__list {\n flex: 1;\n }\n\n // List item\n .md-nav__item {\n padding: 0;\n border-top: px2rem(1px) solid var(--md-default-fg-color--lightest);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding: 0;\n }\n\n // Increase spacing to account for icon\n &--nested > .md-nav__link {\n padding-right: px2rem(48px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding-right: px2rem(16px);\n padding-left: px2rem(48px);\n }\n }\n\n // Active parent item\n &--active > .md-nav__link {\n color: var(--md-primary-fg-color);\n\n // Focused or hovered linl\n &:focus,\n &:hover {\n color: var(--md-accent-fg-color);\n }\n }\n }\n\n // Link inside item\n .md-nav__link {\n position: relative;\n margin-top: 0;\n padding: px2rem(12px) px2rem(16px);\n\n // Icon\n .md-nav__icon {\n position: absolute;\n top: 50%;\n right: px2rem(12px);\n margin-top: px2rem(-12px);\n color: inherit;\n font-size: px2rem(24px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: initial;\n left: px2rem(12px);\n }\n }\n }\n\n // Icon\n .md-nav__icon {\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n\n // Flip icon vertically\n svg {\n transform: scale(-1);\n }\n }\n }\n\n // Table of contents inside navigation\n .md-nav--secondary {\n\n // Set links to static to avoid unnecessary layering\n .md-nav__link {\n position: static;\n }\n\n // Set nested navigation for table of contents to static\n .md-nav {\n position: static;\n background-color: 
transparent;\n\n // 3rd level link\n .md-nav__link {\n padding-left: px2rem(28px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding-right: px2rem(28px);\n padding-left: initial;\n }\n }\n\n // 4th level link\n .md-nav .md-nav__link {\n padding-left: px2rem(40px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding-right: px2rem(40px);\n padding-left: initial;\n }\n }\n\n // 5th level link\n .md-nav .md-nav .md-nav__link {\n padding-left: px2rem(52px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding-right: px2rem(52px);\n padding-left: initial;\n }\n }\n\n // 6th level link\n .md-nav .md-nav .md-nav .md-nav__link {\n padding-left: px2rem(64px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding-right: px2rem(64px);\n padding-left: initial;\n }\n }\n }\n }\n }\n\n // Hide nested navigation by default\n .md-nav__toggle ~ & {\n display: flex;\n transform: translateX(100%);\n opacity: 0;\n transition:\n transform 250ms cubic-bezier(0.8, 0, 0.6, 1),\n opacity 125ms 50ms;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n transform: translateX(-100%);\n }\n }\n\n // Expand nested navigation, if toggle is checked\n .md-nav__toggle:checked ~ & {\n transform: translateX(0);\n opacity: 1;\n transition:\n transform 250ms cubic-bezier(0.4, 0, 0.2, 1),\n opacity 125ms 125ms;\n\n // Hack: reduce jitter\n > .md-nav__list {\n backface-visibility: hidden;\n }\n }\n }\n\n // [tablet portrait -]: Show table of contents in drawer\n @include break-to-device(tablet portrait) {\n\n // Show link to table of contents - higher specificity is necessary to\n // display the table of contents inside the drawer\n html &__link[for=\"__toc\"] {\n display: block;\n padding-right: px2rem(48px);\n\n // Hide link to current item\n + .md-nav__link {\n display: none;\n }\n\n // Show table of contents\n & ~ .md-nav {\n display: flex;\n }\n }\n\n // Adjust for right-to-left languages\n html [dir=\"rtl\"] &__link {\n padding-right: px2rem(16px);\n padding-left: px2rem(48px);\n }\n\n // Repository containing source\n &__source {\n display: block;\n padding: 0 px2rem(4px);\n color: var(--md-primary-bg-color);\n background-color: var(--md-primary-fg-color--dark);\n }\n }\n\n // [tablet landscape +]: Tree-like navigation\n @include break-from-device(tablet landscape) {\n\n // List title\n &--secondary .md-nav__title {\n\n // Snap to table of contents title\n &[for=\"__toc\"] {\n scroll-snap-align: start;\n }\n\n // Hide icon\n .md-nav__icon {\n display: none;\n }\n }\n }\n\n // [screen +]: Tree-like navigation\n @include break-from-device(screen) {\n transition: max-height 250ms cubic-bezier(0.86, 0, 0.07, 1);\n\n // List title\n &--primary .md-nav__title {\n\n // Snap to site title\n &[for=\"__drawer\"] {\n scroll-snap-align: start;\n }\n\n // Hide icon\n .md-nav__icon {\n display: none;\n }\n }\n\n // Hide nested navigation by default\n .md-nav__toggle ~ & {\n display: none;\n }\n\n // Show nested navigation, if toggle is checked\n .md-nav__toggle:checked ~ & {\n display: block;\n }\n\n // Hide titles for nested navigation\n &__item--nested > .md-nav > &__title {\n display: none;\n }\n\n // Icon\n &__icon {\n float: right;\n height: px2rem(18px);\n transition: transform 250ms;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n float: left;\n transform: rotate(180deg);\n }\n\n // Inline icon and adjust to match font size\n svg {\n display: inline-block;\n width: px2rem(18px);\n height: px2rem(18px);\n vertical-align: 
px2rem(-2px);\n }\n\n // Rotate icon for expanded lists\n .md-nav__item--nested .md-nav__toggle:checked ~ .md-nav__link & {\n transform: rotate(90deg);\n }\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Variables\n// ----------------------------------------------------------------------------\n\n// Active (toggled) search\n$md-toggle__search--checked:\n \"[data-md-toggle=\\\"search\\\"]:checked ~ .md-header\";\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Search container\n.md-search {\n position: relative;\n\n // Hide search, if JavaScript is not available.\n .no-js & {\n display: none;\n }\n\n // [tablet landscape +]: Header-embedded search\n @include break-from-device(tablet landscape) {\n padding: px2rem(4px) 0;\n }\n\n // Search modal overlay\n &__overlay {\n z-index: 1;\n opacity: 0;\n\n // [tablet portrait -]: Full-screen search bar\n @include break-to-device(tablet portrait) {\n position: absolute;\n top: px2rem(4px);\n left: px2rem(-44px);\n width: px2rem(40px);\n height: px2rem(40px);\n overflow: hidden;\n background-color: var(--md-default-bg-color);\n border-radius: px2rem(20px);\n transform-origin: center;\n transition:\n transform 300ms 100ms,\n opacity 200ms 200ms;\n pointer-events: none;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: px2rem(-44px);\n left: initial;\n }\n\n // Expanded overlay\n #{$md-toggle__search--checked} & {\n opacity: 1;\n transition:\n transform 400ms,\n opacity 100ms;\n }\n }\n\n // Set scale factors\n #{$md-toggle__search--checked} & {\n\n // [mobile portrait -]: Scale up 45 times\n @include break-to-device(mobile portrait) {\n transform: scale(45);\n }\n\n // [mobile landscape]: Scale up 60 times\n @include break-at-device(mobile landscape) {\n transform: scale(60);\n }\n\n // [tablet portrait]: Scale up 75 times\n @include break-at-device(tablet portrait) {\n transform: scale(75);\n }\n }\n\n // [tablet landscape +]: Overlay for better focus on search\n @include break-from-device(tablet landscape) {\n position: fixed;\n top: 0;\n left: 0;\n width: 0;\n height: 0;\n background-color: var(--md-default-fg-color--light);\n cursor: pointer;\n transition:\n width 0ms 250ms,\n height 0ms 250ms,\n opacity 
250ms;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: 0;\n left: initial;\n }\n\n // Expanded overlay\n #{$md-toggle__search--checked} & {\n width: 100%;\n height: 100%;\n opacity: 1;\n transition:\n width 0ms,\n height 0ms,\n opacity 250ms;\n }\n }\n }\n\n // Search modal wrapper\n &__inner {\n // Hack: reduce jitter\n backface-visibility: hidden;\n\n // [tablet portrait -]: Put search modal off-canvas by default\n @include break-to-device(tablet portrait) {\n position: fixed;\n top: 0;\n left: 100%;\n z-index: 2;\n width: 100%;\n height: 100%;\n transform: translateX(5%);\n opacity: 0;\n transition:\n right 0ms 300ms,\n left 0ms 300ms,\n transform 150ms 150ms cubic-bezier(0.4, 0, 0.2, 1),\n opacity 150ms 150ms;\n\n // Active search modal\n #{$md-toggle__search--checked} & {\n left: 0;\n transform: translateX(0);\n opacity: 1;\n transition:\n right 0ms 0ms,\n left 0ms 0ms,\n transform 150ms 150ms cubic-bezier(0.1, 0.7, 0.1, 1),\n opacity 150ms 150ms;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: 0;\n left: initial;\n }\n }\n\n // Adjust for right-to-left languages\n html [dir=\"rtl\"] & {\n right: 100%;\n left: initial;\n transform: translateX(-5%);\n }\n }\n\n // [tablet landscape +]: Header-embedded search\n @include break-from-device(tablet landscape) {\n position: relative;\n float: right;\n width: px2rem(234px);\n padding: px2rem(2px) 0;\n transition: width 250ms cubic-bezier(0.1, 0.7, 0.1, 1);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n float: left;\n }\n }\n\n // Set maximum width\n #{$md-toggle__search--checked} & {\n\n // [tablet landscape]: Do not overlay title\n @include break-at-device(tablet landscape) {\n width: px2rem(468px);\n }\n\n // [screen +]: Match content width\n @include break-from-device(screen) {\n width: px2rem(688px);\n }\n }\n }\n\n // Search form\n &__form {\n position: relative;\n\n // [tablet landscape +]: Header-embedded search\n @include break-from-device(tablet landscape) {\n border-radius: px2rem(2px);\n }\n }\n\n // Search input\n &__input {\n position: relative;\n z-index: 2;\n padding: 0 px2rem(44px) 0 px2rem(72px);\n text-overflow: ellipsis;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding: 0 px2rem(72px) 0 px2rem(44px);\n }\n\n // Transition on placeholder\n &::placeholder {\n transition: color 250ms;\n }\n\n // Placeholder and icon color in active state\n ~ .md-search__icon,\n &::placeholder {\n color: var(--md-default-fg-color--light);\n }\n\n // Remove the \"x\" rendered by Internet Explorer\n &::-ms-clear {\n display: none;\n }\n\n // [tablet portrait -]: Full-screen search bar\n @include break-to-device(tablet portrait) {\n width: 100%;\n height: px2rem(48px);\n font-size: px2rem(18px);\n }\n\n // [tablet landscape +]: Header-embedded search\n @include break-from-device(tablet landscape) {\n width: 100%;\n height: px2rem(36px);\n padding-left: px2rem(44px);\n color: inherit;\n font-size: ms(0);\n background-color: var(--md-default-fg-color--lighter);\n border-radius: px2rem(2px);\n transition:\n color 250ms,\n background-color 250ms;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding-right: px2rem(44px);\n }\n\n // Icon color\n + .md-search__icon {\n color: var(--md-primary-bg-color);\n }\n\n // Placeholder color\n &::placeholder {\n color: var(--md-primary-bg-color--light);\n }\n\n // Hovered search field\n &:hover {\n background-color: var(--md-default-bg-color--lightest);\n }\n\n // Set light background on active search field\n 
#{$md-toggle__search--checked} & {\n color: var(--md-default-fg-color);\n text-overflow: clip;\n background-color: var(--md-default-bg-color);\n border-radius: px2rem(2px) px2rem(2px) 0 0;\n\n // Icon and placeholder color in active state\n + .md-search__icon,\n &::placeholder {\n color: var(--md-default-fg-color--light);\n }\n }\n }\n }\n\n // Icon\n &__icon {\n position: absolute;\n z-index: 2;\n width: px2rem(24px);\n height: px2rem(24px);\n cursor: pointer;\n transition:\n color 250ms,\n opacity 250ms;\n\n // Hovered icon\n &:hover {\n opacity: 0.7;\n }\n\n // Search icon\n &[for=\"__search\"] {\n top: px2rem(6px);\n left: px2rem(10px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: px2rem(10px);\n left: initial;\n\n // Flip icon vertically\n svg {\n transform: scaleX(-1);\n }\n }\n\n // [tablet portrait -]: Full-screen search bar\n @include break-to-device(tablet portrait) {\n top: px2rem(12px);\n left: px2rem(16px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: px2rem(16px);\n left: initial;\n }\n\n // Hide the magnifying glass (1st icon)\n svg:first-child {\n display: none;\n }\n }\n\n // [tablet landscape +]: Header-embedded search\n @include break-from-device(tablet landscape) {\n pointer-events: none;\n\n // Hide the arrow (2nd icon)\n svg:last-child {\n display: none;\n }\n }\n }\n\n // Reset button\n &[type=\"reset\"] {\n top: px2rem(6px);\n right: px2rem(10px);\n transform: scale(0.75);\n opacity: 0;\n transition:\n transform 150ms cubic-bezier(0.1, 0.7, 0.1, 1),\n opacity 150ms;\n pointer-events: none;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: initial;\n left: px2rem(10px);\n }\n\n // [tablet portrait -]: Full-screen search bar\n @include break-to-device(tablet portrait) {\n top: px2rem(12px);\n right: px2rem(16px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: initial;\n left: px2rem(16px);\n }\n }\n\n // Show reset button if search is active and input non-empty\n #{$md-toggle__search--checked}\n .md-search__input:not(:placeholder-shown) ~ & {\n transform: scale(1);\n opacity: 1;\n pointer-events: initial;\n\n // Hovered icon\n &:hover {\n opacity: 0.7;\n }\n }\n }\n }\n\n // Search output container\n &__output {\n position: absolute;\n z-index: 1;\n width: 100%;\n overflow: hidden;\n border-radius: 0 0 px2rem(2px) px2rem(2px);\n\n // [tablet portrait -]: Full-screen search bar\n @include break-to-device(tablet portrait) {\n top: px2rem(48px);\n bottom: 0;\n }\n\n // [tablet landscape +]: Header-embedded search\n @include break-from-device(tablet landscape) {\n top: px2rem(38px);\n opacity: 0;\n transition: opacity 400ms;\n\n // Show search output in active state\n #{$md-toggle__search--checked} & {\n @include z-depth(6);\n\n opacity: 1;\n }\n }\n }\n\n // Wrapper for scrolling on overflow\n &__scrollwrap {\n height: 100%;\n overflow-y: auto;\n background-color: var(--md-default-bg-color);\n box-shadow: inset 0 px2rem(1px) 0 var(--md-default-fg-color--lightest);\n // Hack: reduce jitter\n backface-visibility: hidden;\n scroll-snap-type: y mandatory;\n touch-action: pan-y;\n\n // Mitigiate excessive repaints on non-retina devices\n @media (max-resolution: 1dppx) {\n transform: translateZ(0);\n }\n\n // [tablet landscape]: Set absolute width to omit unnecessary reflow\n @include break-at-device(tablet landscape) {\n width: px2rem(468px);\n }\n\n // [screen +]: Set absolute width to omit unnecessary reflow\n @include break-from-device(screen) {\n width: px2rem(688px);\n }\n\n 
// [tablet landscape +]: Limit height to viewport\n @include break-from-device(tablet landscape) {\n max-height: 0;\n\n // Expand in active state\n #{$md-toggle__search--checked} & {\n max-height: 75vh;\n }\n\n // Override native scrollbar styles\n &::-webkit-scrollbar {\n width: px2rem(4px);\n height: px2rem(4px);\n }\n\n // Scrollbar thumb\n &::-webkit-scrollbar-thumb {\n background-color: var(--md-default-fg-color--lighter);\n\n // Hovered scrollbar thumb\n &:hover {\n background-color: var(--md-accent-fg-color);\n }\n }\n }\n }\n}\n\n// Search result\n.md-search-result {\n color: var(--md-default-fg-color);\n word-break: break-word;\n\n // Search metadata\n &__meta {\n padding: 0 px2rem(16px);\n color: var(--md-default-fg-color--light);\n font-size: ms(-1);\n line-height: px2rem(36px);\n background-color: var(--md-default-fg-color--lightest);\n scroll-snap-align: start;\n\n // [tablet landscape +]: Increase left indent\n @include break-from-device(tablet landscape) {\n padding-left: px2rem(44px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding-right: px2rem(44px);\n padding-left: initial;\n }\n }\n }\n\n // List of items\n &__list {\n margin: 0;\n padding: 0;\n list-style: none;\n border-top: px2rem(1px) solid var(--md-default-fg-color--lightest);\n }\n\n // List item\n &__item {\n box-shadow: 0 px2rem(-1px) 0 var(--md-default-fg-color--lightest);\n }\n\n // Link inside item\n &__link {\n display: block;\n outline: 0;\n transition: background 250ms;\n scroll-snap-align: start;\n\n // Focused or hovered link\n &:focus,\n &:hover {\n background-color: var(--md-accent-fg-color--transparent);\n\n // Slightly transparent icon\n .md-search-result__article::before {\n opacity: 0.7;\n }\n }\n\n // Add a little spacing on the teaser of the last link\n &:last-child .md-search-result__teaser {\n margin-bottom: px2rem(12px);\n }\n }\n\n // Article - document or section\n &__article {\n position: relative;\n padding: 0 px2rem(16px);\n overflow: auto;\n\n // [tablet landscape +]: Increase left indent\n @include break-from-device(tablet landscape) {\n padding-left: px2rem(44px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding-right: px2rem(44px);\n padding-left: px2rem(16px);\n }\n }\n\n // Document\n &--document {\n\n // Title\n .md-search-result__title {\n margin: px2rem(11px) 0;\n font-weight: 400;\n font-size: ms(0);\n line-height: 1.4;\n }\n }\n }\n\n // Icon\n &__icon {\n position: absolute;\n left: 0;\n margin: px2rem(2px);\n padding: px2rem(8px);\n color: var(--md-default-fg-color--light);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: 0;\n left: initial;\n\n // Flip icon vertically\n svg {\n transform: scaleX(-1);\n }\n }\n\n // [tablet portrait -]: Hide page icon\n @include break-to-device(tablet portrait) {\n display: none;\n }\n }\n\n // Title\n &__title {\n margin: 0.5em 0;\n font-weight: 700;\n font-size: ms(-1);\n line-height: 1.4;\n }\n\n // stylelint-disable value-no-vendor-prefix, property-no-vendor-prefix\n\n // Teaser\n &__teaser {\n display: -webkit-box;\n max-height: px2rem(33px);\n margin: 0.5em 0;\n overflow: hidden;\n color: var(--md-default-fg-color--light);\n font-size: ms(-1);\n line-height: 1.4;\n text-overflow: ellipsis;\n -webkit-box-orient: vertical;\n -webkit-line-clamp: 2;\n\n // [mobile -]: Increase number of lines\n @include break-to-device(mobile) {\n max-height: px2rem(50px);\n -webkit-line-clamp: 3;\n }\n\n // [tablet landscape]: Increase number of lines\n @include break-at-device(tablet 
landscape) {\n max-height: px2rem(50px);\n -webkit-line-clamp: 3;\n }\n }\n\n // stylelint-enable value-no-vendor-prefix, property-no-vendor-prefix\n\n // Search term highlighting\n em {\n font-weight: 700;\n font-style: normal;\n text-decoration: underline;\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Variables\n// ----------------------------------------------------------------------------\n\n// Active (toggled) drawer\n$md-toggle__drawer--checked:\n \"[data-md-toggle=\\\"drawer\\\"]:checked ~ .md-container\";\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Sidebar container\n.md-sidebar {\n position: sticky;\n top: px2rem(48px);\n width: px2rem(242px);\n padding: px2rem(24px) 0;\n overflow: hidden;\n\n // Hide for print\n @media print {\n display: none;\n }\n\n // [tablet -]: Convert navigation to drawer\n @include break-to-device(tablet) {\n\n // Render primary sidebar as a slideout container\n &--primary {\n position: fixed;\n top: 0;\n left: px2rem(-242px);\n z-index: 3;\n width: px2rem(242px);\n height: 100%;\n background-color: var(--md-default-bg-color);\n transform: translateX(0);\n transition:\n transform 250ms cubic-bezier(0.4, 0, 0.2, 1),\n box-shadow 250ms;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: px2rem(-242px);\n left: initial;\n }\n\n // Expanded drawer\n #{$md-toggle__drawer--checked} & {\n @include z-depth(8);\n\n transform: translateX(px2rem(242px));\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n transform: translateX(px2rem(-242px));\n }\n }\n\n // Hide overflow for nested navigation\n .md-sidebar__scrollwrap {\n overflow: hidden;\n }\n }\n }\n\n // Secondary sidebar with table of contents\n &--secondary {\n display: none;\n order: 2;\n\n // [tablet landscape +]: Show table of contents next to body copy\n @include break-from-device(tablet landscape) {\n display: block;\n\n // Ensure smooth scrolling on iOS\n .md-sidebar__scrollwrap {\n touch-action: pan-y;\n }\n }\n }\n\n // Wrapper for scrolling on overflow\n &__scrollwrap {\n max-height: 100%;\n margin: 0 px2rem(4px);\n overflow-y: auto;\n // Hack: reduce jitter\n backface-visibility: hidden;\n scroll-snap-type: y mandatory;\n\n // [tablet -]: Adjust 
margins\n @include break-to-device(tablet) {\n\n // Stretch scrollwrap for primary sidebar\n .md-sidebar--primary & {\n position: absolute;\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n margin: 0;\n scroll-snap-type: none;\n }\n }\n\n // Override native scrollbar styles\n &::-webkit-scrollbar {\n width: px2rem(4px);\n height: px2rem(4px);\n }\n\n // Scrollbar thumb\n &::-webkit-scrollbar-thumb {\n background-color: var(--md-default-fg-color--lighter);\n\n // Hovered scrollbar thumb\n &:hover {\n background-color: var(--md-accent-fg-color);\n }\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Keyframes\n// ----------------------------------------------------------------------------\n\n// Show source facts\n@keyframes md-source__facts--done {\n 0% {\n height: 0;\n }\n\n 100% {\n height: px2rem(13px);\n }\n}\n\n// Show source fact\n@keyframes md-source__fact--done {\n 0% {\n transform: translateY(100%);\n opacity: 0;\n }\n\n 50% {\n opacity: 0;\n }\n\n 100% {\n transform: translateY(0%);\n opacity: 1;\n }\n}\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Source container\n.md-source {\n display: block;\n font-size: px2rem(13px);\n line-height: 1.2;\n white-space: nowrap;\n // Hack: reduce jitter\n backface-visibility: hidden;\n transition: opacity 250ms;\n\n // Hovered source container\n &:hover {\n opacity: 0.7;\n }\n\n // Repository platform icon\n &__icon {\n display: inline-block;\n width: px2rem(48px);\n height: px2rem(48px);\n vertical-align: middle;\n\n // Align with margin only (as opposed to normal button alignment)\n svg {\n margin-top: px2rem(12px);\n margin-left: px2rem(12px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n margin-right: px2rem(12px);\n margin-left: initial;\n }\n }\n\n // Correct alignment, if icon is present\n + .md-source__repository {\n margin-left: px2rem(-40px);\n padding-left: px2rem(40px);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n margin-right: px2rem(-40px);\n margin-left: initial;\n padding-right: px2rem(40px);\n padding-left: initial;\n }\n }\n }\n\n // Repository name\n &__repository {\n display: inline-block;\n max-width: calc(100% - #{px2rem(24px)});\n margin-left: px2rem(12px);\n overflow: 
hidden;\n font-weight: 700;\n text-overflow: ellipsis;\n vertical-align: middle;\n }\n\n // Source facts (statistics etc.)\n &__facts {\n margin: 0;\n padding: 0;\n overflow: hidden;\n font-weight: 700;\n font-size: px2rem(11px);\n list-style-type: none;\n opacity: 0.75;\n\n // Show after the data was loaded\n [data-md-state=\"done\"] & {\n animation: md-source__facts--done 250ms ease-in;\n }\n }\n\n // Fact\n &__fact {\n float: left;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n float: right;\n }\n\n // Show after the data was loaded\n [data-md-state=\"done\"] & {\n animation: md-source__fact--done 400ms ease-out;\n }\n\n // Middle dot before fact\n &::before {\n margin: 0 px2rem(2px);\n content: \"\\00B7\";\n }\n\n // Remove middle dot on first fact\n &:first-child::before {\n display: none;\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Tabs with outline\n.md-tabs {\n width: 100%;\n overflow: auto;\n color: var(--md-primary-bg-color);\n background-color: var(--md-primary-fg-color);\n transition: background 250ms;\n\n // Omit transitions, in case JavaScript is not available\n .no-js & {\n transition: none;\n }\n\n // [tablet -]: Hide tabs for tablet and below, as they don't make any sense\n @include break-to-device(tablet) {\n display: none;\n }\n\n // Hide for print\n @media print {\n display: none;\n }\n\n // List of items\n &__list {\n margin: 0;\n margin-left: px2rem(4px);\n padding: 0;\n white-space: nowrap;\n list-style: none;\n contain: content;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n margin-right: px2rem(4px);\n margin-left: initial;\n }\n }\n\n // List item\n &__item {\n display: inline-block;\n height: px2rem(48px);\n padding-right: px2rem(12px);\n padding-left: px2rem(12px);\n }\n\n // Link inside item - could be defined as block elements and aligned via\n // line height, but this would imply more repaints when scrolling\n &__link {\n display: block;\n margin-top: px2rem(16px);\n font-size: px2rem(14px);\n opacity: 0.7;\n transition:\n transform 400ms cubic-bezier(0.1, 0.7, 0.1, 1),\n opacity 250ms;\n\n // Omit transitions, in case JavaScript is not available\n .no-js & {\n transition: none;\n }\n\n // Active or hovered link\n &--active,\n &:hover {\n color: 
inherit;\n opacity: 1;\n }\n\n // Delay transitions by a small amount\n @for $i from 2 through 16 {\n .md-tabs__item:nth-child(#{$i}) & {\n transition-delay: 20ms * ($i - 1);\n }\n }\n }\n\n // Fade-out tabs background upon scrolling\n &[data-md-state=\"hidden\"] {\n pointer-events: none;\n\n // Hide tabs upon scrolling - disable transition to minimizes repaints\n // while scrolling down, while scrolling up seems to be okay\n .md-tabs__link {\n transform: translateY(50%);\n opacity: 0;\n transition:\n color 250ms,\n transform 0ms 400ms,\n opacity 100ms;\n }\n }\n\n // [screen +]: Adjust main navigation styles\n @include break-from-device(screen) {\n\n // Hide 1st level nested items, as they are listed in the tabs\n ~ .md-main .md-nav--primary > .md-nav__list > .md-nav__item--nested {\n display: none;\n }\n\n // Active tab\n &--active ~ .md-main {\n\n // Adjust 1st level styles\n .md-nav--primary {\n\n // Show title and remove spacing\n .md-nav__title {\n display: block;\n padding: 0 px2rem(12px);\n pointer-events: none;\n scroll-snap-align: start;\n\n // Hide site title\n &[for=\"__drawer\"] {\n display: none;\n }\n }\n\n // Hide 1st level items\n > .md-nav__list > .md-nav__item {\n display: none;\n\n // Show 1st level active nested items\n &--active {\n display: block;\n padding: 0;\n\n // Hide nested links\n > .md-nav__link {\n display: none;\n }\n }\n }\n }\n\n // Always expand nested navigation on 2nd level\n .md-nav[data-md-level=\"1\"] {\n\n // Remove spacing on 2nd level items\n > .md-nav__list > .md-nav__item {\n padding: 0 px2rem(12px);\n }\n\n // Hide titles from 2nd level on\n .md-nav .md-nav__title {\n display: none;\n }\n }\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Variables\n// ----------------------------------------------------------------------------\n\n///\n/// Admonition flavours\n///\n$admonitions: (\n note: pencil $clr-blue-a200,\n abstract summary tldr: text-subject $clr-light-blue-a400,\n info todo: information $clr-cyan-a700,\n tip hint important: fire $clr-teal-a700,\n success check done: check-circle $clr-green-a700,\n question help faq: help-circle $clr-light-green-a700,\n warning caution attention: alert $clr-orange-a400,\n failure fail missing: close-circle $clr-red-a200,\n danger error: flash-circle $clr-red-a400,\n bug: bug $clr-pink-a400,\n example: format-list-numbered $clr-deep-purple-a400,\n quote cite: format-quote-close $clr-grey\n) !default;\n\n// ----------------------------------------------------------------------------\n// Rules: layout\n// ----------------------------------------------------------------------------\n\n// Icon definitions\n:root {\n @each $names, $props in $admonitions {\n $name: nth($names, 1);\n $icon: nth($props, 1);\n\n // Inline icon through string-replace-loader in webpack\n --md-admonition-icon--#{$name}: url(\"{{ #{$icon} }}\");\n }\n}\n\n// ----------------------------------------------------------------------------\n\n// Scoped in typesetted content to match specificity of regular content\n.md-typeset {\n\n // Admonition extension\n .admonition {\n margin: 1.5625em 0;\n padding: 0 px2rem(12px);\n overflow: hidden;\n font-size: ms(-1);\n page-break-inside: avoid;\n border-left: px2rem(4px) solid $clr-blue-a200;\n border-radius: px2rem(2px);\n box-shadow:\n 0 px2rem(4px) px2rem(10px) hsla(0, 0%, 0%, 0.05),\n 0 0 px2rem(1px) hsla(0, 0%, 0%, 0.1);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n border-right: px2rem(4px) solid $clr-blue-a200;\n border-left: none;\n }\n\n // Hack: omit rendering errors for print\n @media print {\n box-shadow: none;\n }\n\n // Adjust spacing on last element\n html & > :last-child {\n margin-bottom: px2rem(12px);\n }\n\n // Adjust margin for nested admonition blocks\n .admonition {\n margin: 1em 0;\n }\n\n // Wrapper for scrolling on overflow\n .md-typeset__scrollwrap {\n margin: 1em px2rem(-12px);\n }\n\n // Data table wrapper, in case JavaScript is available\n .md-typeset__table {\n padding: 0 px2rem(12px);\n }\n }\n\n // Admonition title\n .admonition-title {\n position: relative;\n margin: 0 px2rem(-12px);\n padding: px2rem(8px) px2rem(12px) px2rem(8px) px2rem(40px);\n font-weight: 700;\n background-color: transparentize($clr-blue-a200, 0.9);\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding: px2rem(8px) px2rem(40px) px2rem(8px) px2rem(12px);\n }\n\n // Reset spacing, if title is the only element\n html &:last-child {\n margin-bottom: 0;\n }\n\n // Icon\n &::before {\n position: absolute;\n left: px2rem(12px);\n width: px2rem(20px);\n height: px2rem(20px);\n background-color: $clr-blue-a200;\n mask-image: var(--md-admonition-icon--note);\n content: \"\";\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: px2rem(12px);\n left: initial;\n }\n }\n\n // Reset code inside Admonition titles\n code {\n margin: initial;\n padding: initial;\n color: currentColor;\n 
background-color: transparent;\n border-radius: initial;\n box-shadow: none;\n }\n }\n}\n\n// ----------------------------------------------------------------------------\n// Rules: flavours\n// ----------------------------------------------------------------------------\n\n@each $names, $props in $admonitions {\n $name: nth($names, 1);\n $tint: nth($props, 2);\n\n // Define base class\n .md-typeset .admonition.#{$name} {\n border-color: $tint;\n }\n\n // Define base class\n .md-typeset .#{$name} > .admonition-title {\n background-color: transparentize($tint, 0.9);\n\n // Icon\n &::before {\n background-color: $tint;\n mask-image: var(--md-admonition-icon--#{$name});\n }\n }\n\n // Define synonyms for base class\n @if length($names) > 1 {\n @for $n from 2 through length($names) {\n .#{nth($names, $n)} {\n @extend .#{$name};\n }\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Variables\n// ----------------------------------------------------------------------------\n\n// Operators\n$codehilite-operator: inherit;\n$codehilite-operator-word: inherit;\n\n// Generics\n$codehilite-generic-emph: #000000;\n$codehilite-generic-error: #AA0000;\n$codehilite-generic-heading: #999999;\n$codehilite-generic-output: #888888;\n$codehilite-generic-prompt: #555555;\n$codehilite-generic-strong: inherit;\n$codehilite-generic-subheading: #AAAAAA;\n$codehilite-generic-traceback: #AA0000;\n\n// Diffs\n$codehilite-diff-deleted: #FFDDDD;\n$codehilite-diff-inserted: #DDFFDD;\n\n// Keywords\n$codehilite-keyword: #3B78E7;\n$codehilite-keyword-constant: #A71D5D;\n$codehilite-keyword-declaration: #3B78E7;\n$codehilite-keyword-namespace: #3B78E7;\n$codehilite-keyword-pseudo: #A71D5D;\n$codehilite-keyword-reserved: #3E61A2;\n$codehilite-keyword-type: #3E61A2;\n\n// Comments\n$codehilite-comment: #999999;\n$codehilite-comment-multiline: #999999;\n$codehilite-comment-preproc: #666666;\n$codehilite-comment-single: #999999;\n$codehilite-comment-shebang: #999999;\n$codehilite-comment-special: #999999;\n\n// Names\n$codehilite-name-attribute: #C2185B;\n$codehilite-name-builtin: #C2185B;\n$codehilite-name-builtin-pseudo: #3E61A2;\n$codehilite-name-class: #C2185B;\n$codehilite-name-constant: #3E61A2;\n$codehilite-name-decorator: #666666;\n$codehilite-name-entity: #666666;\n$codehilite-name-exception: #C2185B;\n$codehilite-name-function: 
#C2185B;\n$codehilite-name-label: #3B5179;\n$codehilite-name-namespace: #EC407A;\n$codehilite-name-tag: #3B78E7;\n$codehilite-name-variable: #3E61A2;\n$codehilite-name-variable-class: #3E61A2;\n$codehilite-name-variable-instance: #3E61A2;\n$codehilite-name-variable-global: #3E61A2;\n$codehilite-name-extension: #EC407A;\n\n// Numbers\n$codehilite-literal-number: #E74C3C;\n$codehilite-literal-number-float: #E74C3C;\n$codehilite-literal-number-hex: #E74C3C;\n$codehilite-literal-number-integer: #E74C3C;\n$codehilite-literal-number-integer-long: #E74C3C;\n$codehilite-literal-number-oct: #E74C3C;\n\n// Strings\n$codehilite-literal-string: #0D904F;\n$codehilite-literal-string-backticks: #0D904F;\n$codehilite-literal-string-char: #0D904F;\n$codehilite-literal-string-doc: #999999;\n$codehilite-literal-string-double: #0D904F;\n$codehilite-literal-string-escape: #183691;\n$codehilite-literal-string-heredoc: #183691;\n$codehilite-literal-string-interpol: #183691;\n$codehilite-literal-string-other: #183691;\n$codehilite-literal-string-regex: #009926;\n$codehilite-literal-string-single: #0D904F;\n$codehilite-literal-string-symbol: #0D904F;\n\n// Miscellaneous\n$codehilite-error: #A61717;\n$codehilite-whitespace: transparent;\n\n// ----------------------------------------------------------------------------\n// Rules: syntax highlighting\n// ----------------------------------------------------------------------------\n\n// Codehilite extension\n.codehilite {\n\n // Operators\n .o { color: $codehilite-operator; }\n .ow { color: $codehilite-operator-word; }\n\n // Generics\n .ge { color: $codehilite-generic-emph; }\n .gr { color: $codehilite-generic-error; }\n .gh { color: $codehilite-generic-heading; }\n .go { color: $codehilite-generic-output; }\n .gp { color: $codehilite-generic-prompt; }\n .gs { color: $codehilite-generic-strong; }\n .gu { color: $codehilite-generic-subheading; }\n .gt { color: $codehilite-generic-traceback; }\n\n // Diffs\n .gd { background-color: $codehilite-diff-deleted; }\n .gi { background-color: $codehilite-diff-inserted; }\n\n // Keywords\n .k { color: $codehilite-keyword; }\n .kc { color: $codehilite-keyword-constant; }\n .kd { color: $codehilite-keyword-declaration; }\n .kn { color: $codehilite-keyword-namespace; }\n .kp { color: $codehilite-keyword-pseudo; }\n .kr { color: $codehilite-keyword-reserved; }\n .kt { color: $codehilite-keyword-type; }\n\n // Comments\n .c { color: $codehilite-comment; }\n .cm { color: $codehilite-comment-multiline; }\n .cp { color: $codehilite-comment-preproc; }\n .c1 { color: $codehilite-comment-single; }\n .ch { color: $codehilite-comment-shebang; }\n .cs { color: $codehilite-comment-special; }\n\n // Names\n .na { color: $codehilite-name-attribute; }\n .nb { color: $codehilite-name-builtin; }\n .bp { color: $codehilite-name-builtin-pseudo; }\n .nc { color: $codehilite-name-class; }\n .no { color: $codehilite-name-constant; }\n .nd { color: $codehilite-name-entity; }\n .ni { color: $codehilite-name-entity; }\n .ne { color: $codehilite-name-exception; }\n .nf { color: $codehilite-name-function; }\n .nl { color: $codehilite-name-label; }\n .nn { color: $codehilite-name-namespace; }\n .nt { color: $codehilite-name-tag; }\n .nv { color: $codehilite-name-variable; }\n .vc { color: $codehilite-name-variable-class; }\n .vg { color: $codehilite-name-variable-global; }\n .vi { color: $codehilite-name-variable-instance; }\n .nx { color: $codehilite-name-extension; }\n\n // Numbers\n .m { color: $codehilite-literal-number; }\n .mf { color: 
$codehilite-literal-number-float; }\n .mh { color: $codehilite-literal-number-hex; }\n .mi { color: $codehilite-literal-number-integer; }\n .il { color: $codehilite-literal-number-integer-long; }\n .mo { color: $codehilite-literal-number-oct; }\n\n // Strings\n .s { color: $codehilite-literal-string; }\n .sb { color: $codehilite-literal-string-backticks; }\n .sc { color: $codehilite-literal-string-char; }\n .sd { color: $codehilite-literal-string-doc; }\n .s2 { color: $codehilite-literal-string-double; }\n .se { color: $codehilite-literal-string-escape; }\n .sh { color: $codehilite-literal-string-heredoc; }\n .si { color: $codehilite-literal-string-interpol; }\n .sx { color: $codehilite-literal-string-other; }\n .sr { color: $codehilite-literal-string-regex; }\n .s1 { color: $codehilite-literal-string-single; }\n .ss { color: $codehilite-literal-string-symbol; }\n\n // Miscellaneous\n .err { color: $codehilite-error; }\n .w { color: $codehilite-whitespace; }\n\n // Highlighted lines\n .hll {\n display: block;\n margin: 0 px2em(-16px, 13.6px);\n padding: 0 px2em(16px, 13.6px);\n background-color: transparentize($clr-yellow-500, 0.5);\n }\n}\n\n// ----------------------------------------------------------------------------\n// Rules: layout\n// ----------------------------------------------------------------------------\n\n// Block with line numbers\n.codehilitetable {\n display: block;\n overflow: hidden;\n\n // Set table elements to block layout, because otherwise the whole flexbox\n // hacking won't work correctly\n tbody,\n td {\n display: block;\n padding: 0;\n }\n\n // We need to use flexbox layout, because otherwise it's not possible to\n // make the code container scroll while keeping the line numbers static\n tr {\n display: flex;\n }\n\n // The pre tags are nested inside a table, so we need to remove the\n // margin because it collapses below all the overflows\n pre {\n margin: 0;\n }\n\n // Disable user selection, so code can be easily copied without\n // accidentally also copying the line numbers\n .linenos {\n padding: px2rem(10.5px) px2em(16px, 13.6px);\n padding-right: 0;\n font-size: px2em(13.6px);\n background-color: var(--md-code-bg-color);\n user-select: none;\n }\n\n // Add spacing to line number container\n .linenodiv {\n padding-right: px2em(8px, 13.6px);\n box-shadow: inset px2rem(-1px) 0 var(--md-default-fg-color--lightest);\n\n // Reset spacings\n pre {\n color: var(--md-default-fg-color--lighter);\n text-align: right;\n }\n }\n\n // The table cell containing the code container wrapper and code should\n // stretch horizontally to the remaining space\n .code {\n flex: 1;\n overflow: hidden;\n }\n}\n\n// Scoped in typesetted content to match specificity of regular content\n.md-typeset {\n\n // Block with line numbers\n .codehilitetable {\n margin: 1em 0;\n direction: ltr;\n border-radius: px2rem(2px);\n\n // Remove rounded borders\n code {\n border-radius: 0;\n }\n }\n\n // [mobile -]: Stretch to whole width\n @include break-to-device(mobile) {\n\n // Full-width container\n > .codehilite {\n margin: 1em px2rem(-16px);\n\n // Stretch highlighted lines\n .hll {\n margin: 0 px2rem(-16px);\n padding: 0 px2rem(16px);\n }\n\n // Remove rounded borders\n code {\n border-radius: 0;\n }\n }\n\n // Full-width container on top-level\n > .codehilitetable {\n margin: 1em px2rem(-16px);\n border-radius: 0;\n\n // Stretch highlighted lines\n .hll {\n margin: 0 px2rem(-16px);\n padding: 0 px2rem(16px);\n }\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// 
Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Icon definitions\n:root {\n --md-footnotes-icon: url(\"{{ keyboard-return }}\");\n}\n\n// ----------------------------------------------------------------------------\n\n// Scoped in typesetted content to match specificity of regular content\n.md-typeset {\n\n // All footnote references\n [id^=\"fnref:\"] {\n display: inline-block;\n\n // Targeted anchor\n &:target {\n margin-top: -1 * px2rem(48px + 12px + 16px);\n padding-top: px2rem(48px + 12px + 16px);\n pointer-events: none;\n }\n }\n\n // All footnote back references\n [id^=\"fn:\"] {\n\n // Add spacing to anchor for offset\n &::before {\n display: none;\n height: 0;\n content: \"\";\n }\n\n // Targeted anchor\n &:target::before {\n display: block;\n margin-top: -1 * px2rem(48px + 12px + 10px);\n padding-top: px2rem(48px + 12px + 10px);\n pointer-events: none;\n }\n }\n\n // Footnotes extension\n .footnote {\n color: var(--md-default-fg-color--light);\n font-size: ms(-1);\n\n // Remove additional spacing on footnotes\n ol {\n margin-left: 0;\n }\n\n // Footnote\n li {\n transition: color 125ms;\n\n // Darken color for targeted footnote\n &:target {\n color: var(--md-default-fg-color);\n }\n\n // Remove spacing on first element\n :first-child {\n margin-top: 0;\n }\n\n // Make back references visible on container hover\n &:hover .footnote-backref,\n &:target .footnote-backref {\n transform: translateX(0);\n opacity: 1;\n }\n\n // Hovered back reference\n &:hover .footnote-backref:hover {\n color: var(--md-accent-fg-color);\n }\n }\n }\n\n // Footnote reference\n .footnote-ref {\n display: inline-block;\n pointer-events: initial;\n }\n\n // Footnote back reference\n .footnote-backref {\n display: inline-block;\n color: var(--md-primary-fg-color);\n // Hack: remove Unicode arrow for icon\n font-size: 0;\n vertical-align: text-bottom;\n transform: translateX(px2rem(5px));\n opacity: 0;\n transition:\n color 250ms,\n transform 250ms 250ms,\n opacity 125ms 250ms;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n transform: translateX(px2rem(-5px));\n }\n\n // Back reference icon\n &::before {\n display: inline-block;\n width: px2rem(16px);\n height: px2rem(16px);\n background-color: currentColor;\n mask-image: var(--md-footnotes-icon);\n content: \"\";\n\n // Adjust for 
right-to-left languages\n [dir=\"rtl\"] & {\n\n // Flip icon vertically\n svg {\n transform: scaleX(-1)\n }\n }\n }\n\n // Always show for print\n @media print {\n color: var(--md-primary-fg-color);\n transform: translateX(0);\n opacity: 1;\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Scoped in typesetted content to match specificity of regular content\n.md-typeset {\n\n // Permalinks extension\n .headerlink {\n display: inline-block;\n margin-left: px2rem(10px);\n // Hack: if we don't set visibility hidden, the text content of the node\n // will include the headerlink character, which is why Google indexes them.\n visibility: hidden;\n opacity: 0;\n transition:\n color 250ms,\n visibility 0ms 500ms,\n opacity 125ms;\n\n // Adjust for RTL languages\n [dir=\"rtl\"] & {\n margin-right: px2rem(10px);\n margin-left: initial;\n }\n\n // Higher specificity for color due to palettes integration\n html body & {\n color: var(--md-default-fg-color--lighter);\n }\n\n // Hide for print\n @media print {\n display: none;\n }\n }\n\n // Make permalink visible on hover\n :hover > .headerlink,\n :target > .headerlink,\n .headerlink:focus {\n visibility: visible;\n opacity: 1;\n transition:\n color 250ms,\n visibility 0ms,\n opacity 125ms;\n }\n\n // Active or targeted permalink\n :target > .headerlink,\n .headerlink:focus,\n .headerlink:hover {\n color: var(--md-accent-fg-color);\n }\n\n // Correct anchor offset for link blurring\n @each $level, $delta in (\n h1 h2 h3: 8px,\n h4: 9px,\n h5 h6: 12px,\n ) {\n %#{nth($level, 1)} {\n\n // Un-targeted anchor\n &::before {\n display: block;\n margin-top: -1 * px2rem($delta);\n padding-top: px2rem($delta);\n content: \"\";\n }\n\n // Targeted anchor (48px from header, 12px from sidebar offset)\n &:target::before {\n margin-top: -1 * px2rem(48px + 12px + $delta);\n padding-top: px2rem(48px + 12px + $delta);\n }\n }\n\n // Define levels\n @for $n from 1 through length($level) {\n #{nth($level, $n)}[id] {\n @extend %#{nth($level, 1)};\n }\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the 
Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// stylelint-disable selector-class-pattern\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Scoped in typesetted content to match specificity of regular content\n.md-typeset {\n\n // MathJax integration - add padding to omit vertical scrollbar\n .MJXc-display {\n margin: 0.75em 0;\n padding: 0.75em 0;\n overflow: auto;\n touch-action: auto;\n }\n\n // Stretch top-level containers\n > p > .MJXc-display {\n\n // [mobile -]: Stretch to whole width\n @include break-to-device(mobile) {\n margin: 0.75em px2rem(-16px);\n padding: 0.25em px2rem(16px);\n }\n }\n\n // Remove outline on tab index\n .MathJax_CHTML {\n outline: 0;\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Scoped in typesetted content to match specificity of regular content\n.md-typeset {\n\n // Deletions, additions and comments\n del.critic,\n ins.critic,\n .critic.comment {\n padding: 0 px2em(4px, 16px);\n border-radius: px2rem(2px);\n box-decoration-break: clone;\n }\n\n // Deletion\n del.critic {\n background-color: $codehilite-diff-deleted;\n }\n\n // Addition\n ins.critic {\n background-color: $codehilite-diff-inserted;\n }\n\n // Comment\n .critic.comment {\n color: $codehilite-comment;\n\n // Comment opening mark\n &::before {\n content: \"/* \";\n }\n\n // Comment closing mark\n &::after {\n content: \" */\";\n }\n }\n\n // Block\n .critic.block {\n display: block;\n margin: 1em 0;\n padding-right: px2rem(16px);\n padding-left: px2rem(16px);\n overflow: auto;\n box-shadow: none;\n\n // Decrease spacing on first element\n :first-child {\n margin-top: 0.5em;\n }\n\n // Decrease spacing on last element\n :last-child {\n margin-bottom: 0.5em;\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Icon definitions\n:root {\n --md-details-icon: url(\"{{ chevron-right }}\");\n}\n\n// ----------------------------------------------------------------------------\n\n// Scoped in typesetted content to match specificity of regular content\n.md-typeset {\n\n // Details extension\n details {\n @extend .admonition;\n\n display: block;\n padding-top: 0;\n overflow: visible;\n\n\n // Rotate title icon\n &[open] > summary::after {\n transform: rotate(90deg);\n }\n\n // Remove bottom spacing for closed details\n &:not([open]) {\n padding-bottom: 0;\n\n // We cannot set overflow: hidden, as the outline would not be visible,\n // so we need to correct the border radius\n > summary {\n border-bottom-right-radius: px2rem(2px);\n }\n }\n\n // Hack: omit margin collapse\n &::after {\n display: table;\n content: \"\";\n }\n }\n\n // Details title\n summary {\n @extend .admonition-title;\n\n display: block;\n min-height: px2rem(20px);\n padding: px2rem(8px) px2rem(36px) px2rem(8px) px2rem(40px);\n border-top-right-radius: px2rem(2px);\n cursor: pointer;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n padding: px2rem(8px) px2rem(40px) px2rem(8px) px2rem(36px);\n }\n\n // Remove default details marker\n &::-webkit-details-marker {\n display: none;\n }\n\n // Details marker\n &::after {\n position: absolute;\n top: px2rem(8px);\n right: px2rem(8px);\n width: px2rem(20px);\n height: px2rem(20px);\n background-color: currentColor;\n mask-image: var(--md-details-icon);\n transform: rotate(0deg);\n transition: transform 250ms;\n content: \"\";\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: initial;\n left: px2rem(8px);\n transform: rotate(180deg);\n }\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Scoped in typesetted content to match specificity of regular content\n.md-typeset {\n\n // Emojis\n img.emojione,\n img.twemoji,\n img.gemoji {\n width: px2em(18px);\n vertical-align: -15%;\n }\n\n // Inlined SVG icons via mkdocs-material-extensions\n span.twemoji {\n display: inline-block;\n height: px2em(18px);\n vertical-align: text-top;\n\n // Icon\n svg {\n width: px2em(18px);\n fill: currentColor;\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// When pymdownx.superfences is enabled but codehilite is disabled,\n// pymdownx.highlight will be used. 
When this happens, the outer container\n// and tables get this class names by default\n.highlight {\n @extend .codehilite;\n\n // Inline line numbers\n [data-linenos]::before {\n position: sticky;\n left: px2em(-16px, 13.6px);\n float: left;\n margin-right: px2em(16px, 13.6px);\n margin-left: px2em(-16px, 13.6px);\n padding-left: px2em(16px, 13.6px);\n color: var(--md-default-fg-color--lighter);\n background-color: var(--md-code-bg-color);\n box-shadow: inset px2rem(-1px) 0 var(--md-default-fg-color--lightest);\n content: attr(data-linenos);\n user-select: none;\n }\n}\n\n// Same as above, but for code blocks with line numbers enabled\n.highlighttable {\n @extend .codehilitetable;\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Scoped in typesetted content to match specificity of regular content\n.md-typeset {\n\n // Tabbed block content\n .tabbed-content {\n display: none;\n order: 99;\n width: 100%;\n box-shadow: 0 px2rem(-1px) var(--md-default-fg-color--lightest);\n\n // Mirror old superfences behavior, if there's only a single code block.\n > .codehilite:only-child pre,\n > .codehilitetable:only-child,\n > .highlight:only-child pre,\n > .highlighttable:only-child {\n margin: 0;\n\n // Remove rounded borders at the top\n > code {\n border-top-left-radius: 0;\n border-top-right-radius: 0;\n }\n }\n\n // Nested tabs\n > .tabbed-set {\n margin: 0;\n }\n }\n\n // Tabbed block container\n .tabbed-set {\n position: relative;\n display: flex;\n flex-wrap: wrap;\n margin: 1em 0;\n border-radius: px2rem(2px);\n\n // Hide radio buttons\n > input {\n display: none;\n\n // Active tab label\n &:checked + label {\n color: var(--md-accent-fg-color);\n border-color: var(--md-accent-fg-color);\n\n // Show tabbed block content\n & + .tabbed-content {\n display: block;\n }\n }\n }\n\n // Tab label\n > label {\n z-index: 1;\n width: auto;\n padding: px2rem(12px) 1.25em px2rem(10px);\n color: var(--md-default-fg-color--light);\n font-weight: 700;\n font-size: ms(-1);\n border-bottom: px2rem(2px) solid transparent;\n cursor: pointer;\n transition: color 125ms;\n\n // Hovered tab label\n html &:hover {\n color: var(--md-accent-fg-color);\n }\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby 
granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Rules\n// ----------------------------------------------------------------------------\n\n// Icon definitions\n:root {\n --md-tasklist-icon: url(\"{{ checkbox-blank-circle }}\");\n --md-tasklist-icon--checked: url(\"{{ check-circle }}\");\n}\n\n// ----------------------------------------------------------------------------\n\n// Scoped in typesetted content to match specificity of regular content\n.md-typeset {\n\n // Remove list icon on task items\n .task-list-item {\n position: relative;\n list-style-type: none;\n\n // Make checkbox items align with normal list items, but position\n // everything in ems for correct layout at smaller font sizes\n [type=\"checkbox\"] {\n position: absolute;\n top: 0.45em;\n left: -2em;\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: -2em;\n left: initial;\n }\n }\n }\n\n // Wrapper for list controls, in case custom checkboxes are enabled\n .task-list-control {\n\n // Checkbox icon in unchecked state\n .task-list-indicator::before {\n position: absolute;\n top: 0.15em;\n left: px2em(-24px);\n width: px2em(20px);\n height: px2em(20px);\n background-color: var(--md-default-fg-color--lightest);\n mask-image: var(--md-tasklist-icon);\n content: \"\";\n\n // Adjust for right-to-left languages\n [dir=\"rtl\"] & {\n right: px2em(-24px);\n left: initial;\n }\n }\n\n // Checkbox icon in checked state\n [type=\"checkbox\"]:checked + .task-list-indicator::before {\n background-color: $clr-green-a400;\n mask-image: var(--md-tasklist-icon--checked);\n }\n\n // Hide original checkbox behind icon\n [type=\"checkbox\"] {\n z-index: -1;\n opacity: 0;\n }\n }\n}\n","// ==========================================================================\n//\n// Name: UI Color Palette\n// Description: The color palette of material design.\n// Version: 2.3.1\n//\n// Author: Denis Malinochkin\n// Git: https://github.com/mrmlnc/material-color\n//\n// twitter: @mrmlnc\n//\n// ==========================================================================\n\n\n//\n// List of base colors\n//\n\n// $clr-red\n// $clr-pink\n// $clr-purple\n// $clr-deep-purple\n// $clr-indigo\n// $clr-blue\n// $clr-light-blue\n// $clr-cyan\n// $clr-teal\n// $clr-green\n// $clr-light-green\n// $clr-lime\n// $clr-yellow\n// $clr-amber\n// $clr-orange\n// $clr-deep-orange\n// $clr-brown\n// $clr-grey\n// $clr-blue-grey\n// $clr-black\n// $clr-white\n\n\n//\n// 
Red\n//\n\n$clr-red-list: (\n \"base\": #f44336,\n \"50\": #ffebee,\n \"100\": #ffcdd2,\n \"200\": #ef9a9a,\n \"300\": #e57373,\n \"400\": #ef5350,\n \"500\": #f44336,\n \"600\": #e53935,\n \"700\": #d32f2f,\n \"800\": #c62828,\n \"900\": #b71c1c,\n \"a100\": #ff8a80,\n \"a200\": #ff5252,\n \"a400\": #ff1744,\n \"a700\": #d50000\n);\n\n$clr-red: map-get($clr-red-list, \"base\");\n\n$clr-red-50: map-get($clr-red-list, \"50\");\n$clr-red-100: map-get($clr-red-list, \"100\");\n$clr-red-200: map-get($clr-red-list, \"200\");\n$clr-red-300: map-get($clr-red-list, \"300\");\n$clr-red-400: map-get($clr-red-list, \"400\");\n$clr-red-500: map-get($clr-red-list, \"500\");\n$clr-red-600: map-get($clr-red-list, \"600\");\n$clr-red-700: map-get($clr-red-list, \"700\");\n$clr-red-800: map-get($clr-red-list, \"800\");\n$clr-red-900: map-get($clr-red-list, \"900\");\n$clr-red-a100: map-get($clr-red-list, \"a100\");\n$clr-red-a200: map-get($clr-red-list, \"a200\");\n$clr-red-a400: map-get($clr-red-list, \"a400\");\n$clr-red-a700: map-get($clr-red-list, \"a700\");\n\n\n//\n// Pink\n//\n\n$clr-pink-list: (\n \"base\": #e91e63,\n \"50\": #fce4ec,\n \"100\": #f8bbd0,\n \"200\": #f48fb1,\n \"300\": #f06292,\n \"400\": #ec407a,\n \"500\": #e91e63,\n \"600\": #d81b60,\n \"700\": #c2185b,\n \"800\": #ad1457,\n \"900\": #880e4f,\n \"a100\": #ff80ab,\n \"a200\": #ff4081,\n \"a400\": #f50057,\n \"a700\": #c51162\n);\n\n$clr-pink: map-get($clr-pink-list, \"base\");\n\n$clr-pink-50: map-get($clr-pink-list, \"50\");\n$clr-pink-100: map-get($clr-pink-list, \"100\");\n$clr-pink-200: map-get($clr-pink-list, \"200\");\n$clr-pink-300: map-get($clr-pink-list, \"300\");\n$clr-pink-400: map-get($clr-pink-list, \"400\");\n$clr-pink-500: map-get($clr-pink-list, \"500\");\n$clr-pink-600: map-get($clr-pink-list, \"600\");\n$clr-pink-700: map-get($clr-pink-list, \"700\");\n$clr-pink-800: map-get($clr-pink-list, \"800\");\n$clr-pink-900: map-get($clr-pink-list, \"900\");\n$clr-pink-a100: map-get($clr-pink-list, \"a100\");\n$clr-pink-a200: map-get($clr-pink-list, \"a200\");\n$clr-pink-a400: map-get($clr-pink-list, \"a400\");\n$clr-pink-a700: map-get($clr-pink-list, \"a700\");\n\n\n//\n// Purple\n//\n\n$clr-purple-list: (\n \"base\": #9c27b0,\n \"50\": #f3e5f5,\n \"100\": #e1bee7,\n \"200\": #ce93d8,\n \"300\": #ba68c8,\n \"400\": #ab47bc,\n \"500\": #9c27b0,\n \"600\": #8e24aa,\n \"700\": #7b1fa2,\n \"800\": #6a1b9a,\n \"900\": #4a148c,\n \"a100\": #ea80fc,\n \"a200\": #e040fb,\n \"a400\": #d500f9,\n \"a700\": #aa00ff\n);\n\n$clr-purple: map-get($clr-purple-list, \"base\");\n\n$clr-purple-50: map-get($clr-purple-list, \"50\");\n$clr-purple-100: map-get($clr-purple-list, \"100\");\n$clr-purple-200: map-get($clr-purple-list, \"200\");\n$clr-purple-300: map-get($clr-purple-list, \"300\");\n$clr-purple-400: map-get($clr-purple-list, \"400\");\n$clr-purple-500: map-get($clr-purple-list, \"500\");\n$clr-purple-600: map-get($clr-purple-list, \"600\");\n$clr-purple-700: map-get($clr-purple-list, \"700\");\n$clr-purple-800: map-get($clr-purple-list, \"800\");\n$clr-purple-900: map-get($clr-purple-list, \"900\");\n$clr-purple-a100: map-get($clr-purple-list, \"a100\");\n$clr-purple-a200: map-get($clr-purple-list, \"a200\");\n$clr-purple-a400: map-get($clr-purple-list, \"a400\");\n$clr-purple-a700: map-get($clr-purple-list, \"a700\");\n\n\n//\n// Deep purple\n//\n\n$clr-deep-purple-list: (\n \"base\": #673ab7,\n \"50\": #ede7f6,\n \"100\": #d1c4e9,\n \"200\": #b39ddb,\n \"300\": #9575cd,\n \"400\": #7e57c2,\n \"500\": #673ab7,\n \"600\": 
#5e35b1,\n \"700\": #512da8,\n \"800\": #4527a0,\n \"900\": #311b92,\n \"a100\": #b388ff,\n \"a200\": #7c4dff,\n \"a400\": #651fff,\n \"a700\": #6200ea\n);\n\n$clr-deep-purple: map-get($clr-deep-purple-list, \"base\");\n\n$clr-deep-purple-50: map-get($clr-deep-purple-list, \"50\");\n$clr-deep-purple-100: map-get($clr-deep-purple-list, \"100\");\n$clr-deep-purple-200: map-get($clr-deep-purple-list, \"200\");\n$clr-deep-purple-300: map-get($clr-deep-purple-list, \"300\");\n$clr-deep-purple-400: map-get($clr-deep-purple-list, \"400\");\n$clr-deep-purple-500: map-get($clr-deep-purple-list, \"500\");\n$clr-deep-purple-600: map-get($clr-deep-purple-list, \"600\");\n$clr-deep-purple-700: map-get($clr-deep-purple-list, \"700\");\n$clr-deep-purple-800: map-get($clr-deep-purple-list, \"800\");\n$clr-deep-purple-900: map-get($clr-deep-purple-list, \"900\");\n$clr-deep-purple-a100: map-get($clr-deep-purple-list, \"a100\");\n$clr-deep-purple-a200: map-get($clr-deep-purple-list, \"a200\");\n$clr-deep-purple-a400: map-get($clr-deep-purple-list, \"a400\");\n$clr-deep-purple-a700: map-get($clr-deep-purple-list, \"a700\");\n\n\n//\n// Indigo\n//\n\n$clr-indigo-list: (\n \"base\": #3f51b5,\n \"50\": #e8eaf6,\n \"100\": #c5cae9,\n \"200\": #9fa8da,\n \"300\": #7986cb,\n \"400\": #5c6bc0,\n \"500\": #3f51b5,\n \"600\": #3949ab,\n \"700\": #303f9f,\n \"800\": #283593,\n \"900\": #1a237e,\n \"a100\": #8c9eff,\n \"a200\": #536dfe,\n \"a400\": #3d5afe,\n \"a700\": #304ffe\n);\n\n$clr-indigo: map-get($clr-indigo-list, \"base\");\n\n$clr-indigo-50: map-get($clr-indigo-list, \"50\");\n$clr-indigo-100: map-get($clr-indigo-list, \"100\");\n$clr-indigo-200: map-get($clr-indigo-list, \"200\");\n$clr-indigo-300: map-get($clr-indigo-list, \"300\");\n$clr-indigo-400: map-get($clr-indigo-list, \"400\");\n$clr-indigo-500: map-get($clr-indigo-list, \"500\");\n$clr-indigo-600: map-get($clr-indigo-list, \"600\");\n$clr-indigo-700: map-get($clr-indigo-list, \"700\");\n$clr-indigo-800: map-get($clr-indigo-list, \"800\");\n$clr-indigo-900: map-get($clr-indigo-list, \"900\");\n$clr-indigo-a100: map-get($clr-indigo-list, \"a100\");\n$clr-indigo-a200: map-get($clr-indigo-list, \"a200\");\n$clr-indigo-a400: map-get($clr-indigo-list, \"a400\");\n$clr-indigo-a700: map-get($clr-indigo-list, \"a700\");\n\n\n//\n// Blue\n//\n\n$clr-blue-list: (\n \"base\": #2196f3,\n \"50\": #e3f2fd,\n \"100\": #bbdefb,\n \"200\": #90caf9,\n \"300\": #64b5f6,\n \"400\": #42a5f5,\n \"500\": #2196f3,\n \"600\": #1e88e5,\n \"700\": #1976d2,\n \"800\": #1565c0,\n \"900\": #0d47a1,\n \"a100\": #82b1ff,\n \"a200\": #448aff,\n \"a400\": #2979ff,\n \"a700\": #2962ff\n);\n\n$clr-blue: map-get($clr-blue-list, \"base\");\n\n$clr-blue-50: map-get($clr-blue-list, \"50\");\n$clr-blue-100: map-get($clr-blue-list, \"100\");\n$clr-blue-200: map-get($clr-blue-list, \"200\");\n$clr-blue-300: map-get($clr-blue-list, \"300\");\n$clr-blue-400: map-get($clr-blue-list, \"400\");\n$clr-blue-500: map-get($clr-blue-list, \"500\");\n$clr-blue-600: map-get($clr-blue-list, \"600\");\n$clr-blue-700: map-get($clr-blue-list, \"700\");\n$clr-blue-800: map-get($clr-blue-list, \"800\");\n$clr-blue-900: map-get($clr-blue-list, \"900\");\n$clr-blue-a100: map-get($clr-blue-list, \"a100\");\n$clr-blue-a200: map-get($clr-blue-list, \"a200\");\n$clr-blue-a400: map-get($clr-blue-list, \"a400\");\n$clr-blue-a700: map-get($clr-blue-list, \"a700\");\n\n\n//\n// Light Blue\n//\n\n$clr-light-blue-list: (\n \"base\": #03a9f4,\n \"50\": #e1f5fe,\n \"100\": #b3e5fc,\n \"200\": #81d4fa,\n \"300\": #4fc3f7,\n 
\"400\": #29b6f6,\n \"500\": #03a9f4,\n \"600\": #039be5,\n \"700\": #0288d1,\n \"800\": #0277bd,\n \"900\": #01579b,\n \"a100\": #80d8ff,\n \"a200\": #40c4ff,\n \"a400\": #00b0ff,\n \"a700\": #0091ea\n);\n\n$clr-light-blue: map-get($clr-light-blue-list, \"base\");\n\n$clr-light-blue-50: map-get($clr-light-blue-list, \"50\");\n$clr-light-blue-100: map-get($clr-light-blue-list, \"100\");\n$clr-light-blue-200: map-get($clr-light-blue-list, \"200\");\n$clr-light-blue-300: map-get($clr-light-blue-list, \"300\");\n$clr-light-blue-400: map-get($clr-light-blue-list, \"400\");\n$clr-light-blue-500: map-get($clr-light-blue-list, \"500\");\n$clr-light-blue-600: map-get($clr-light-blue-list, \"600\");\n$clr-light-blue-700: map-get($clr-light-blue-list, \"700\");\n$clr-light-blue-800: map-get($clr-light-blue-list, \"800\");\n$clr-light-blue-900: map-get($clr-light-blue-list, \"900\");\n$clr-light-blue-a100: map-get($clr-light-blue-list, \"a100\");\n$clr-light-blue-a200: map-get($clr-light-blue-list, \"a200\");\n$clr-light-blue-a400: map-get($clr-light-blue-list, \"a400\");\n$clr-light-blue-a700: map-get($clr-light-blue-list, \"a700\");\n\n\n//\n// Cyan\n//\n\n$clr-cyan-list: (\n \"base\": #00bcd4,\n \"50\": #e0f7fa,\n \"100\": #b2ebf2,\n \"200\": #80deea,\n \"300\": #4dd0e1,\n \"400\": #26c6da,\n \"500\": #00bcd4,\n \"600\": #00acc1,\n \"700\": #0097a7,\n \"800\": #00838f,\n \"900\": #006064,\n \"a100\": #84ffff,\n \"a200\": #18ffff,\n \"a400\": #00e5ff,\n \"a700\": #00b8d4\n);\n\n$clr-cyan: map-get($clr-cyan-list, \"base\");\n\n$clr-cyan-50: map-get($clr-cyan-list, \"50\");\n$clr-cyan-100: map-get($clr-cyan-list, \"100\");\n$clr-cyan-200: map-get($clr-cyan-list, \"200\");\n$clr-cyan-300: map-get($clr-cyan-list, \"300\");\n$clr-cyan-400: map-get($clr-cyan-list, \"400\");\n$clr-cyan-500: map-get($clr-cyan-list, \"500\");\n$clr-cyan-600: map-get($clr-cyan-list, \"600\");\n$clr-cyan-700: map-get($clr-cyan-list, \"700\");\n$clr-cyan-800: map-get($clr-cyan-list, \"800\");\n$clr-cyan-900: map-get($clr-cyan-list, \"900\");\n$clr-cyan-a100: map-get($clr-cyan-list, \"a100\");\n$clr-cyan-a200: map-get($clr-cyan-list, \"a200\");\n$clr-cyan-a400: map-get($clr-cyan-list, \"a400\");\n$clr-cyan-a700: map-get($clr-cyan-list, \"a700\");\n\n\n//\n// Teal\n//\n\n$clr-teal-list: (\n \"base\": #009688,\n \"50\": #e0f2f1,\n \"100\": #b2dfdb,\n \"200\": #80cbc4,\n \"300\": #4db6ac,\n \"400\": #26a69a,\n \"500\": #009688,\n \"600\": #00897b,\n \"700\": #00796b,\n \"800\": #00695c,\n \"900\": #004d40,\n \"a100\": #a7ffeb,\n \"a200\": #64ffda,\n \"a400\": #1de9b6,\n \"a700\": #00bfa5\n);\n\n$clr-teal: map-get($clr-teal-list, \"base\");\n\n$clr-teal-50: map-get($clr-teal-list, \"50\");\n$clr-teal-100: map-get($clr-teal-list, \"100\");\n$clr-teal-200: map-get($clr-teal-list, \"200\");\n$clr-teal-300: map-get($clr-teal-list, \"300\");\n$clr-teal-400: map-get($clr-teal-list, \"400\");\n$clr-teal-500: map-get($clr-teal-list, \"500\");\n$clr-teal-600: map-get($clr-teal-list, \"600\");\n$clr-teal-700: map-get($clr-teal-list, \"700\");\n$clr-teal-800: map-get($clr-teal-list, \"800\");\n$clr-teal-900: map-get($clr-teal-list, \"900\");\n$clr-teal-a100: map-get($clr-teal-list, \"a100\");\n$clr-teal-a200: map-get($clr-teal-list, \"a200\");\n$clr-teal-a400: map-get($clr-teal-list, \"a400\");\n$clr-teal-a700: map-get($clr-teal-list, \"a700\");\n\n\n//\n// Green\n//\n\n$clr-green-list: (\n \"base\": #4caf50,\n \"50\": #e8f5e9,\n \"100\": #c8e6c9,\n \"200\": #a5d6a7,\n \"300\": #81c784,\n \"400\": #66bb6a,\n \"500\": #4caf50,\n \"600\": 
#43a047,\n \"700\": #388e3c,\n \"800\": #2e7d32,\n \"900\": #1b5e20,\n \"a100\": #b9f6ca,\n \"a200\": #69f0ae,\n \"a400\": #00e676,\n \"a700\": #00c853\n);\n\n$clr-green: map-get($clr-green-list, \"base\");\n\n$clr-green-50: map-get($clr-green-list, \"50\");\n$clr-green-100: map-get($clr-green-list, \"100\");\n$clr-green-200: map-get($clr-green-list, \"200\");\n$clr-green-300: map-get($clr-green-list, \"300\");\n$clr-green-400: map-get($clr-green-list, \"400\");\n$clr-green-500: map-get($clr-green-list, \"500\");\n$clr-green-600: map-get($clr-green-list, \"600\");\n$clr-green-700: map-get($clr-green-list, \"700\");\n$clr-green-800: map-get($clr-green-list, \"800\");\n$clr-green-900: map-get($clr-green-list, \"900\");\n$clr-green-a100: map-get($clr-green-list, \"a100\");\n$clr-green-a200: map-get($clr-green-list, \"a200\");\n$clr-green-a400: map-get($clr-green-list, \"a400\");\n$clr-green-a700: map-get($clr-green-list, \"a700\");\n\n\n//\n// Light green\n//\n\n$clr-light-green-list: (\n \"base\": #8bc34a,\n \"50\": #f1f8e9,\n \"100\": #dcedc8,\n \"200\": #c5e1a5,\n \"300\": #aed581,\n \"400\": #9ccc65,\n \"500\": #8bc34a,\n \"600\": #7cb342,\n \"700\": #689f38,\n \"800\": #558b2f,\n \"900\": #33691e,\n \"a100\": #ccff90,\n \"a200\": #b2ff59,\n \"a400\": #76ff03,\n \"a700\": #64dd17\n);\n\n$clr-light-green: map-get($clr-light-green-list, \"base\");\n\n$clr-light-green-50: map-get($clr-light-green-list, \"50\");\n$clr-light-green-100: map-get($clr-light-green-list, \"100\");\n$clr-light-green-200: map-get($clr-light-green-list, \"200\");\n$clr-light-green-300: map-get($clr-light-green-list, \"300\");\n$clr-light-green-400: map-get($clr-light-green-list, \"400\");\n$clr-light-green-500: map-get($clr-light-green-list, \"500\");\n$clr-light-green-600: map-get($clr-light-green-list, \"600\");\n$clr-light-green-700: map-get($clr-light-green-list, \"700\");\n$clr-light-green-800: map-get($clr-light-green-list, \"800\");\n$clr-light-green-900: map-get($clr-light-green-list, \"900\");\n$clr-light-green-a100: map-get($clr-light-green-list, \"a100\");\n$clr-light-green-a200: map-get($clr-light-green-list, \"a200\");\n$clr-light-green-a400: map-get($clr-light-green-list, \"a400\");\n$clr-light-green-a700: map-get($clr-light-green-list, \"a700\");\n\n\n//\n// Lime\n//\n\n$clr-lime-list: (\n \"base\": #cddc39,\n \"50\": #f9fbe7,\n \"100\": #f0f4c3,\n \"200\": #e6ee9c,\n \"300\": #dce775,\n \"400\": #d4e157,\n \"500\": #cddc39,\n \"600\": #c0ca33,\n \"700\": #afb42b,\n \"800\": #9e9d24,\n \"900\": #827717,\n \"a100\": #f4ff81,\n \"a200\": #eeff41,\n \"a400\": #c6ff00,\n \"a700\": #aeea00\n);\n\n$clr-lime: map-get($clr-lime-list, \"base\");\n\n$clr-lime-50: map-get($clr-lime-list, \"50\");\n$clr-lime-100: map-get($clr-lime-list, \"100\");\n$clr-lime-200: map-get($clr-lime-list, \"200\");\n$clr-lime-300: map-get($clr-lime-list, \"300\");\n$clr-lime-400: map-get($clr-lime-list, \"400\");\n$clr-lime-500: map-get($clr-lime-list, \"500\");\n$clr-lime-600: map-get($clr-lime-list, \"600\");\n$clr-lime-700: map-get($clr-lime-list, \"700\");\n$clr-lime-800: map-get($clr-lime-list, \"800\");\n$clr-lime-900: map-get($clr-lime-list, \"900\");\n$clr-lime-a100: map-get($clr-lime-list, \"a100\");\n$clr-lime-a200: map-get($clr-lime-list, \"a200\");\n$clr-lime-a400: map-get($clr-lime-list, \"a400\");\n$clr-lime-a700: map-get($clr-lime-list, \"a700\");\n\n\n//\n// Yellow\n//\n\n$clr-yellow-list: (\n \"base\": #ffeb3b,\n \"50\": #fffde7,\n \"100\": #fff9c4,\n \"200\": #fff59d,\n \"300\": #fff176,\n \"400\": #ffee58,\n 
\"500\": #ffeb3b,\n \"600\": #fdd835,\n \"700\": #fbc02d,\n \"800\": #f9a825,\n \"900\": #f57f17,\n \"a100\": #ffff8d,\n \"a200\": #ffff00,\n \"a400\": #ffea00,\n \"a700\": #ffd600\n);\n\n$clr-yellow: map-get($clr-yellow-list, \"base\");\n\n$clr-yellow-50: map-get($clr-yellow-list, \"50\");\n$clr-yellow-100: map-get($clr-yellow-list, \"100\");\n$clr-yellow-200: map-get($clr-yellow-list, \"200\");\n$clr-yellow-300: map-get($clr-yellow-list, \"300\");\n$clr-yellow-400: map-get($clr-yellow-list, \"400\");\n$clr-yellow-500: map-get($clr-yellow-list, \"500\");\n$clr-yellow-600: map-get($clr-yellow-list, \"600\");\n$clr-yellow-700: map-get($clr-yellow-list, \"700\");\n$clr-yellow-800: map-get($clr-yellow-list, \"800\");\n$clr-yellow-900: map-get($clr-yellow-list, \"900\");\n$clr-yellow-a100: map-get($clr-yellow-list, \"a100\");\n$clr-yellow-a200: map-get($clr-yellow-list, \"a200\");\n$clr-yellow-a400: map-get($clr-yellow-list, \"a400\");\n$clr-yellow-a700: map-get($clr-yellow-list, \"a700\");\n\n\n//\n// amber\n//\n\n$clr-amber-list: (\n \"base\": #ffc107,\n \"50\": #fff8e1,\n \"100\": #ffecb3,\n \"200\": #ffe082,\n \"300\": #ffd54f,\n \"400\": #ffca28,\n \"500\": #ffc107,\n \"600\": #ffb300,\n \"700\": #ffa000,\n \"800\": #ff8f00,\n \"900\": #ff6f00,\n \"a100\": #ffe57f,\n \"a200\": #ffd740,\n \"a400\": #ffc400,\n \"a700\": #ffab00\n);\n\n$clr-amber: map-get($clr-amber-list, \"base\");\n\n$clr-amber-50: map-get($clr-amber-list, \"50\");\n$clr-amber-100: map-get($clr-amber-list, \"100\");\n$clr-amber-200: map-get($clr-amber-list, \"200\");\n$clr-amber-300: map-get($clr-amber-list, \"300\");\n$clr-amber-400: map-get($clr-amber-list, \"400\");\n$clr-amber-500: map-get($clr-amber-list, \"500\");\n$clr-amber-600: map-get($clr-amber-list, \"600\");\n$clr-amber-700: map-get($clr-amber-list, \"700\");\n$clr-amber-800: map-get($clr-amber-list, \"800\");\n$clr-amber-900: map-get($clr-amber-list, \"900\");\n$clr-amber-a100: map-get($clr-amber-list, \"a100\");\n$clr-amber-a200: map-get($clr-amber-list, \"a200\");\n$clr-amber-a400: map-get($clr-amber-list, \"a400\");\n$clr-amber-a700: map-get($clr-amber-list, \"a700\");\n\n\n//\n// Orange\n//\n\n$clr-orange-list: (\n \"base\": #ff9800,\n \"50\": #fff3e0,\n \"100\": #ffe0b2,\n \"200\": #ffcc80,\n \"300\": #ffb74d,\n \"400\": #ffa726,\n \"500\": #ff9800,\n \"600\": #fb8c00,\n \"700\": #f57c00,\n \"800\": #ef6c00,\n \"900\": #e65100,\n \"a100\": #ffd180,\n \"a200\": #ffab40,\n \"a400\": #ff9100,\n \"a700\": #ff6d00\n);\n\n$clr-orange: map-get($clr-orange-list, \"base\");\n\n$clr-orange-50: map-get($clr-orange-list, \"50\");\n$clr-orange-100: map-get($clr-orange-list, \"100\");\n$clr-orange-200: map-get($clr-orange-list, \"200\");\n$clr-orange-300: map-get($clr-orange-list, \"300\");\n$clr-orange-400: map-get($clr-orange-list, \"400\");\n$clr-orange-500: map-get($clr-orange-list, \"500\");\n$clr-orange-600: map-get($clr-orange-list, \"600\");\n$clr-orange-700: map-get($clr-orange-list, \"700\");\n$clr-orange-800: map-get($clr-orange-list, \"800\");\n$clr-orange-900: map-get($clr-orange-list, \"900\");\n$clr-orange-a100: map-get($clr-orange-list, \"a100\");\n$clr-orange-a200: map-get($clr-orange-list, \"a200\");\n$clr-orange-a400: map-get($clr-orange-list, \"a400\");\n$clr-orange-a700: map-get($clr-orange-list, \"a700\");\n\n\n//\n// Deep orange\n//\n\n$clr-deep-orange-list: (\n \"base\": #ff5722,\n \"50\": #fbe9e7,\n \"100\": #ffccbc,\n \"200\": #ffab91,\n \"300\": #ff8a65,\n \"400\": #ff7043,\n \"500\": #ff5722,\n \"600\": #f4511e,\n \"700\": #e64a19,\n 
\"800\": #d84315,\n \"900\": #bf360c,\n \"a100\": #ff9e80,\n \"a200\": #ff6e40,\n \"a400\": #ff3d00,\n \"a700\": #dd2c00\n);\n\n$clr-deep-orange: map-get($clr-deep-orange-list, \"base\");\n\n$clr-deep-orange-50: map-get($clr-deep-orange-list, \"50\");\n$clr-deep-orange-100: map-get($clr-deep-orange-list, \"100\");\n$clr-deep-orange-200: map-get($clr-deep-orange-list, \"200\");\n$clr-deep-orange-300: map-get($clr-deep-orange-list, \"300\");\n$clr-deep-orange-400: map-get($clr-deep-orange-list, \"400\");\n$clr-deep-orange-500: map-get($clr-deep-orange-list, \"500\");\n$clr-deep-orange-600: map-get($clr-deep-orange-list, \"600\");\n$clr-deep-orange-700: map-get($clr-deep-orange-list, \"700\");\n$clr-deep-orange-800: map-get($clr-deep-orange-list, \"800\");\n$clr-deep-orange-900: map-get($clr-deep-orange-list, \"900\");\n$clr-deep-orange-a100: map-get($clr-deep-orange-list, \"a100\");\n$clr-deep-orange-a200: map-get($clr-deep-orange-list, \"a200\");\n$clr-deep-orange-a400: map-get($clr-deep-orange-list, \"a400\");\n$clr-deep-orange-a700: map-get($clr-deep-orange-list, \"a700\");\n\n\n//\n// Brown\n//\n\n$clr-brown-list: (\n \"base\": #795548,\n \"50\": #efebe9,\n \"100\": #d7ccc8,\n \"200\": #bcaaa4,\n \"300\": #a1887f,\n \"400\": #8d6e63,\n \"500\": #795548,\n \"600\": #6d4c41,\n \"700\": #5d4037,\n \"800\": #4e342e,\n \"900\": #3e2723,\n);\n\n$clr-brown: map-get($clr-brown-list, \"base\");\n\n$clr-brown-50: map-get($clr-brown-list, \"50\");\n$clr-brown-100: map-get($clr-brown-list, \"100\");\n$clr-brown-200: map-get($clr-brown-list, \"200\");\n$clr-brown-300: map-get($clr-brown-list, \"300\");\n$clr-brown-400: map-get($clr-brown-list, \"400\");\n$clr-brown-500: map-get($clr-brown-list, \"500\");\n$clr-brown-600: map-get($clr-brown-list, \"600\");\n$clr-brown-700: map-get($clr-brown-list, \"700\");\n$clr-brown-800: map-get($clr-brown-list, \"800\");\n$clr-brown-900: map-get($clr-brown-list, \"900\");\n\n\n//\n// Grey\n//\n\n$clr-grey-list: (\n \"base\": #9e9e9e,\n \"50\": #fafafa,\n \"100\": #f5f5f5,\n \"200\": #eeeeee,\n \"300\": #e0e0e0,\n \"400\": #bdbdbd,\n \"500\": #9e9e9e,\n \"600\": #757575,\n \"700\": #616161,\n \"800\": #424242,\n \"900\": #212121,\n);\n\n$clr-grey: map-get($clr-grey-list, \"base\");\n\n$clr-grey-50: map-get($clr-grey-list, \"50\");\n$clr-grey-100: map-get($clr-grey-list, \"100\");\n$clr-grey-200: map-get($clr-grey-list, \"200\");\n$clr-grey-300: map-get($clr-grey-list, \"300\");\n$clr-grey-400: map-get($clr-grey-list, \"400\");\n$clr-grey-500: map-get($clr-grey-list, \"500\");\n$clr-grey-600: map-get($clr-grey-list, \"600\");\n$clr-grey-700: map-get($clr-grey-list, \"700\");\n$clr-grey-800: map-get($clr-grey-list, \"800\");\n$clr-grey-900: map-get($clr-grey-list, \"900\");\n\n\n//\n// Blue grey\n//\n\n$clr-blue-grey-list: (\n \"base\": #607d8b,\n \"50\": #eceff1,\n \"100\": #cfd8dc,\n \"200\": #b0bec5,\n \"300\": #90a4ae,\n \"400\": #78909c,\n \"500\": #607d8b,\n \"600\": #546e7a,\n \"700\": #455a64,\n \"800\": #37474f,\n \"900\": #263238,\n);\n\n$clr-blue-grey: map-get($clr-blue-grey-list, \"base\");\n\n$clr-blue-grey-50: map-get($clr-blue-grey-list, \"50\");\n$clr-blue-grey-100: map-get($clr-blue-grey-list, \"100\");\n$clr-blue-grey-200: map-get($clr-blue-grey-list, \"200\");\n$clr-blue-grey-300: map-get($clr-blue-grey-list, \"300\");\n$clr-blue-grey-400: map-get($clr-blue-grey-list, \"400\");\n$clr-blue-grey-500: map-get($clr-blue-grey-list, \"500\");\n$clr-blue-grey-600: map-get($clr-blue-grey-list, \"600\");\n$clr-blue-grey-700: map-get($clr-blue-grey-list, 
\"700\");\n$clr-blue-grey-800: map-get($clr-blue-grey-list, \"800\");\n$clr-blue-grey-900: map-get($clr-blue-grey-list, \"900\");\n\n\n//\n// Black\n//\n\n$clr-black-list: (\n \"base\": #000\n);\n\n$clr-black: map-get($clr-black-list, \"base\");\n\n\n//\n// White\n//\n\n$clr-white-list: (\n \"base\": #fff\n);\n\n$clr-white: map-get($clr-white-list, \"base\");\n\n\n//\n// List for all Colors for looping\n//\n\n$clr-list-all: (\n \"red\": $clr-red-list,\n \"pink\": $clr-pink-list,\n \"purple\": $clr-purple-list,\n \"deep-purple\": $clr-deep-purple-list,\n \"indigo\": $clr-indigo-list,\n \"blue\": $clr-blue-list,\n \"light-blue\": $clr-light-blue-list,\n \"cyan\": $clr-cyan-list,\n \"teal\": $clr-teal-list,\n \"green\": $clr-green-list,\n \"light-green\": $clr-light-green-list,\n \"lime\": $clr-lime-list,\n \"yellow\": $clr-yellow-list,\n \"amber\": $clr-amber-list,\n \"orange\": $clr-orange-list,\n \"deep-orange\": $clr-deep-orange-list,\n \"brown\": $clr-brown-list,\n \"grey\": $clr-grey-list,\n \"blue-grey\": $clr-blue-grey-list,\n \"black\": $clr-black-list,\n \"white\": $clr-white-list\n);\n\n\n//\n// Typography\n//\n\n$clr-ui-display-4: $clr-grey-600;\n$clr-ui-display-3: $clr-grey-600;\n$clr-ui-display-2: $clr-grey-600;\n$clr-ui-display-1: $clr-grey-600;\n$clr-ui-headline: $clr-grey-900;\n$clr-ui-title: $clr-grey-900;\n$clr-ui-subhead-1: $clr-grey-900;\n$clr-ui-body-2: $clr-grey-900;\n$clr-ui-body-1: $clr-grey-900;\n$clr-ui-caption: $clr-grey-600;\n$clr-ui-menu: $clr-grey-900;\n$clr-ui-button: $clr-grey-900;\n"],"sourceRoot":""} \ No newline at end of file diff --git a/assets/stylesheets/palette.c8acc6db.min.css b/assets/stylesheets/palette.c8acc6db.min.css new file mode 100644 index 00000000..70231c19 --- /dev/null +++ b/assets/stylesheets/palette.c8acc6db.min.css @@ -0,0 +1,3 @@ +[data-md-color-primary=red]{--md-primary-fg-color: hsla(1deg, 83%, 63%, 1);--md-primary-fg-color--light: hsla(0deg, 73%, 77%, 1);--md-primary-fg-color--dark: hsla(1deg, 77%, 55%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=pink]{--md-primary-fg-color: hsla(340deg, 82%, 52%, 1);--md-primary-fg-color--light: hsla(340deg, 82%, 76%, 1);--md-primary-fg-color--dark: hsla(336deg, 78%, 43%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=purple]{--md-primary-fg-color: hsla(291deg, 47%, 51%, 1);--md-primary-fg-color--light: hsla(291deg, 47%, 71%, 1);--md-primary-fg-color--dark: hsla(287deg, 65%, 40%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=deep-purple]{--md-primary-fg-color: hsla(262deg, 47%, 55%, 1);--md-primary-fg-color--light: hsla(261deg, 46%, 74%, 1);--md-primary-fg-color--dark: hsla(262deg, 52%, 47%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=indigo]{--md-primary-fg-color: hsla(231deg, 48%, 48%, 1);--md-primary-fg-color--light: hsla(231deg, 44%, 74%, 1);--md-primary-fg-color--dark: hsla(232deg, 54%, 41%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=blue]{--md-primary-fg-color: hsla(207deg, 90%, 54%, 1);--md-primary-fg-color--light: hsla(207deg, 90%, 77%, 1);--md-primary-fg-color--dark: hsla(210deg, 79%, 46%, 1);--md-primary-bg-color: 
var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=light-blue]{--md-primary-fg-color: hsla(199deg, 98%, 48%, 1);--md-primary-fg-color--light: hsla(199deg, 92%, 74%, 1);--md-primary-fg-color--dark: hsla(201deg, 98%, 41%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=cyan]{--md-primary-fg-color: hsla(187deg, 100%, 42%, 1);--md-primary-fg-color--light: hsla(187deg, 72%, 71%, 1);--md-primary-fg-color--dark: hsla(186deg, 100%, 33%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=teal]{--md-primary-fg-color: hsla(174deg, 100%, 29%, 1);--md-primary-fg-color--light: hsla(174deg, 42%, 65%, 1);--md-primary-fg-color--dark: hsla(173deg, 100%, 24%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=green]{--md-primary-fg-color: hsla(122deg, 39%, 49%, 1);--md-primary-fg-color--light: hsla(122deg, 37%, 74%, 1);--md-primary-fg-color--dark: hsla(123deg, 43%, 39%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=light-green]{--md-primary-fg-color: hsla(88deg, 50%, 53%, 1);--md-primary-fg-color--light: hsla(88deg, 50%, 76%, 1);--md-primary-fg-color--dark: hsla(92deg, 48%, 42%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=lime]{--md-primary-fg-color: hsla(66deg, 70%, 54%, 1);--md-primary-fg-color--light: hsla(66deg, 71%, 77%, 1);--md-primary-fg-color--dark: hsla(62deg, 61%, 44%, 1);--md-primary-bg-color: var(--md-default-fg-color);--md-primary-bg-color--light: var(--md-default-fg-color--light)}[data-md-color-primary=yellow]{--md-primary-fg-color: hsla(54deg, 100%, 62%, 1);--md-primary-fg-color--light: hsla(54deg, 100%, 81%, 1);--md-primary-fg-color--dark: hsla(43deg, 96%, 58%, 1);--md-primary-bg-color: var(--md-default-fg-color);--md-primary-bg-color--light: var(--md-default-fg-color--light)}[data-md-color-primary=amber]{--md-primary-fg-color: hsla(45deg, 100%, 51%, 1);--md-primary-fg-color--light: hsla(45deg, 100%, 75%, 1);--md-primary-fg-color--dark: hsla(38deg, 100%, 50%, 1);--md-primary-bg-color: var(--md-default-fg-color);--md-primary-bg-color--light: var(--md-default-fg-color--light)}[data-md-color-primary=orange]{--md-primary-fg-color: hsla(36deg, 100%, 57%, 1);--md-primary-fg-color--light: hsla(36deg, 100%, 75%, 1);--md-primary-fg-color--dark: hsla(33deg, 100%, 49%, 1);--md-primary-bg-color: var(--md-default-fg-color);--md-primary-bg-color--light: var(--md-default-fg-color--light)}[data-md-color-primary=deep-orange]{--md-primary-fg-color: hsla(14deg, 100%, 63%, 1);--md-primary-fg-color--light: hsla(14deg, 100%, 78%, 1);--md-primary-fg-color--dark: hsla(14deg, 91%, 54%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=brown]{--md-primary-fg-color: hsla(16deg, 25%, 38%, 1);--md-primary-fg-color--light: hsla(15deg, 15%, 69%, 1);--md-primary-fg-color--dark: hsla(14deg, 26%, 29%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=grey]{--md-primary-fg-color: hsla(0deg, 0%, 46%, 1);--md-primary-fg-color--light: 
hsla(0deg, 0%, 93%, 1);--md-primary-fg-color--dark: hsla(0deg, 0%, 38%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=blue-grey]{--md-primary-fg-color: hsla(199deg, 18%, 40%, 1);--md-primary-fg-color--light: hsla(200deg, 15%, 73%, 1);--md-primary-fg-color--dark: hsla(199deg, 18%, 33%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=white]{--md-primary-fg-color: hsla(231deg, 48%, 48%, 1);--md-primary-fg-color--light: hsla(230deg, 44%, 64%, 1);--md-primary-fg-color--dark: hsla(232deg, 54%, 41%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=white] .md-header{color:var(--md-default-fg-color);background-color:var(--md-default-bg-color)}[data-md-color-primary=white] .md-hero{color:var(--md-default-fg-color);background-color:var(--md-default-bg-color)}[data-md-color-primary=white] .md-hero--expand{border-bottom:.05rem solid var(--md-default-fg-color--lightest)}@media screen and (max-width: 59.9375em){[data-md-color-primary=white] .md-nav__source{color:var(--md-default-fg-color);background-color:var(--md-default-fg-color--lightest)}}@media screen and (min-width: 60em){[data-md-color-primary=white] .md-search__input{background-color:var(--md-default-fg-color--lightest)}[data-md-color-primary=white] .md-search__input+.md-search__icon{color:var(--md-default-fg-color)}[data-md-color-primary=white] .md-search__input::-webkit-input-placeholder{color:var(--md-default-fg-color--light)}[data-md-color-primary=white] .md-search__input::-moz-placeholder{color:var(--md-default-fg-color--light)}[data-md-color-primary=white] .md-search__input::-ms-input-placeholder{color:var(--md-default-fg-color--light)}[data-md-color-primary=white] .md-search__input::placeholder{color:var(--md-default-fg-color--light)}[data-md-color-primary=white] .md-search__input:hover{background-color:var(--md-default-fg-color--lighter)}}@media screen and (max-width: 76.1875em){html [data-md-color-primary=white] .md-nav--primary .md-nav__title[for=__drawer]{color:var(--md-default-fg-color);background-color:var(--md-default-bg-color)}[data-md-color-primary=white] .md-hero{border-bottom:.05rem solid var(--md-default-fg-color--lightest)}}@media screen and (min-width: 76.25em){[data-md-color-primary=white] .md-tabs{color:var(--md-default-fg-color);background-color:var(--md-default-bg-color);border-bottom:.05rem solid var(--md-default-fg-color--lightest)}}[data-md-color-primary=black]{--md-primary-fg-color: hsla(231deg, 48%, 48%, 1);--md-primary-fg-color--light: hsla(230deg, 44%, 64%, 1);--md-primary-fg-color--dark: hsla(232deg, 54%, 41%, 1);--md-primary-bg-color: var(--md-default-bg-color);--md-primary-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-primary=black] .md-header{background-color:#000}[data-md-color-primary=black] .md-hero{background-color:#000}@media screen and (max-width: 59.9375em){[data-md-color-primary=black] .md-nav__source{background-color:var(--md-default-fg-color)}}@media screen and (min-width: 60em){[data-md-color-primary=black] .md-search__input{background-color:var(--md-default-bg-color--lighter)}[data-md-color-primary=black] .md-search__input:hover{background-color:var(--md-default-bg-color--lightest)}}@media screen and (max-width: 76.1875em){html [data-md-color-primary=black] .md-nav--primary 
.md-nav__title[for=__drawer]{background-color:#000}}@media screen and (min-width: 76.25em){[data-md-color-primary=black] .md-tabs{background-color:#000}}[data-md-color-accent=red]{--md-accent-fg-color: hsla(348deg, 100%, 55%, 1);--md-accent-fg-color--transparent: hsla(348deg, 100%, 55%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-accent=pink]{--md-accent-fg-color: hsla(339deg, 100%, 48%, 1);--md-accent-fg-color--transparent: hsla(339deg, 100%, 48%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-accent=purple]{--md-accent-fg-color: hsla(291deg, 96%, 62%, 1);--md-accent-fg-color--transparent: hsla(291deg, 96%, 62%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-accent=deep-purple]{--md-accent-fg-color: hsla(256deg, 100%, 65%, 1);--md-accent-fg-color--transparent: hsla(256deg, 100%, 65%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-accent=indigo]{--md-accent-fg-color: hsla(231deg, 99%, 66%, 1);--md-accent-fg-color--transparent: hsla(231deg, 99%, 66%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-accent=blue]{--md-accent-fg-color: hsla(218deg, 100%, 63%, 1);--md-accent-fg-color--transparent: hsla(218deg, 100%, 63%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-accent=light-blue]{--md-accent-fg-color: hsla(203deg, 100%, 46%, 1);--md-accent-fg-color--transparent: hsla(203deg, 100%, 46%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-accent=cyan]{--md-accent-fg-color: hsla(188deg, 100%, 42%, 1);--md-accent-fg-color--transparent: hsla(188deg, 100%, 42%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-accent=teal]{--md-accent-fg-color: hsla(172deg, 100%, 37%, 1);--md-accent-fg-color--transparent: hsla(172deg, 100%, 37%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-accent=green]{--md-accent-fg-color: hsla(145deg, 100%, 39%, 1);--md-accent-fg-color--transparent: hsla(145deg, 100%, 39%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-accent=light-green]{--md-accent-fg-color: hsla(97deg, 81%, 48%, 1);--md-accent-fg-color--transparent: hsla(97deg, 81%, 48%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)}[data-md-color-accent=lime]{--md-accent-fg-color: hsla(75deg, 100%, 46%, 1);--md-accent-fg-color--transparent: hsla(75deg, 100%, 46%, 0.1);--md-accent-bg-color: var(--md-default-fg-color);--md-accent-bg-color--light: var(--md-default-fg-color--light)}[data-md-color-accent=yellow]{--md-accent-fg-color: hsla(50deg, 100%, 50%, 1);--md-accent-fg-color--transparent: hsla(50deg, 100%, 50%, 0.1);--md-accent-bg-color: var(--md-default-fg-color);--md-accent-bg-color--light: var(--md-default-fg-color--light)}[data-md-color-accent=amber]{--md-accent-fg-color: hsla(40deg, 100%, 50%, 
1);--md-accent-fg-color--transparent: hsla(40deg, 100%, 50%, 0.1);--md-accent-bg-color: var(--md-default-fg-color);--md-accent-bg-color--light: var(--md-default-fg-color--light)}[data-md-color-accent=orange]{--md-accent-fg-color: hsla(34deg, 100%, 50%, 1);--md-accent-fg-color--transparent: hsla(34deg, 100%, 50%, 0.1);--md-accent-bg-color: var(--md-default-fg-color);--md-accent-bg-color--light: var(--md-default-fg-color--light)}[data-md-color-accent=deep-orange]{--md-accent-fg-color: hsla(14deg, 100%, 63%, 1);--md-accent-fg-color--transparent: hsla(14deg, 100%, 63%, 0.1);--md-accent-bg-color: var(--md-default-bg-color);--md-accent-bg-color--light: var(--md-default-bg-color--light)} + +/*# sourceMappingURL=palette.c8acc6db.min.css.map*/ \ No newline at end of file diff --git a/assets/stylesheets/palette.c8acc6db.min.css.map b/assets/stylesheets/palette.c8acc6db.min.css.map new file mode 100644 index 00000000..985c8ae9 --- /dev/null +++ b/assets/stylesheets/palette.c8acc6db.min.css.map @@ -0,0 +1 @@ +{"version":3,"sources":["webpack:///./src/assets/stylesheets/palette.scss","webpack:///./src/assets/stylesheets/utilities/_break.scss"],"names":[],"mappings":"AAiEE,4BACE,+CACA,sDACA,qDAOE,kDACA,gEAXJ,6BACE,iDACA,wDACA,uDAOE,kDACA,gEAXJ,+BACE,iDACA,wDACA,uDAOE,kDACA,gEAXJ,oCACE,iDACA,wDACA,uDAOE,kDACA,gEAXJ,+BACE,iDACA,wDACA,uDAOE,kDACA,gEAXJ,6BACE,iDACA,wDACA,uDAOE,kDACA,gEAXJ,mCACE,iDACA,wDACA,uDAOE,kDACA,gEAXJ,6BACE,kDACA,wDACA,wDAOE,kDACA,gEAXJ,6BACE,kDACA,wDACA,wDAOE,kDACA,gEAXJ,8BACE,iDACA,wDACA,uDAOE,kDACA,gEAXJ,oCACE,gDACA,uDACA,sDAOE,kDACA,gEAXJ,6BACE,gDACA,uDACA,sDAIE,kDACA,gEARJ,+BACE,iDACA,wDACA,sDAIE,kDACA,gEARJ,8BACE,iDACA,wDACA,uDAIE,kDACA,gEARJ,+BACE,iDACA,wDACA,uDAIE,kDACA,gEARJ,oCACE,iDACA,wDACA,sDAOE,kDACA,gEAXJ,8BACE,gDACA,uDACA,sDAOE,kDACA,gEAXJ,6BACE,8CACA,qDACA,oDAOE,kDACA,gEAXJ,kCACE,iDACA,wDACA,uDAOE,kDACA,gEAUN,8BACE,iDACA,wDACA,uDACA,kDACA,gEAGA,yCACE,iCACA,4CAIF,uCACE,iCACA,4CAGA,+CACE,gECmGF,yCD3FA,8CACE,iCACA,uDCuEF,oCD/DA,gDACE,sDAGA,iEACE,iCAIF,2EACE,wCADF,kEACE,wCADF,uEACE,wCADF,6DACE,wCAIF,sDACE,sDCkEJ,yCDzDA,iFACE,iCACA,4CAIF,uCACE,iECgCF,uCDxBA,uCACE,iCACA,4CACA,iEAUN,8BACE,iDACA,wDACA,uDACA,kDACA,gEAGA,yCACE,sBAIF,uCACE,sBCeA,yCDRA,8CACE,6CCXF,oCDmBA,gDACE,qDAGA,sDACE,uDCNJ,yCDeA,iFACE,uBClCF,uCD0CA,uCACE,uBA6BJ,2BACE,iDACA,gEAOE,iDACA,+DAVJ,4BACE,iDACA,gEAOE,iDACA,+DAVJ,8BACE,gDACA,+DAOE,iDACA,+DAVJ,mCACE,iDACA,gEAOE,iDACA,+DAVJ,8BACE,gDACA,+DAOE,iDACA,+DAVJ,4BACE,iDACA,gEAOE,iDACA,+DAVJ,kCACE,iDACA,gEAOE,iDACA,+DAVJ,4BACE,iDACA,gEAOE,iDACA,+DAVJ,4BACE,iDACA,gEAOE,iDACA,+DAVJ,6BACE,iDACA,gEAOE,iDACA,+DAVJ,mCACE,+CACA,8DAOE,iDACA,+DAVJ,4BACE,gDACA,+DAIE,iDACA,+DAPJ,8BACE,gDACA,+DAIE,iDACA,+DAPJ,6BACE,gDACA,+DAIE,iDACA,+DAPJ,8BACE,gDACA,+DAIE,iDACA,+DAPJ,mCACE,gDACA,+DAOE,iDACA,+D","file":"assets/stylesheets/palette.c8acc6db.min.css","sourcesContent":["////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT 
WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Dependencies\n// ----------------------------------------------------------------------------\n\n@import \"modularscale\";\n@import \"material-color\";\n\n// ----------------------------------------------------------------------------\n// Local imports\n// ----------------------------------------------------------------------------\n\n@import \"utilities/break\";\n@import \"utilities/convert\";\n\n@import \"config\";\n\n// ----------------------------------------------------------------------------\n// Rules: primary colors\n// ----------------------------------------------------------------------------\n\n@each $name, $colors in (\n \"red\": $clr-red-400 $clr-red-200 $clr-red-600,\n \"pink\": $clr-pink-500 $clr-pink-200 $clr-pink-700,\n \"purple\": $clr-purple-400 $clr-purple-200 $clr-purple-600,\n \"deep-purple\": $clr-deep-purple-400 $clr-deep-purple-200 $clr-deep-purple-500,\n \"indigo\": $clr-indigo-500 $clr-indigo-200 $clr-indigo-700,\n \"blue\": $clr-blue-500 $clr-blue-200 $clr-blue-700,\n \"light-blue\": $clr-light-blue-500 $clr-light-blue-200 $clr-light-blue-700,\n \"cyan\": $clr-cyan-500 $clr-cyan-200 $clr-cyan-700,\n \"teal\": $clr-teal-500 $clr-teal-200 $clr-teal-700,\n \"green\": $clr-green-500 $clr-green-200 $clr-green-700,\n \"light-green\": $clr-light-green-500 $clr-light-green-200 $clr-light-green-700,\n \"lime\": $clr-lime-500 $clr-lime-200 $clr-lime-700,\n \"yellow\": $clr-yellow-500 $clr-yellow-200 $clr-yellow-700,\n \"amber\": $clr-amber-500 $clr-amber-200 $clr-amber-700,\n \"orange\": $clr-orange-400 $clr-orange-200 $clr-orange-600,\n \"deep-orange\": $clr-deep-orange-400 $clr-deep-orange-200 $clr-deep-orange-600,\n \"brown\": $clr-brown-500 $clr-brown-200 $clr-brown-700,\n \"grey\": $clr-grey-600 $clr-grey-200 $clr-grey-700,\n \"blue-grey\": $clr-blue-grey-600 $clr-blue-grey-200 $clr-blue-grey-700\n) {\n\n // Color palette\n [data-md-color-primary=\"#{$name}\"] {\n --md-primary-fg-color: hsla(#{hex2hsl(nth($colors, 1))}, 1);\n --md-primary-fg-color--light: hsla(#{hex2hsl(nth($colors, 2))}, 1);\n --md-primary-fg-color--dark: hsla(#{hex2hsl(nth($colors, 3))}, 1);\n\n // Inverted text for lighter shades\n @if index(\"lime\" \"yellow\" \"amber\" \"orange\", $name) {\n --md-primary-bg-color: var(--md-default-fg-color);\n --md-primary-bg-color--light: var(--md-default-fg-color--light);\n } @else {\n --md-primary-bg-color: var(--md-default-bg-color);\n --md-primary-bg-color--light: var(--md-default-bg-color--light);\n }\n }\n}\n\n// ----------------------------------------------------------------------------\n// Rules: white\n// ----------------------------------------------------------------------------\n\n// Color palette\n[data-md-color-primary=\"white\"] {\n --md-primary-fg-color: hsla(#{hex2hsl($clr-indigo-500)}, 1);\n --md-primary-fg-color--light: hsla(#{hex2hsl($clr-indigo-300)}, 1);\n --md-primary-fg-color--dark: hsla(#{hex2hsl($clr-indigo-700)}, 1);\n --md-primary-bg-color: var(--md-default-bg-color);\n --md-primary-bg-color--light: 
var(--md-default-bg-color--light);\n\n // Application header (stays always on top)\n .md-header {\n color: var(--md-default-fg-color);\n background-color: var(--md-default-bg-color);\n }\n\n // Hero teaser\n .md-hero {\n color: var(--md-default-fg-color);\n background-color: var(--md-default-bg-color);\n\n // Add a border if there are no tabs\n &--expand {\n border-bottom: px2rem(1px) solid var(--md-default-fg-color--lightest);\n }\n }\n\n // [tablet portrait -]: Layered navigation\n @include break-to-device(tablet portrait) {\n\n // Repository containing source\n .md-nav__source {\n color: var(--md-default-fg-color);\n background-color: var(--md-default-fg-color--lightest);\n }\n }\n\n // [tablet portrait +]: Change color of search input\n @include break-from-device(tablet landscape) {\n\n // Search input\n .md-search__input {\n background-color: var(--md-default-fg-color--lightest);\n\n // Icon color\n + .md-search__icon {\n color: var(--md-default-fg-color);\n }\n\n // Placeholder color\n &::placeholder {\n color: var(--md-default-fg-color--light);\n }\n\n // Hovered search field\n &:hover {\n background-color: var(--md-default-fg-color--lighter);\n }\n }\n }\n\n // [tablet -]: Layered navigation\n @include break-to-device(tablet) {\n\n // Site title in main navigation\n html & .md-nav--primary .md-nav__title[for=\"__drawer\"] {\n color: var(--md-default-fg-color);\n background-color: var(--md-default-bg-color);\n }\n\n // Hero teaser\n .md-hero {\n border-bottom: px2rem(1px) solid var(--md-default-fg-color--lightest);\n }\n }\n\n // [screen +]: Set background color for tabs\n @include break-from-device(screen) {\n\n // Tabs with outline\n .md-tabs {\n color: var(--md-default-fg-color);\n background-color: var(--md-default-bg-color);\n border-bottom: px2rem(1px) solid var(--md-default-fg-color--lightest);\n }\n }\n}\n\n// ----------------------------------------------------------------------------\n// Rules: black\n// ----------------------------------------------------------------------------\n\n// Color palette\n[data-md-color-primary=\"black\"] {\n --md-primary-fg-color: hsla(#{hex2hsl($clr-indigo-500)}, 1);\n --md-primary-fg-color--light: hsla(#{hex2hsl($clr-indigo-300)}, 1);\n --md-primary-fg-color--dark: hsla(#{hex2hsl($clr-indigo-700)}, 1);\n --md-primary-bg-color: var(--md-default-bg-color);\n --md-primary-bg-color--light: var(--md-default-bg-color--light);\n\n // Application header (stays always on top)\n .md-header {\n background-color: hsla(0, 0%, 0%, 1);\n }\n\n // Hero teaser\n .md-hero {\n background-color: hsla(0, 0%, 0%, 1);\n }\n\n // [tablet portrait -]: Layered navigation\n @include break-to-device(tablet portrait) {\n\n // Repository containing source\n .md-nav__source {\n background-color: var(--md-default-fg-color);\n }\n }\n\n // [tablet landscape +]: Header-embedded search\n @include break-from-device(tablet landscape) {\n\n // Search input\n .md-search__input {\n background-color: var(--md-default-bg-color--lighter);\n\n // Hovered search field\n &:hover {\n background-color: var(--md-default-bg-color--lightest);\n }\n }\n }\n\n // [tablet -]: Layered navigation\n @include break-to-device(tablet) {\n\n // Site title in main navigation\n html & .md-nav--primary .md-nav__title[for=\"__drawer\"] {\n background-color: hsla(0, 0%, 0%, 1);\n }\n }\n\n // [screen +]: Set background color for tabs\n @include break-from-device(screen) {\n\n // Tabs with outline\n .md-tabs {\n background-color: hsla(0, 0%, 0%, 1);\n }\n }\n}\n\n// 
----------------------------------------------------------------------------\n// Rules: accent colors\n// ----------------------------------------------------------------------------\n\n@each $name, $color in (\n \"red\": $clr-red-a400,\n \"pink\": $clr-pink-a400,\n \"purple\": $clr-purple-a200,\n \"deep-purple\": $clr-deep-purple-a200,\n \"indigo\": $clr-indigo-a200,\n \"blue\": $clr-blue-a200,\n \"light-blue\": $clr-light-blue-a700,\n \"cyan\": $clr-cyan-a700,\n \"teal\": $clr-teal-a700,\n \"green\": $clr-green-a700,\n \"light-green\": $clr-light-green-a700,\n \"lime\": $clr-lime-a700,\n \"yellow\": $clr-yellow-a700,\n \"amber\": $clr-amber-a700,\n \"orange\": $clr-orange-a400,\n \"deep-orange\": $clr-deep-orange-a200\n) {\n\n // Color palette\n [data-md-color-accent=\"#{$name}\"] {\n --md-accent-fg-color: hsla(#{hex2hsl($color)}, 1);\n --md-accent-fg-color--transparent: hsla(#{hex2hsl($color)}, 0.1);\n\n // Inverted text for lighter shades\n @if index(\"lime\" \"yellow\" \"amber\" \"orange\", $name) {\n --md-accent-bg-color: var(--md-default-fg-color);\n --md-accent-bg-color--light: var(--md-default-fg-color--light);\n } @else {\n --md-accent-bg-color: var(--md-default-bg-color);\n --md-accent-bg-color--light: var(--md-default-bg-color--light);\n }\n }\n}\n","////\n/// Copyright (c) 2016-2020 Martin Donath \n///\n/// Permission is hereby granted, free of charge, to any person obtaining a\n/// copy of this software and associated documentation files (the \"Software\"),\n/// to deal in the Software without restriction, including without limitation\n/// the rights to use, copy, modify, merge, publish, distribute, sublicense,\n/// and/or sell copies of the Software, and to permit persons to whom the\n/// Software is furnished to do so, subject to the following conditions:\n///\n/// The above copyright notice and this permission notice shall be included in\n/// all copies or substantial portions of the Software.\n///\n/// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n/// FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL\n/// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n/// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n/// DEALINGS\n////\n\n// ----------------------------------------------------------------------------\n// Variables\n// ----------------------------------------------------------------------------\n\n///\n/// Device-specific breakpoints\n///\n/// @example\n/// $break-devices: (\n/// mobile: (\n/// portrait: 220px 479px,\n/// landscape: 480px 719px\n/// ),\n/// tablet: (\n/// portrait: 720px 959px,\n/// landscape: 960px 1219px\n/// ),\n/// screen: (\n/// small: 1220px 1599px,\n/// medium: 1600px 1999px,\n/// large: 2000px\n/// )\n/// );\n///\n$break-devices: () !default;\n\n// ----------------------------------------------------------------------------\n// Helpers\n// ----------------------------------------------------------------------------\n\n///\n/// Choose minimum and maximum device widths\n///\n@function break-select-min-max($devices) {\n $min: 1000000;\n $max: 0;\n @each $key, $value in $devices {\n @while type-of($value) == map {\n $value: break-select-min-max($value);\n }\n @if type-of($value) == list {\n @each $number in $value {\n @if type-of($number) == number {\n $min: min($number, $min);\n @if $max != null {\n $max: max($number, $max);\n }\n } @else {\n @error \"Invalid number: #{$number}\";\n }\n }\n } @else if type-of($value) == number {\n $min: min($value, $min);\n $max: null;\n } @else {\n @error \"Invalid value: #{$value}\";\n }\n }\n @return $min, $max;\n}\n\n///\n/// Select minimum and maximum widths for a device breakpoint\n///\n@function break-select-device($device) {\n $current: $break-devices;\n @for $n from 1 through length($device) {\n @if type-of($current) == map {\n $current: map-get($current, nth($device, $n));\n } @else {\n @error \"Invalid device map: #{$devices}\";\n }\n }\n @if type-of($current) == list or type-of($current) == number {\n $current: (default: $current);\n }\n @return break-select-min-max($current);\n}\n\n// ----------------------------------------------------------------------------\n// Mixins\n// ----------------------------------------------------------------------------\n\n///\n/// A minimum-maximum media query breakpoint\n///\n@mixin break-at($breakpoint) {\n @if type-of($breakpoint) == number {\n @media screen and (min-width: $breakpoint) {\n @content;\n }\n } @else if type-of($breakpoint) == list {\n $min: nth($breakpoint, 1);\n $max: nth($breakpoint, 2);\n @if type-of($min) == number and type-of($max) == number {\n @media screen and (min-width: $min) and (max-width: $max) {\n @content;\n }\n } @else {\n @error \"Invalid breakpoint: #{$breakpoint}\";\n }\n } @else {\n @error \"Invalid breakpoint: #{$breakpoint}\";\n }\n}\n\n///\n/// An orientation media query breakpoint\n///\n@mixin break-at-orientation($breakpoint) {\n @if type-of($breakpoint) == string {\n @media screen and (orientation: $breakpoint) {\n @content;\n }\n } @else {\n @error \"Invalid breakpoint: #{$breakpoint}\";\n }\n}\n\n///\n/// A maximum-aspect-ratio media query breakpoint\n///\n@mixin break-at-ratio($breakpoint) {\n @if type-of($breakpoint) == number {\n @media screen and (max-aspect-ratio: $breakpoint) {\n @content;\n }\n } @else {\n @error \"Invalid breakpoint: #{$breakpoint}\";\n }\n}\n\n///\n/// A minimum-maximum media query device breakpoint\n///\n@mixin break-at-device($device) {\n @if type-of($device) == string 
{\n $device: $device,;\n }\n @if type-of($device) == list {\n $breakpoint: break-select-device($device);\n @if nth($breakpoint, 2) != null {\n $min: nth($breakpoint, 1);\n $max: nth($breakpoint, 2);\n @media screen and (min-width: $min) and (max-width: $max) {\n @content;\n }\n } @else {\n @error \"Invalid device: #{$device}\";\n }\n } @else {\n @error \"Invalid device: #{$device}\";\n }\n}\n\n///\n/// A minimum media query device breakpoint\n///\n@mixin break-from-device($device) {\n @if type-of($device) == string {\n $device: $device,;\n }\n @if type-of($device) == list {\n $breakpoint: break-select-device($device);\n $min: nth($breakpoint, 1);\n @media screen and (min-width: $min) {\n @content;\n }\n } @else {\n @error \"Invalid device: #{$device}\";\n }\n}\n\n///\n/// A maximum media query device breakpoint\n///\n@mixin break-to-device($device) {\n @if type-of($device) == string {\n $device: $device,;\n }\n @if type-of($device) == list {\n $breakpoint: break-select-device($device);\n $max: nth($breakpoint, 2);\n @media screen and (max-width: $max) {\n @content;\n }\n } @else {\n @error \"Invalid device: #{$device}\";\n }\n}\n"],"sourceRoot":""} \ No newline at end of file diff --git a/aws/connection/index.html b/aws/connection/index.html new file mode 100644 index 00000000..44bc1a1f --- /dev/null +++ b/aws/connection/index.html @@ -0,0 +1,2813 @@ + + + + + + + + + + + + + + + + + + + + + + + + Connection to the AWS Cluster - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Connection to the AWS Cluster

    +

    Access to the frontend

    +

The ULHPC team will create specific access for the AWS cluster and send an SSH key to all project members in order to connect to the cluster frontend.

    +

Once your account has been enabled, you can connect to the cluster using SSH. Computers running Linux or macOS usually have an SSH client installed by default. +To create a direct connection, use the command below (substituting your specific cluster name if it differs from workshop-cluster).

    +

    ssh -i id_rsa username@ec2-52-5-167-162.compute-1.amazonaws.com 
    +
    +This will open a direct, non-graphical connection in the terminal. To exit the remote terminal session, use the standard Linux command “exit”.

    +

    Alternatively, you may want to save the configuration of this connection (and create an alias for it) by editing the file ~/.ssh/config (create it if it does not already exist) and adding the following entries:

    +
    Host aws-ulhpc-access
    +  User username
    +  Hostname ec2-52-5-167-162.compute-1.amazonaws.com 
    +  IdentityFile ~/.ssh/id_rsa
    +  IdentitiesOnly yes
    +
    + +
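With this configuration in place, you can open the connection simply through the alias (a minimal usage sketch, assuming the entry above was saved as-is):

ssh aws-ulhpc-access
+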

For additional information about SSH connections, please refer to the following page.

    +
    +

    Data storage

    +
      +
    • HOME storage is limited to 500GB for all users.
    • +
• The ULHPC team will also create for you a project directory located at /shared/projects/<project_id>. All members of the project will be able to read, write and execute in this directory (and only in this directory).
    • +
• We strongly advise you to use the project directory to store data and install software (see the sketch after this list).
    • +
    +
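As a quick sketch of this recommended workflow (here <project_id> is a placeholder for your actual project name):

cd /shared/projects/<project_id>
+mkdir -p data software      # keep project data and locally installed software here
+ls -ld /shared/projects/<project_id>
+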
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/aws/images/queues.png b/aws/images/queues.png new file mode 100644 index 00000000..f64ce31d Binary files /dev/null and b/aws/images/queues.png differ diff --git a/aws/overview/index.html b/aws/overview/index.html new file mode 100644 index 00000000..026da794 --- /dev/null +++ b/aws/overview/index.html @@ -0,0 +1,2808 @@ + + + + + + + + + + + + + + + + + + + + + + + + Context & System Overview - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Context & System Overview

    +

    +

    +

    Context

    +

    The University of Luxembourg announced a collaboration with Amazon Web Services (AWS) to deploy Amazon Elastic Compute Cloud (Amazon EC2) cloud computing infrastructure in order to accelerate strategic high-performance computing (HPC) research and development in Europe.

    +

    University of Luxembourg will be among the first European universities to provide research and development communities with access to compute environments that use an architecture similar to the European Processor Initiative (EPI), which will be the basis for Europe’s future exascale computing architecture.

    +

    Using Amazon EC2 instances powered by AWS Graviton3, the University of Luxembourg will make simulation capacity available to University researchers. This autumn, research projects will be selected from proposals submitted by University R&D teams.

    +

    As part of this project, AWS will provide cloud computing services to the University that will support the development, design, and testing of numerical codes (i.e., codes that use only digits, such as binary), which traditionally demands a lot of compute power. This will give researchers an accessible, easy-to-use, end-to-end environment in which they can validate their simulation codes on ARM64 architectures, including servers, personal computers, and Internet of Things (IoT).

    +

    After initial project selection by a steering committee that includes representatives from the University of Luxembourg and AWS, additional projects will be selected each quarter. Selections will be based on the University’s outlined research goals. Priority will be given to research carried out by the University of Luxembourg and its interdisciplinary research centers; however, based on available capacity and project qualifications, the initiative could extend to European industrial partners.

    +

    System description and environment

    +

The AWS Parallel Cluster based on the new HPC-based Graviton3 instances (all instances and storage located in US-EAST-1) will provide cloud computing services to Uni.lu that will support the development, design, and testing of numerical codes, which traditionally demands a lot of compute power. This will give researchers an accessible, easy-to-use, end-to-end environment in which they can validate their simulation codes on ARM64 architectures, including servers, personal computers, and Internet of Things (IoT). The cluster will consist of two main partitions and jobs will be submitted using the Slurm scheduler:

    +

    +
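As a quick sketch of how jobs are submitted through Slurm (the partition name below is a placeholder; use sinfo on the cluster to list the actual partitions):

sinfo                                    # list available partitions and nodes
+sbatch --partition=<partition> job.sh    # submit a batch script to a given partition
+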

PIs and their teams of the funded projects under this call will have the possibility to compile their code with the Arm compiler and to use the Arm Performance Library (APL). Support, as well as training activities, will be provided by the ULHPC team.

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/aws/setup/index.html b/aws/setup/index.html new file mode 100644 index 00000000..49b81954 --- /dev/null +++ b/aws/setup/index.html @@ -0,0 +1,3410 @@ + + + + + + + + + + + + + + + + + + + + + + + + Environment Setup - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + + + +
    +
    + + + + + + + + + + +

    Environment Setup

    +

AWS suggests using Spack to set up your software environment. There is no hard requirement that you must use Spack; however, we have included it here, as it is a quick, simple way to set up a development environment. +The official ULHPC swsets are not available on the AWS cluster. If you prefer to use EasyBuild or to compile software manually, please refer to the ULHPC software documentation for this purpose.

    +

    Environment modules and LMod

    +

Like the ULHPC facility, the AWS cluster relies on the Environment Modules / LMod framework, which provides the module utility on compute nodes +to manage nearly all software. +There are two main advantages of the module approach:

    +
      +
1. ULHPC can provide many different versions and/or installations of a + single software package on a given machine, including a default + version as well as several older and newer versions.
    2. +
    3. Users can easily switch to different versions or installations + without having to explicitly specify different paths. With modules, + the PATH and related environment variables (LD_LIBRARY_PATH, MANPATH, etc.) are automatically managed.
    4. +
    +

Environment Modules are a standard and well-established technology across HPC sites that permits developing and using complex software and libraries built with dependencies, allowing multiple versions of software stacks and combinations thereof to co-exist. +It brings the module command which is used to manage environment variables such as PATH, LD_LIBRARY_PATH and MANPATH, enabling the easy loading and unloading of application/library profiles and their dependencies.

    +

    See https://hpc-docs.uni.lu/environment/modules/ for more details

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    CommandDescription
    module availLists all the modules which are available to be loaded
module spider <pattern>Search for <pattern> among available modules (Lmod only)
    module load <mod1> [mod2...]Load a module
    module unload <module>Unload a module
    module listList loaded modules
    module purgeUnload all modules (purge)
    module display <module>Display what a module does
    module use <path>Prepend the directory to the MODULEPATH environment variable
    module unuse <path>Remove the directory from the MODULEPATH environment variable
    +

    At the heart of environment modules interaction resides the following components:

    +
      +
    • the MODULEPATH environment variable, which defines the list of searched directories for modulefiles
    • +
    • modulefile
    • +
    +

    Take a look at the current values:

    +

    $ echo $MODULEPATH
    +/shared/apps/easybuild/modules/all:/usr/share/Modules/modulefiles:/etc/modulefiles:/usr/share/modulefiles/Linux:/usr/share/modulefiles/Core:/usr/share/lmod/lmod/modulefiles/Core
    +$ module show toolchain/foss
    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    +   /shared/apps/easybuild/modules/all/toolchain/foss/2022b.lua:
    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    +help([[
    +Description
    +===========
    +GNU Compiler Collection (GCC) based compiler toolchain, including
    + OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
    +
    +
    +More information
    +================
    + - Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain
    +]])
    +whatis("Description: GNU Compiler Collection (GCC) based compiler toolchain, including
    + OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.")
    +whatis("Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain")
    +whatis("URL: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain")
    +conflict("toolchain/foss")
    +load("compiler/GCC/12.2.0")
    +load("mpi/OpenMPI/4.1.4-GCC-12.2.0")
    +load("lib/FlexiBLAS/3.2.1-GCC-12.2.0")
    +load("numlib/FFTW/3.3.10-GCC-12.2.0")
    +load("numlib/FFTW.MPI/3.3.10-gompi-2022b")
    +load("numlib/ScaLAPACK/2.2.0-gompi-2022b-fb")
    +setenv("EBROOTFOSS","/shared/apps/easybuild/software/foss/2022b")
    +setenv("EBVERSIONFOSS","2022b")
    +setenv("EBDEVELFOSS","/shared/apps/easybuild/software/foss/2022b/easybuild/toolchain-foss-2022b-easybuild-devel")
    +
    +Now you can search for a given software using module spider <pattern>:

    +
    $  module spider lang/Python
    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    +  lang/Python:
    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    +    Description:
    +      Python is a programming language that lets you work more quickly and integrate your systems more effectively.
    +
    +     Versions:
    +        lang/Python/3.10.8-GCCcore-12.2.0-bare
    +        lang/Python/3.10.8-GCCcore-12.2.0
    +
    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    +  For detailed information about a specific "lang/Python" module (including how to load the modules) use the module's full name.
    +  For example:
    +
    +     $ module spider lang/Python/3.10.8-GCCcore-12.2.0
    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    +
    + +

    Let's see the effect of loading/unloading a module

    +
    $ module list
    +No modules loaded
    +$ which python
    +/usr/bin/python
    +$ python --version       # System level python
    +Python 2.7.18
    +
    +$ module load lang/Python    # use TAB to auto-complete
    +$ which python
    +/shared/apps/easybuild/software/Python/3.10.8-GCCcore-12.2.0/bin/python
    +$ python --version
    +Python 3.10.8
    +$ module purge
    +
    + +

Installing software with EasyBuild

    +

    +

EasyBuild is a tool that allows performing automated and reproducible compilation and installation of software. A large number of scientific software packages are supported (2995 supported software packages in the last release 4.8.0) -- see also What is EasyBuild?

    +

All builds and installations are performed at user level, so you don't need admin (i.e. root) rights. +The software is installed in your home directory under $EASYBUILD_PREFIX -- see https://hpc-docs.uni.lu/environment/easybuild/

    + + + + + + + + + + + + + + + + + + + + +
    Default setting (local)Recommended setting
    $EASYBUILD_PREFIX$HOME/.local/easybuild/shared/apps/easybuild/
    +
      +
    • built software are placed under ${EASYBUILD_PREFIX}/software/
    • +
    • modules install path ${EASYBUILD_PREFIX}/modules/all
    • +
    +

    Easybuild main concepts

    +

    See also the official Easybuild Tutorial: "Maintaining a Modern Scientific Software Stack Made Easy with EasyBuild"

    +

    EasyBuild relies on two main concepts: Toolchains and EasyConfig files.

    +

A toolchain corresponds to a compiler and a set of libraries which are commonly used to build software. +The two main toolchains frequently used on the UL HPC platform are the foss ("Free and Open Source Software") and the intel one.

    +
      +
    1. foss is based on the GCC compiler and on open-source libraries (OpenMPI, OpenBLAS, etc.).
    2. +
    3. intel is based on the Intel compiler and on Intel libraries (Intel MPI, Intel Math Kernel Library, etc.).
    4. +
    +

An EasyConfig file is a simple text file that describes the build process of a software package. For most software that uses standard procedures (like configure, make and make install), this file is very simple. +Many EasyConfig files are already provided with EasyBuild. +By default, EasyConfig files and generated modules are named using the following convention: +<Software-Name>-<Software-Version>-<Toolchain-Name>-<Toolchain-Version>. +However, we use a hierarchical approach where the software is classified under a category (or class) -- see the CategorizedModuleNamingScheme option for the EASYBUILD_MODULE_NAMING_SCHEME environment variable -- meaning that the layout will respect the following hierarchy: +<Software-Class>/<Software-Name>/<Software-Version>-<Toolchain-Name>-<Toolchain-Version>

    +

    Additional details are available on EasyBuild website:

    + +

    Easybuild is provided to you as a software module to complement the existing software set.

    +
    module load tools/EasyBuild
    +
    + +

In case you want to install the latest version yourself, please follow the official instructions. +Nonetheless, we strongly recommend using the provided module. +Don't forget to set up your local EasyBuild configuration first.

    +

    What is important for the installation of Easybuild are the following variables:

    +
      +
    • EASYBUILD_PREFIX: where to install local modules and software, i.e. $HOME/.local/easybuild
    • +
    • EASYBUILD_MODULES_TOOL: the type of modules tool you are using, i.e. LMod in this case
    • +
• EASYBUILD_MODULE_NAMING_SCHEME: the way the software and modules should be organized (flat view or hierarchical) -- we advise CategorizedModuleNamingScheme (see the configuration sketch after this list)
    • +
    +
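A minimal configuration sketch, assuming you keep the default local prefix under your home directory (adapt the paths if you install into the project directory instead):

export EASYBUILD_PREFIX=$HOME/.local/easybuild
+export EASYBUILD_MODULES_TOOL=Lmod
+export EASYBUILD_MODULE_NAMING_SCHEME=CategorizedModuleNamingScheme
+# make the locally built modules visible to Lmod
+module use $EASYBUILD_PREFIX/modules/all
+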
    +

    Important

    +
      +
    • Recall that you should be on a compute node to install Easybuild (otherwise the checks of the module command availability will fail.)
    • +
    +
    +

Install missing software by complementing the software set

    +

The current software set contains the toolchain foss-2022b that is necessary to build other software. We have built OpenMPI 4.1.4 to take into account the latest AWS EFA and the Slurm integration. +In order to install missing software for your project, you can complement the existing software set located at /shared/apps/easybuild by using the provided EasyBuild module (latest version).
+Once EasyBuild has been loaded, you can search for and install new software. By default, this new software will be installed at ${HOME}/.local/easybuild. Feel free to adapt the environment variable ${EASYBUILD_PREFIX} to select a new installation directory.

    +

    Let's try to install a missing software

    +
(headnode)$ srun -p small -N 1 -n1 -c16  --pty bash -i
    +(node)$ module spider HPL   # HPL is a software package that solves a (random) dense linear system in double precision (64 bits)
    +Lmod has detected the following error:  Unable to find: "HPL".
    +(node)$ module load tools/EasyBuild
    +# Search for recipes for the missing software
    +(node)$ eb -S HPL
    +== found valid index for /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs, so using it...
    +CFGS1=/shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs
    + * $CFGS1/b/bashplotlib/bashplotlib-0.6.5-GCCcore-10.3.0.eb
    + * $CFGS1/h/HPL/HPL-2.1-foss-2016.04.eb
    + * $CFGS1/h/HPL/HPL-2.1-foss-2016.06.eb
    + * $CFGS1/h/HPL/HPL-2.1-foss-2016a.eb
    + * $CFGS1/h/HPL/HPL-2.1-foss-2016b.eb
    + ...
    + * $CFGS1/h/HPL/HPL-2.3-foss-2022a.eb
    + * $CFGS1/h/HPL/HPL-2.3-foss-2022b.eb
    + * $CFGS1/h/HPL/HPL-2.3-foss-2023a.eb
    + ...
    + * $CFGS1/h/HPL/HPL-2.3-intel-2022b.eb
    + * $CFGS1/h/HPL/HPL-2.3-intel-2023.03.eb
    + * $CFGS1/h/HPL/HPL-2.3-intel-2023a.eb
    + * $CFGS1/h/HPL/HPL-2.3-intelcuda-2019b.eb
    + * $CFGS1/h/HPL/HPL-2.3-intelcuda-2020a.eb
    + * $CFGS1/h/HPL/HPL-2.3-iomkl-2019.01.eb
    + * $CFGS1/h/HPL/HPL-2.3-iomkl-2021a.eb
    + * $CFGS1/h/HPL/HPL-2.3-iomkl-2021b.eb
    + * $CFGS1/h/HPL/HPL_parallel-make.patch
    +
    + +

    From this list, you should select the version matching the target toolchain version -- here foss-2022b.

    +

Once you pick a given recipe, install it with

    +
       eb <name>.eb [-D] -r
    +
    + + +
      +
• -D enables the dry-run mode to check what's going to be installed -- ALWAYS try it first
    • +
    • -r enables the robot mode to automatically install all dependencies while searching for easyconfigs in a set of pre-defined directories -- you can also prepend new directories to search for eb files (like the current directory $PWD) using the option and syntax --robot-paths=$PWD: (do not forget the ':'). See Controlling the robot search path documentation
    • +
    • The $CFGS<n>/ prefix should be dropped unless you know what you're doing (and thus have previously defined the variable -- see the first output of the eb -S [...] command).
    • +
    +

Let's try to review the missing dependencies from a dry-run:

    +

    # Select the one matching the target software set version
    +(node)$ eb HPL-2.3-foss-2022b.eb -Dr   # Dry-run
    +== Temporary log file in case of crash /tmp/eb-lzv785be/easybuild-ihga94y0.log
    +== found valid index for /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs, so using it...
    +== found valid index for /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs, so using it...
    +Dry run: printing build status of easyconfigs and dependencies
    +CFGS=/shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs
    + * [x] $CFGS/m/M4/M4-1.4.19.eb (module: devel/M4/1.4.19)
    + * [x] $CFGS/b/Bison/Bison-3.8.2.eb (module: lang/Bison/3.8.2)
    + * [x] $CFGS/f/flex/flex-2.6.4.eb (module: lang/flex/2.6.4)
    + * [x] $CFGS/z/zlib/zlib-1.2.12.eb (module: lib/zlib/1.2.12)
    + * [x] $CFGS/b/binutils/binutils-2.39.eb (module: tools/binutils/2.39)
    + * [x] $CFGS/g/GCCcore/GCCcore-12.2.0.eb (module: compiler/GCCcore/12.2.0)
    + * [x] $CFGS/z/zlib/zlib-1.2.12-GCCcore-12.2.0.eb (module: lib/zlib/1.2.12-GCCcore-12.2.0)
    + * [x] $CFGS/h/help2man/help2man-1.49.2-GCCcore-12.2.0.eb (module: tools/help2man/1.49.2-GCCcore-12.2.0)
    + * [x] $CFGS/m/M4/M4-1.4.19-GCCcore-12.2.0.eb (module: devel/M4/1.4.19-GCCcore-12.2.0)
    + * [x] $CFGS/b/Bison/Bison-3.8.2-GCCcore-12.2.0.eb (module: lang/Bison/3.8.2-GCCcore-12.2.0)
    + * [x] $CFGS/f/flex/flex-2.6.4-GCCcore-12.2.0.eb (module: lang/flex/2.6.4-GCCcore-12.2.0)
    + * [x] $CFGS/b/binutils/binutils-2.39-GCCcore-12.2.0.eb (module: tools/binutils/2.39-GCCcore-12.2.0)
    + * [x] $CFGS/p/pkgconf/pkgconf-1.9.3-GCCcore-12.2.0.eb (module: devel/pkgconf/1.9.3-GCCcore-12.2.0)
    + * [x] $CFGS/g/groff/groff-1.22.4-GCCcore-12.2.0.eb (module: tools/groff/1.22.4-GCCcore-12.2.0)
    + * [x] $CFGS/n/ncurses/ncurses-6.3-GCCcore-12.2.0.eb (module: devel/ncurses/6.3-GCCcore-12.2.0)
    + * [x] $CFGS/e/expat/expat-2.4.9-GCCcore-12.2.0.eb (module: tools/expat/2.4.9-GCCcore-12.2.0)
    + * [x] $CFGS/b/bzip2/bzip2-1.0.8-GCCcore-12.2.0.eb (module: tools/bzip2/1.0.8-GCCcore-12.2.0)
    + * [x] $CFGS/g/GCC/GCC-12.2.0.eb (module: compiler/GCC/12.2.0)
    + * [x] $CFGS/f/FFTW/FFTW-3.3.10-GCC-12.2.0.eb (module: numlib/FFTW/3.3.10-GCC-12.2.0)
    + * [x] $CFGS/u/UnZip/UnZip-6.0-GCCcore-12.2.0.eb (module: tools/UnZip/6.0-GCCcore-12.2.0)
    + * [x] $CFGS/l/libreadline/libreadline-8.2-GCCcore-12.2.0.eb (module: lib/libreadline/8.2-GCCcore-12.2.0)
    + * [x] $CFGS/l/libtool/libtool-2.4.7-GCCcore-12.2.0.eb (module: lib/libtool/2.4.7-GCCcore-12.2.0)
    + * [x] $CFGS/m/make/make-4.3-GCCcore-12.2.0.eb (module: devel/make/4.3-GCCcore-12.2.0)
    + * [x] $CFGS/t/Tcl/Tcl-8.6.12-GCCcore-12.2.0.eb (module: lang/Tcl/8.6.12-GCCcore-12.2.0)
    + * [x] $CFGS/p/pkgconf/pkgconf-1.8.0.eb (module: devel/pkgconf/1.8.0)
    + * [x] $CFGS/s/SQLite/SQLite-3.39.4-GCCcore-12.2.0.eb (module: devel/SQLite/3.39.4-GCCcore-12.2.0)
    + * [x] $CFGS/o/OpenSSL/OpenSSL-1.1.eb (module: system/OpenSSL/1.1)
    + * [x] $CFGS/l/libevent/libevent-2.1.12-GCCcore-12.2.0.eb (module: lib/libevent/2.1.12-GCCcore-12.2.0)
    + * [x] $CFGS/c/cURL/cURL-7.86.0-GCCcore-12.2.0.eb (module: tools/cURL/7.86.0-GCCcore-12.2.0)
    + * [x] $CFGS/d/DB/DB-18.1.40-GCCcore-12.2.0.eb (module: tools/DB/18.1.40-GCCcore-12.2.0)
    + * [x] $CFGS/p/Perl/Perl-5.36.0-GCCcore-12.2.0.eb (module: lang/Perl/5.36.0-GCCcore-12.2.0)
    + * [x] $CFGS/a/Autoconf/Autoconf-2.71-GCCcore-12.2.0.eb (module: devel/Autoconf/2.71-GCCcore-12.2.0)
    + * [x] $CFGS/a/Automake/Automake-1.16.5-GCCcore-12.2.0.eb (module: devel/Automake/1.16.5-GCCcore-12.2.0)
    + * [x] $CFGS/a/Autotools/Autotools-20220317-GCCcore-12.2.0.eb (module: devel/Autotools/20220317-GCCcore-12.2.0)
    + * [x] $CFGS/n/numactl/numactl-2.0.16-GCCcore-12.2.0.eb (module: tools/numactl/2.0.16-GCCcore-12.2.0)
    + * [x] $CFGS/u/UCX/UCX-1.13.1-GCCcore-12.2.0.eb (module: lib/UCX/1.13.1-GCCcore-12.2.0)
    + * [x] $CFGS/l/libfabric/libfabric-1.16.1-GCCcore-12.2.0.eb (module: lib/libfabric/1.16.1-GCCcore-12.2.0)
    + * [x] $CFGS/l/libffi/libffi-3.4.4-GCCcore-12.2.0.eb (module: lib/libffi/3.4.4-GCCcore-12.2.0)
    + * [x] $CFGS/x/xorg-macros/xorg-macros-1.19.3-GCCcore-12.2.0.eb (module: devel/xorg-macros/1.19.3-GCCcore-12.2.0)
    + * [x] $CFGS/l/libpciaccess/libpciaccess-0.17-GCCcore-12.2.0.eb (module: system/libpciaccess/0.17-GCCcore-12.2.0)
    + * [x] $CFGS/u/UCC/UCC-1.1.0-GCCcore-12.2.0.eb (module: lib/UCC/1.1.0-GCCcore-12.2.0)
    + * [x] $CFGS/n/ncurses/ncurses-6.3.eb (module: devel/ncurses/6.3)
    + * [x] $CFGS/g/gettext/gettext-0.21.1.eb (module: tools/gettext/0.21.1)
    + * [x] $CFGS/x/XZ/XZ-5.2.7-GCCcore-12.2.0.eb (module: tools/XZ/5.2.7-GCCcore-12.2.0)
    + * [x] $CFGS/p/Python/Python-3.10.8-GCCcore-12.2.0-bare.eb (module: lang/Python/3.10.8-GCCcore-12.2.0-bare)
    + * [x] $CFGS/b/BLIS/BLIS-0.9.0-GCC-12.2.0.eb (module: numlib/BLIS/0.9.0-GCC-12.2.0)
    + * [x] $CFGS/o/OpenBLAS/OpenBLAS-0.3.21-GCC-12.2.0.eb (module: numlib/OpenBLAS/0.3.21-GCC-12.2.0)
    + * [x] $CFGS/l/libarchive/libarchive-3.6.1-GCCcore-12.2.0.eb (module: tools/libarchive/3.6.1-GCCcore-12.2.0)
    + * [x] $CFGS/l/libxml2/libxml2-2.10.3-GCCcore-12.2.0.eb (module: lib/libxml2/2.10.3-GCCcore-12.2.0)
    + * [x] $CFGS/c/CMake/CMake-3.24.3-GCCcore-12.2.0.eb (module: devel/CMake/3.24.3-GCCcore-12.2.0)
    + * [ ] $CFGS/h/hwloc/hwloc-2.8.0-GCCcore-12.2.0.eb (module: system/hwloc/2.8.0-GCCcore-12.2.0)
    + * [ ] $CFGS/p/PMIx/PMIx-4.2.2-GCCcore-12.2.0.eb (module: lib/PMIx/4.2.2-GCCcore-12.2.0)
    + * [x] $CFGS/o/OpenMPI/OpenMPI-4.1.4-GCC-12.2.0.eb (module: mpi/OpenMPI/4.1.4-GCC-12.2.0)
    + * [x] $CFGS/f/FlexiBLAS/FlexiBLAS-3.2.1-GCC-12.2.0.eb (module: lib/FlexiBLAS/3.2.1-GCC-12.2.0)
    + * [x] $CFGS/g/gompi/gompi-2022b.eb (module: toolchain/gompi/2022b)
    + * [x] $CFGS/f/FFTW.MPI/FFTW.MPI-3.3.10-gompi-2022b.eb (module: numlib/FFTW.MPI/3.3.10-gompi-2022b)
    + * [x] $CFGS/s/ScaLAPACK/ScaLAPACK-2.2.0-gompi-2022b-fb.eb (module: numlib/ScaLAPACK/2.2.0-gompi-2022b-fb)
    + * [x] $CFGS/f/foss/foss-2022b.eb (module: toolchain/foss/2022b)
    + * [ ] $CFGS/h/HPL/HPL-2.3-foss-2022b.eb (module: tools/HPL/2.3-foss-2022b)
    +== Temporary log file(s) /tmp/eb-lzv785be/easybuild-ihga94y0.log* have been removed.
    +== Temporary directory /tmp/eb-lzv785be has been removed.
    +
    +Let's try to install it (remove the -D):

    +

    # Select the one matching the target software set version
    +(node)$ eb HPL-2.3-foss-2022b.eb -r
    +
    +From now on, you should be able to see the new module.

    +
    (node)$  module spider HPL
    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    +  tools/HPL: tools/HPL/2.3-foss-2022b
    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    +    Description:
    +      HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark.
    +
    +
    +    This module can be loaded directly: module load tools/HPL/2.3-foss-2022b
    +
    +    Help:
    +
    +      Description
    +      ===========
    +      HPL is a software package that solves a (random) dense linear system in double precision (64 bits)
    +       arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available
    +       implementation of the High Performance Computing Linpack Benchmark.
    +
    +
    +      More information
    +      ================
    +       - Homepage: https://www.netlib.org/benchmark/hpl/
    +
    + +

    Tips: When you load a module <NAME> generated by Easybuild, it is installed within the directory reported by the $EBROOT<NAME> variable. +In the above case, you will find the generated binary in ${EBROOTHPL}/.
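A small usage sketch (the bin/ sub-directory and the xhpl binary name are assumptions based on a standard HPL build, not something specific to this cluster):

module load tools/HPL/2.3-foss-2022b
+echo $EBROOTHPL
+ls $EBROOTHPL/bin      # the xhpl binary is expected here
+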

    +

Installing software with Spack

    +
      +
    • +

To do this, please clone the Spack GitHub repository into a SPACK_ROOT which is defined to be in your project directory, i.e., /shared/projects/<project_id>.

      +
    • +
    • +

Then add the configuration to your ~/.bashrc file.

      +
    • +
    • +

You may wish to change the location of the SPACK_ROOT to fit your specific cluster configuration.

      +
    • +
    • +

Here, we consider the release v0.19 of Spack from the releases/v0.19 branch; however, you may wish to check out the develop branch for the latest packages.

      +
    • +
    +
    git clone -c feature.manyFiles=true -b releases/v0.19 https://github.com/spack/spack $SPACK_ROOT
    +
    + +
      +
    • Then, add the following lines in your .bashrc
    • +
    +
    export PROJECT="/shared/projects/<project_id>"
    +export SPACK_ROOT="${PROJECT}/spack"
    +if [[ -f "${SPACK_ROOT}/share/spack/setup-env.sh" && -n ${SLURM_JOB_ID} ]];then
+    source "${SPACK_ROOT}/share/spack/setup-env.sh"
    +fi
    +
    + +
    +

    Adapt accordingly

    +
      +
    • Do NOT forget to replace <project_id> with your project name
    • +
    +
    +
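Once this is in place, you can quickly check the setup from within a job (a sketch; note that the snippet above only sources Spack when SLURM_JOB_ID is set):

srun -p small -N1 -n1 --pty bash -i
+spack --version      # should print the Spack release, e.g. 0.19.x
+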

    Spack Binary Cache

    +

At ISC'22, in conjunction with the Spack v0.18 release, AWS announced a collaborative effort to host a Binary Cache. +The binary cache stores prebuilt versions of common HPC packages, meaning that the installation process is reduced to relocation rather than compilation. To increase flexibility, the binary cache contains package builds with different variants, built with different compilers. +The purpose of the binary cache is to drastically speed up package installation, especially when long dependency chains exist.

    +

    The binary cache is periodically updated with the latest versions of packages, and is released in conjunction with Spack releases. Thus you can use the v0.18 binary cache to have packages specifically from that Spack release. Alternatively, you can make use of the develop binary cache, which is kept up to date with the Spack develop branch.

    +
      +
• To add the develop binary cache and trust the associated gpg keys:
    • +
    +
    spack mirror add binary_mirror https://binaries.spack.io/develop
    +spack buildcache keys -it
    +
    + +

    Installing packages

    +

The notation for installing packages when the binary cache has been enabled is unchanged. Spack will first check whether the package is installable from the binary cache, and only upon failure will it install from source. We see confirmation of this in the output:

    +
    $ spack install bzip2
    +==> Installing bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k
    +==> Fetching https://binaries.spack.io/develop/build_cache/linux-amzn2-x86_64_v4-gcc-7.3.1-bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k.spec.json.sig
    +gpg: Signature made Fri 01 Jul 2022 04:21:22 AM UTC using RSA key ID 3DB0C723
    +gpg: Good signature from "Spack Project Official Binaries <maintainers@spack.io>"
    +==> Fetching https://binaries.spack.io/develop/build_cache/linux-amzn2-x86_64_v4/gcc-7.3.1/bzip2-1.0.8/linux-amzn2-x86_64_v4-gcc-7.3.1-bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k.spack
    +==> Extracting bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k from binary cache
    +[+] /shared/spack/opt/spack/linux-amzn2-x86_64_v4/gcc-7.3.1/bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k
    +
    + +

    Bypassing the binary cache

    +
      +
    • +

      Sometimes we might want to install a specific package from source, and bypass the binary cache. To achieve this we can pass the --no-cache flag to the install command. We can use this notation to install cowsay. +

      spack install --no-cache cowsay
      +

      +
    • +
    • +

      To compile any software we are going to need a compiler. Out of the box Spack does not know about any compilers on the system. To list your registered compilers, please use the following command: +

      spack compiler list
      +

      +
    • +
    +

It will return an empty list the first time you use it after installing Spack +

     ==> No compilers available. Run `spack compiler find` to autodetect compilers
    +

    +
      +
    • AWS ParallelCluster installs GCC by default, so you can ask Spack to discover compilers on the system: +
      spack compiler find
      +
    • +
    +

This should identify your GCC install. In this case a compiler should be found. +

    ==> Added 1 new compiler to /home/ec2-user/.spack/linux/compilers.yaml
    +     gcc@7.3.1
    + ==> Compilers are defined in the following files:
    +     /home/ec2-user/.spack/linux/compilers.yaml
    +

    +

    Install other compilers

    +

This default GCC compiler may be sufficient for many applications, but we may want to install a newer version of GCC or other compilers in general. Spack is able to install compilers like any other package.

    +

    Newer GCC version

    +

For example, we can install GCC 11.2.0, complete with binutils, and then add it to the Spack compiler list.

spack install -j [num cores] gcc@11.2.0+binutils
+spack load gcc@11.2.0
+spack compiler find
+spack unload
+

As Spack builds GCC and all of its dependency packages, this install can take a long time (>30 mins).

Arm Compiler for Linux

The Arm Compiler for Linux (ACfL) can be installed by Spack on Arm systems, like the Graviton2 (C6g) or Graviton3 (C7g).

spack install arm@22.0.1
+spack load arm@22.0.1
+spack compiler find
+spack unload
+

    +

Where to build software

    +

The cluster has quite a small head node, which means that the compilation of complex software on it is prohibited. One simple solution is to use the compute nodes to perform the Spack installations, by submitting the command through Slurm. +

    srun -N1 -c 36 spack install -j36 gcc@11.2.0+binutils
    +

    +

    AWS Environment

    +
      +
    • +

      The versions of these external packages may change and are included for reference.

      +
    • +
    • +

The cluster comes pre-installed with Slurm, libfabric, PMIx, Intel MPI, and Open MPI. To use these packages, you need to tell Spack where to find them. +

      cat << EOF > $SPACK_ROOT/etc/spack/packages.yaml
      +packages:
      +    libfabric:
      +        variants: fabrics=efa,tcp,udp,sockets,verbs,shm,mrail,rxd,rxm
      +        externals:
      +        - spec: libfabric@1.13.2 fabrics=efa,tcp,udp,sockets,verbs,shm,mrail,rxd,rxm
      +          prefix: /opt/amazon/efa
      +        buildable: False
      +    openmpi:
      +        variants: fabrics=ofi +legacylaunchers schedulers=slurm ^libfabric
      +        externals:
      +        - spec: openmpi@4.1.1 %gcc@7.3.1
      +          prefix: /opt/amazon/openmpi
      +        buildable: False
      +    pmix:
      +        externals:
      +          - spec: pmix@3.2.3 ~pmi_backwards_compatibility
      +            prefix: /opt/pmix
      +        buildable: False
      +    slurm:
      +        variants: +pmix sysconfdir=/opt/slurm/etc
      +        externals:
      +        - spec: slurm@21.08.8-2 +pmix sysconfdir=/opt/slurm/etc
      +          prefix: /opt/slurm
      +        buildable: False
      +    armpl:
      +        externals:
      +        - spec: armpl@21.0.0%gcc@9.3.0
      +          prefix: /opt/arm/armpl/21.0.0/armpl_21.0_gcc-9.3/
      +        buildable: False
      +EOF
      +

      +
    • +
    +

    Add the GCC 9.3 Compiler

    +

    The Graviton image ships with an additional compiler within the ArmPL project. We can add this compiler to the Spack environment with the following command: spack compiler add /opt/arm/armpl/gcc/9.3.0/bin/
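For instance (a sketch using the path given above):

spack compiler add /opt/arm/armpl/gcc/9.3.0/bin/
+spack compiler list      # gcc@9.3.0 should now appear alongside the default compiler
+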

    +

    Open MPI

    +

For Open MPI, we have already made the definition (in the packages.yaml above) to set libfabric as a dependency of Open MPI, so by default it will be configured correctly. +

    spack install openmpi%gcc@11.2.0
    +

    +

    Additional resources

    + + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/connect/access/index.html b/connect/access/index.html new file mode 100644 index 00000000..988695ed --- /dev/null +++ b/connect/access/index.html @@ -0,0 +1,2982 @@ + + + + + + + + + + + + + + + + + + + + + + + + Access/Login Servers - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Login Nodes

    +

    Opening an SSH connection to ULHPC systems results in a +connection to an access node.

    +
    +
    ssh iris-cluster
    +
    + +
    +
    +
    ssh aion-cluster
    +
    + +
    +
    +

    To be able to further run GUI applications within your [interactive] jobs: +

    ssh -X iris-cluster   # OR on Mac OS: ssh -Y iris-cluster
    +

    +
    +
    +

    To be able to further run GUI applications within your [interactive] jobs: +

    ssh -X aion-cluster   # OR on Mac OS: ssh -Y aion-cluster
    +

    +
    +
    +
    +

    Important

    +

    Recall that you SHOULD NOT run any HPC application on the login nodes.

    +

    That's why the module command is NOT available on them.

    +
    +

    Usage

    +

    On access nodes, typical user tasks include

    +
      +
    • Transferring and managing files
    • +
    • Editing files
    • +
    • Submitting jobs
    • +
    +
    +

    Appropriate Use

    +

    Do not run compute- or memory-intensive applications on access +nodes. These nodes are a shared resource. ULHPC admins may terminate +processes which are having negative impacts on other users or the +systems.

    +
    +
    +

    Avoid watch

    +

    If you must use the watch command, please use a much longer +interval such as 5 minutes (=300 sec), e.g., watch -n 300 +<your_command>.

    +
    +
    +

    Avoid Visual Studio Code

    +

Avoid using Visual Studio Code to connect to the HPC, as it consumes a lot of resources +on the login nodes. Heavy development shouldn't be done directly on the HPC. +For most tasks, a terminal-based editor such as +Vim or Emacs should be enough. If you want more advanced features, try Neovim, +where you can add plugins to meet your specific needs.

    +
    +

    Tips

    +
    +

ULHPC provides a wide variety of QOSs

    +
      +
    • An interactive qos is +available on Iris and Aion for compute- and memory-intensive interactive +work. Please, use an interactive job for resource-intensive processes +instead of running them on access nodes.
    • +
    +
    +
    +

    Tip

    +

    To help identify processes that make heavy use of resources, you +can use:

    +
      +
    • top -u $USER
    • +
    • /usr/bin/time -v ./my_command
    • +
    +
    +
    +

    Running GUI Application over X11

    +

    If you intend to run GUI applications (MATLAB, Stata, ParaView etc.), you MUST connect by SSH to the login nodes with the -X (or -Y on Mac OS) option:

    +
    +
    ssh -X iris-cluster   # OR on Mac OS: ssh -Y iris-cluster
    +
    + +
    +
    +
    ssh -X aion-cluster   # OR on Mac OS: ssh -Y aion-cluster
    +
    + +
    +
    +
    +
    +

Install Neovim using Micromamba

    +

    Neovim is not installed by default on the HPC but you can install it using Micromamba.

    +

    micromamba create --name editor-env
    +micromamba install --name editor-env conda-forge::nvim
    +
+After installation you can create an alias in your .bashrc for easy access: +
    alias nvim='micromamba run --name editor-env nvim'
    +

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/connect/images/SshD.png b/connect/images/SshD.png new file mode 100644 index 00000000..480980d3 Binary files /dev/null and b/connect/images/SshD.png differ diff --git a/connect/images/SshL.png b/connect/images/SshL.png new file mode 100644 index 00000000..5aca6bbd Binary files /dev/null and b/connect/images/SshL.png differ diff --git a/connect/images/SshR.png b/connect/images/SshR.png new file mode 100644 index 00000000..88106c73 Binary files /dev/null and b/connect/images/SshR.png differ diff --git a/connect/images/ipa.png b/connect/images/ipa.png new file mode 100644 index 00000000..e7fa013f Binary files /dev/null and b/connect/images/ipa.png differ diff --git a/connect/images/moba-agent1.png b/connect/images/moba-agent1.png new file mode 100644 index 00000000..d5de98a5 Binary files /dev/null and b/connect/images/moba-agent1.png differ diff --git a/connect/images/moba-agent2.png b/connect/images/moba-agent2.png new file mode 100644 index 00000000..deed516b Binary files /dev/null and b/connect/images/moba-agent2.png differ diff --git a/connect/images/moba-network-sessions-manager.png b/connect/images/moba-network-sessions-manager.png new file mode 100644 index 00000000..ae5df68b Binary files /dev/null and b/connect/images/moba-network-sessions-manager.png differ diff --git a/connect/images/moba-session-advanced.png b/connect/images/moba-session-advanced.png new file mode 100644 index 00000000..fc7d7ae3 Binary files /dev/null and b/connect/images/moba-session-advanced.png differ diff --git a/connect/images/moba-session-button.png b/connect/images/moba-session-button.png new file mode 100644 index 00000000..b33dd3aa Binary files /dev/null and b/connect/images/moba-session-button.png differ diff --git a/connect/images/moba-ssh-key-gen.png b/connect/images/moba-ssh-key-gen.png new file mode 100644 index 00000000..b50df8b4 Binary files /dev/null and b/connect/images/moba-ssh-key-gen.png differ diff --git a/connect/images/ood_file_management.png b/connect/images/ood_file_management.png new file mode 100644 index 00000000..9509642d Binary files /dev/null and b/connect/images/ood_file_management.png differ diff --git a/connect/images/ood_graphical_allocated.png b/connect/images/ood_graphical_allocated.png new file mode 100644 index 00000000..327a6ac6 Binary files /dev/null and b/connect/images/ood_graphical_allocated.png differ diff --git a/connect/images/ood_graphical_desktop.png b/connect/images/ood_graphical_desktop.png new file mode 100644 index 00000000..75a2d493 Binary files /dev/null and b/connect/images/ood_graphical_desktop.png differ diff --git a/connect/images/ood_graphical_waiting.png b/connect/images/ood_graphical_waiting.png new file mode 100644 index 00000000..f91438bb Binary files /dev/null and b/connect/images/ood_graphical_waiting.png differ diff --git a/connect/images/ood_job_composer.png b/connect/images/ood_job_composer.png new file mode 100644 index 00000000..4f9cc807 Binary files /dev/null and b/connect/images/ood_job_composer.png differ diff --git a/connect/images/ood_job_list.png b/connect/images/ood_job_list.png new file mode 100644 index 00000000..4f98fe9a Binary files /dev/null and b/connect/images/ood_job_list.png differ diff --git a/connect/images/ood_shell_access.png b/connect/images/ood_shell_access.png new file mode 100644 index 00000000..d5862098 Binary files /dev/null and b/connect/images/ood_shell_access.png differ diff --git a/connect/images/putty-setup-screenshot.png 
b/connect/images/putty-setup-screenshot.png new file mode 100644 index 00000000..3f3f74e5 Binary files /dev/null and b/connect/images/putty-setup-screenshot.png differ diff --git a/connect/images/puttygen-screenshot-1.png b/connect/images/puttygen-screenshot-1.png new file mode 100644 index 00000000..21fde3e8 Binary files /dev/null and b/connect/images/puttygen-screenshot-1.png differ diff --git a/connect/images/puttygen-screenshot-2.png b/connect/images/puttygen-screenshot-2.png new file mode 100644 index 00000000..4f0de1ea Binary files /dev/null and b/connect/images/puttygen-screenshot-2.png differ diff --git a/connect/images/puttygen-screenshot-3.png b/connect/images/puttygen-screenshot-3.png new file mode 100644 index 00000000..01039150 Binary files /dev/null and b/connect/images/puttygen-screenshot-3.png differ diff --git a/connect/images/puttygen-screenshot-6.png b/connect/images/puttygen-screenshot-6.png new file mode 100644 index 00000000..37580b9f Binary files /dev/null and b/connect/images/puttygen-screenshot-6.png differ diff --git a/connect/images/puttygen-screenshot-7.png b/connect/images/puttygen-screenshot-7.png new file mode 100644 index 00000000..2793c4cc Binary files /dev/null and b/connect/images/puttygen-screenshot-7.png differ diff --git a/connect/images/ssh.png b/connect/images/ssh.png new file mode 100644 index 00000000..2647d9d4 Binary files /dev/null and b/connect/images/ssh.png differ diff --git a/connect/ipa/index.html b/connect/ipa/index.html new file mode 100644 index 00000000..cdc5dcdb --- /dev/null +++ b/connect/ipa/index.html @@ -0,0 +1,2949 @@ + + + + + + + + + + + + + + + + + + + + + + + + Identity Management (IdM/IPA) - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    ULHPC Identity Management Portal (IdM/IPA)

    +

    ULHPC Identity Management Portal

    +

Red Hat Identity Management (IdM), formerly referred to as IPA ("Identity, Policy, and Audit" -- see also +https://www.freeipa.org), provides a centralized and unified way to manage +identity stores, authentication, policies, and authorization policies in a +Linux-based domain. IdM significantly reduces the administrative overhead of +managing different services individually and using different tools on different +machines.

    +

All services (HPC and complementary ones) managed by the ULHPC team rely on a +highly redundant setup involving several Red Hat IdM/IPA servers.

    +
    +

    SSH Key Management

    +

You are responsible for uploading and managing your authorized public SSH +keys for your account, under the terms of the Acceptable Use Policy. +Be aware that the ULHPC team periodically reviews compliance with the policy, as well as the security of your keys. +See also the note on deprecated/weak DSA/RSA keys

    +
    +

    References

    + +

    Upload your SSH key on the ULHPC Identity Management Portal

    +

    You should upload your public SSH key(s) *.pub to your user entry on the ULHPC Identity Management Portal. +For that, connect to the ULHPC IdM portal (use the URL communicated to you by the UL HPC team in your "welcome" mail) and enter your ULHPC credentials.

    +

    +

    First copy the content of the key you want to add

    +
    # Example with ED25519 **public** key
    +(laptop)$> cat ~/.ssh/id_ed25519.pub
    +ssh-ed25519 AAAA[...]
    +# OR the RSA **public** key
    +(laptop)$> cat ~/.ssh/id_rsa.pub
    +ssh-rsa AAAA[...]
    +
    + +

    Then on the portal:

    +
      +
    1. Select Identity / Users.
    2. +
    3. Select your login entry
    4. +
    5. Under the Settings tab in the Account Settings area, click SSH public keys: Add.
    6. +
    +

    +

    Paste in the Base 64-encoded public key string, and click Set.

    +

    +

    Click Save at the top of the page. +Your key fingerprint should be listed now.

    +

    IPA user portal

    +
    +

    Listing SSH keys attached to your account through SSSD

    +

SSSD is a system daemon used on ULHPC computational +resources. Its primary function is to provide access to local or remote +identity and authentication resources through a common framework that can +provide caching and offline support to the system. +To easily access the authorized keys configured for your account from the +command line (i.e. without logging in to the ULHPC IPA portal), you can use: +

    sss_ssh_authorizedkeys $(whoami)
    +

    +
    +

    Change Your Password

    +
      +
    1. connect to the ULHPC IdM portal (use the URL communicated to you by the UL + HPC team in your "welcome" mail) and enter your ULHPC credentials.
    2. +
    3. On the top right under your name, select the entry "Change Password"
    4. +
    5. In the dialog window that appears, enter the current password, + and your new password. Your password should meet the password + requirements explained in the next section below, and must be + 'safe' or 'very safe' according to the provided password strength + meter.
    6. +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/connect/linux/index.html b/connect/linux/index.html new file mode 100644 index 00000000..8e5432f4 --- /dev/null +++ b/connect/linux/index.html @@ -0,0 +1,2986 @@ + + + + + + + + + + + + + + + + + + + + + + + + Installation notes - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Installation notes

    +

    Normally, SSH is installed natively on your machine and the ssh command should be accessible from the command line (or a Terminal) through the ssh command:

    +
    (your_workstation)$> ssh -V
    +OpenSSH_8.4p1, OpenSSL 1.1.1h  22 Sep 2020
    +
    + + +

    If that's not the case, consider installing the package openssh-client (Debian-like systems) or ssh (Redhat-like systems).

    +

    Your local SSH configuration is located in the ~/.ssh/ directory and consists of:

    +
      +
    • +

      ~/.ssh/id_rsa.pub: your SSH public key. This one is the only one SAFE to distribute.

      +
    • +
    • +

      ~/.ssh/id_rsa: the associated private key. NEVER EVER TRANSMIT THIS FILE

      +
    • +
    • +

      (eventually) the configuration of the SSH client ~/.ssh/config

      +
    • +
    • +

      ~/.ssh/known_hosts: Contains a list of host keys for all hosts you have logged into that are not already in the system-wide list of known host keys. This permits to detect man-in-the-middle attacks.

      +
    • +
    +

    SSH Key Management

    +

    To generate an SSH keys, just use the ssh-keygen command, typically as follows:

    +
    (your_workstation)$> ssh-keygen -t rsa -b 4096
    +Generating public/private rsa key pair.
    +Enter file in which to save the key (/home/user/.ssh/id_rsa):
    +Enter passphrase (empty for no passphrase):
    +Enter same passphrase again:
    +Your identification has been saved in /home/user/.ssh/id_rsa.
    +Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    +The key fingerprint is:
    +fe:e8:26:df:38:49:3a:99:d7:85:4e:c3:85:c8:24:5b username@yourworkstation
    +The key's randomart image is:
    ++---[RSA 4096]----+
    +|                 |
    +|      . E        |
    +|       * . .     |
    +|      . o . .    |
    +|        S. o     |
    +|       .. = .    |
    +|       =.= o     |
    +|      * ==o      |
    +|       B=.o      |
    ++-----------------+
    +
    + + +
    +

    Warning

    +

    To ensure the security of the platform and your data stored on it, you must protect your SSH keys with a passphrase! +Additionally, your private key and passphrase should never be transmitted to anybody.

    +
    +

    After the execution of ssh-keygen command, the keys are generated and stored in the following files:

    +
      +
    • SSH RSA Private key: ~/.ssh/id_rsa. Again, NEVER EVER TRANSMIT THIS FILE
    • +
    • SSH RSA Public key: ~/.ssh/id_rsa.pub. This file is the ONLY one SAFE to distribute
    • +
    +

Ensure the access rights are correct on the generated keys using the ls -l command. The private key should be readable only by you:

    +
    (your_workstation)$> ls -l ~/.ssh/id_*
    +-rw------- 1 git git 751 Mar  1 20:16 /home/username/.ssh/id_rsa
    +-rw-r--r-- 1 git git 603 Mar  1 20:16 /home/username/.ssh/id_rsa.pub
    +
    + + +

    Configuration

    +

    In order to be able to login to the clusters, you will have to add this public key (i.e. id_rsa.pub) into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your "welcome" mail).

    +

    +

    The port on which the SSH servers are listening is not the default one (i.e. 22) but 8022. +Consequently, if you want to connect to the Iris cluster, open a terminal and run (substituting yourlogin with the login name you received from us):

    +
    (your_workstation)$> ssh -p 8022 yourlogin@access-iris.uni.lu
    +
    + + +

    For the Aion cluster, the access server host name is access-aion.uni.lu:

    +
    (your_workstation)$> ssh -p 8022 yourlogin@access-aion.uni.lu
    +
    + + +

    Alternatively, you may want to save the configuration of this connection (and create an alias for it) by editing the file ~/.ssh/config (create it if it does not already exist) and adding the following entries:

    +
    Host iris-cluster
    +    Hostname access-iris.uni.lu
    +
    +Host aion-cluster
    +    Hostname access-aion.uni.lu
    +
    +Host *-cluster
    +    User yourlogin
    +    Port 8022
    +    ForwardAgent no
    +
    + + +

    Now you'll be able to issue the following (simpler) command to connect to the cluster and obtain the welcome banner:

    +
    (your_workstation)$> ssh iris-cluster
    +==================================================================================
    + Welcome to access2.iris-cluster.uni.lux
    +==================================================================================
    +                          _                         ____
    +                         / \   ___ ___ ___  ___ ___|___ \
    +                        / _ \ / __/ __/ _ \/ __/ __| __) |
    +                       / ___ \ (_| (_|  __/\__ \__ \/ __/
    +                      /_/   \_\___\___\___||___/___/_____|
    +               _____      _        ____ _           _          __
    +              / /_ _|_ __(_)___   / ___| |_   _ ___| |_ ___ _ _\ \
    +             | | | || '__| / __| | |   | | | | / __| __/ _ \ '__| |
    +             | | | || |  | \__ \ | |___| | |_| \__ \ ||  __/ |  | |
    +             | ||___|_|  |_|___/  \____|_|\__,_|___/\__\___|_|  | |
    +              \_\                                              /_/
    +==================================================================================
    +
    +=== Computing Nodes ========================================= #RAM/n === #Cores ==
    + iris-[001-108] 108 Dell C6320 (2 Xeon E5-2680v4@2.4GHz [14c/120W]) 128GB  3024
    + iris-[109-168]  60 Dell C6420 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 128GB  1680
    + iris-[169-186]  18 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB   504
    +                +72 GPU  (4 Tesla V100 [5120c CUDA + 640c Tensor])   16GB +368640
    + iris-[187-190]   4 Dell R840 (4 Xeon Platin.8180M@2.5GHz [28c/205W]) 3TB   448
    + iris-[191-196]   6 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB   168
    +                +24 GPU  (4 Tesla V100 [5120c CUDA + 640c Tensor])   32GB +122880
    +==================================================================================
    +  *** TOTAL: 196 nodes, 5824 cores + 491520 CUDA cores + 61440 Tensor cores ***
    +
    + Fast interconnect using InfiniBand EDR 100 Gb/s technology
    + Shared Storage (raw capacity): 2180 TB (GPFS) + 1300 TB (Lustre) = 3480 TB
    +
    + Support (in this order!)                       Platform notifications
    +   - User DOC ........ https://hpc.uni.lu/docs    - Twitter: @ULHPC
    +   - FAQ ............. https://hpc.uni.lu/faq
    +   - Mailing-list .... hpc-users@uni.lu
    +   - Bug reports .NEW. https://hpc.uni.lu/support (Service Now)
    +   - Admins .......... hpc-team@uni.lu (OPEN TICKETS)
    +==================================================================================
    + /!\ NEVER COMPILE OR RUN YOUR PROGRAMS FROM THIS FRONTEND !
    +     First reserve your nodes (using srun/sbatch(1))
    +Linux access2.iris-cluster.uni.lux 3.10.0-957.21.3.el7.x86_64 x86_64
    + 15:51:56 up 6 days,  2:32, 39 users,  load average: 0.59, 0.68, 0.54
    +[yourlogin@access2 ~]$
    +
    + + +

    Activate the SSH agent

    +

    To be able to use your SSH key in a public-key authentication scheme, it must be loaded by an SSH agent.

    +
      +
    • +

      Mac OS X (>= 10.5), this will be handled automatically; you will be asked to fill in the passphrase on the first connection.

      +
    • +
    • +

      Linux, this will be handled automatically; you will be asked to fill the passphrase on the first connection. However if you get a message similar to the following:

      +

(your_workstation)$> ssh -vv iris-cluster
+[...]
+Agent admitted failure to sign using the key.
+Permission denied (publickey).

      +
    • +
    +

    This means that you have to manually load your key in the SSH agent by running:

    +
    $> ssh-add ~/.ssh/id_rsa
    +
    + + +

    SSH Resources

    +
      +
    • +

      Mac OS X: Cyberduck is a free Cocoa FTP and SFTP client.

      +
    • +
    • +

      Linux: OpenSSH is available in every good linux distro, and every *BSD, and Mac OS X.

      +
    • +
    +

    SSH Advanced Tips

    +
      +
    • +

Bash completion: The bash-completion package eases the ssh command usage by providing completion for hostnames and more (assuming you set the directive HashKnownHosts to no in your ~/.ssh/config)

      +
    • +
    • +

      Forwarding a local port: You can forward a local port to a host behind a firewall.

      +
    • +
    +

    SSH forward of local port

    +

    This is useful if you run a server on one of the cluster nodes (let's say listening on port 2222) and you want to access it via the local port 1111 on your machine. Then you'll run:

    +
    (your_workstation)$> ssh iris-cluster -L 1111:iris-014:2222
    +
    + + +
      +
    • Forwarding a remote port: You can forward a remote port back to a host protected by your firewall.
    • +
    +

    SSH forward of a remote port

    +
      +
    • +

      Tunnelling for others: By using the -g parameter, you allow connections from other hosts than localhost to use your SSH tunnels. Be warned that anybody within your network may access the tunnelized host this way, which may be a security issue.

      +
    • +
    • +

  Using OpenSSH SOCKS proxy feature (with Firefox for instance): the OpenSSH ssh client also embeds a SOCKS proxy. You may activate it by using the -D parameter and a value for a port (e.g. 3128), then configuring your application (Firefox for instance) to use localhost:port (i.e. "localhost:3128") as a SOCKS proxy -- see the example after this list. The FoxyProxy module is typically useful for that.

      +
    • +
    +
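For instance, to start such a SOCKS proxy on local port 3128 through the cluster (reusing the iris-cluster alias defined above; the port number is just an example):

(your_workstation)$> ssh -D 3128 iris-cluster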

    Creating a SOCKS proxy with SSH

    +

    One very nice feature of FoxyProxy is that you can use the host resolution on the remote server. This permits you to access your local machine within the university for instance with the same name you would use within the UL network. To summarize, that's better than the VPN proxy ;)

    +

Once you have set up an SSH SOCKS proxy, you can also use tsocks, a shell wrapper around the tsocks(8) library that transparently allows an application (not aware of SOCKS) to use a SOCKS proxy. For instance, assuming you create a VNC server on a given remote server as follows:

    +
    (remote_server)$> vncserver -geometry 1366x768
    +New 'X' desktop is remote_server:1
    +
    +Starting applications specified in /home/username/.vnc/xstartup
    +Log file is /home/username/.vnc/remote_server:1.log
    +
    + + +

Then you can make the VNC client on your workstation use this tunnel to access the VNC server as follows:

    +
    (your_workstation)$> tsocks vncviewer <IP_of_remote_server>:1
    +
    + + +
      +
    • Escape character: use ~. to disconnect, even if your remote command hangs.
    • +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/connect/ood/index.html b/connect/ood/index.html new file mode 100644 index 00000000..27c99c6a --- /dev/null +++ b/connect/ood/index.html @@ -0,0 +1,2966 @@ + + + + + + + + + + + + + + + + + + + + + + + + Open On Demand Portal - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    ULHPC Open On Demand (OOD) Portal

    +

    +

Open OnDemand (OOD) is a web portal compatible with Windows, Linux and MacOS. You should log in with your ULHPC credentials using the URL communicated to you by the UL HPC team.

    +

    OOD provides a convenient web access to the HPC resources and integrates

    +
      +
    • a file management system
    • +
    • a job management system (job composer, monitoring your submitted jobs, ...)
    • +
    • an interactive command-line shell access
    • +
    • interactive apps with graphical desktop environments
    • +
    +
    +

    ULHPC OOD Portal limitations

    +

The ULHPC OOD portal is NOT accessible from outside the UniLu network. If you want to use it, you will need to set up a VPN connection to the UniLu network. Note: the portal is still under active development; missing features and bugs can be reported to the ULHPC team via the support portal.

    +
    +

    Live tests and demo are proposed during the ULHPC Tutorial: Preliminaries / OOD.

    +

    Below are illustrations of OOD capabilities on the ULHPC facility.

    +

    File management

    +

    +

    Job composer and Job List

    +

    +

    +

    Shell access

    +

    +

    Interactive sessions

    +

    +

    +

    Graphical Desktop Environment

    +

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/connect/ssh/index.html b/connect/ssh/index.html new file mode 100644 index 00000000..02fc60d1 --- /dev/null +++ b/connect/ssh/index.html @@ -0,0 +1,3750 @@ + + + + + + + + + + + + + + + + + + + + + + + + SSH - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    SSH

    +

All ULHPC servers are reached using the Secure Shell (SSH) communication and encryption protocol (version 2).

    +

Developed by SSH Communications Security Ltd., Secure Shell is an encrypted network protocol used to log into another computer over an unsecured network, to execute commands on a remote machine, and to move files from one machine to another in a secure way. On UNIX/Linux/BSD type systems, SSH is also the name of a suite of software applications for connecting via the SSH protocol. The SSH applications can execute commands on a remote machine and transfer files from one machine to another. All communications are automatically and transparently encrypted, including passwords. Most versions of SSH provide login (ssh, slogin), a remote copy operation (scp), and many also provide a secure ftp client (sftp). Additionally, SSH allows secure X Window connections.

    +

To use SSH, you have to generate a pair of keys, one public and the other private. Public-key authentication is the most secure and flexible approach to ensure a multi-purpose transparent connection to a remote server. This approach is enforced on the ULHPC platforms and assumes that the public key is known by the system in order to perform an authentication based on a challenge/response protocol instead of the classical password-based protocol.

    +

    The way SSH handles the keys and the configuration files is illustrated in the following figure:

    +

    +

    Installation

    + +

    SSH Key Generation

    +

To generate an RSA SSH key pair of 4096-bit length, just use the ssh-keygen command as follows:

    +
    ssh-keygen -t rsa -b 4096 -a 100
    +
    + +

    After the execution of this command, the generated keys are stored in the following files:

    +
      +
    • SSH RSA Private key: ~/.ssh/id_rsa. NEVER EVER TRANSMIT THIS FILE
    • +
    • SSH RSA Public key: ~/.ssh/id_rsa.pub. This file is the ONLY one SAFE to distribute
    • +
    +
    +

    To passphrase or not to passphrase

    +

To ensure the security of your SSH key pair on your laptop, you MUST protect your SSH keys with a passphrase! Note that this passphrase is purely private and has a priori nothing to do with your University or your ULHPC credentials. Nevertheless, a strong passphrase follows the same recommendations as for strong passwords (for instance, see the password requirements and guidelines).

    +

Finally, just like encryption keys, passphrases need to be kept safe and protected from unauthorised access. A password manager can help you to store all your passwords safely. The University is currently not offering a university-wide password manager, but there are many free and paid ones you can use, for example: KeePassX, PWSafe, Dashlane, 1Password or LastPass.

    +
    +

You may also want to generate an ED25519 key pair (ED25519 is the most recommended public-key algorithm available today) -- see this explanation

    +
    ssh-keygen -t ed25519 -a 100
    +
    + +

Your key pairs will be located under ~/.ssh/ and follow the naming scheme below -- the .pub extension indicates the public key part, which is the ONLY one SAFE to distribute:

    +
    $ ls -l ~/.ssh/id_*
    +-rw------- username groupname ~/.ssh/id_rsa
    +-rw-r--r-- username groupname ~/.ssh/id_rsa.pub     # Public  RSA key
    +-rw------- username groupname ~/.ssh/id_ed25519
    +-rw-r--r-- username groupname ~/.ssh/id_ed25519.pub # Public ED25519 key
    +
    + +

Ensure the access rights on the generated keys are correct using the ls -l command. In particular, the private key should be readable only by you:

    +
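If the listed permissions are more permissive than that, a minimal sketch to restrict them (the file names assume the default key locations shown above):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa ~/.ssh/id_ed25519
chmod 644 ~/.ssh/id_rsa.pub ~/.ssh/id_ed25519.pub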

    For more details, follow the ULHPC Tutorials: Preliminaries / SSH.

    +
    (deprecated - Windows only): SSH key management with MobaKeyGen tool

    On Windows with MobaXterm, a tool exists and can be used to generate an SSH key pair. While not recommended (we encourage you to run WSL), here are the instructions to follow to generate these keys:

    +
      +
    • Open the application Start > Program Files > MobaXterm.
    • +
    • Change the default home directory for a persistent home directory instead of the default Temp directory. Go onto Settings > Configuration > General > Persistent home directory.
        +
      • choose a location for your home directory.
          +
        • your local SSH configuration will be located under HOME/.ssh/
        • +
        +
      • +
      +
    • +
    • Go onto Tools > Network > MobaKeyGen (SSH key generator).
        +
      • Choose RSA as the type of key to generate and change "Number of bits in a generated key" to 4096.
      • +
      • Click on the Generate button. Move your mouse to generate some randomness.
      • +
      • Select a strong passphrase in the Key passphrase field for your key.
      • +
      +
    • +
    • Save the public and private keys as respectively id_rsa.pub and id_rsa.ppk.
        +
      • Please keep a copy of the public key, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your "welcome" mail).
      • +
      +
    • +
    +

    MobaKeyGen (SSH key generator)

    +
    +
    (deprecated - Windows only): SSH key management with PuTTY

    While no longer recommended, you may still want to use Putty and the associated tools, more precisely:

    +
      +
    • PuTTY, the free SSH client
    • +
    • Pageant, an SSH authentication agent for PuTTY tools
    • +
    • PuTTYgen, an RSA key generation utility
    • +
    • PSCP, an SCP (file transfer) client, i.e. command-line secure file copy
    • +
    • WinSCP, SCP/SFTP (file transfer) client with easy-to-use graphical interface
    • +
    +

    The different steps involved in the installation process are illustrated below (REMEMBER to tick the option "Associate .PPK files (PuTTY Private Key) with Pageant and PuTTYGen"):

    +

    Putty Setup Screen #4

    +

    Now you can use the PuTTYgen utility to generate an RSA key pair. The main steps for the generation of the keys are illustrated below (yet with 4096 bits instead of 2048):

    +

    Configuring a passphrase

    +

    Saving the private key

    +

    Saving the public key

    +
      +
    • Save the public and private keys as respectively id_rsa.pub and id_rsa.ppk.
        +
      • Please keep a copy of the public key, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your "welcome" mail).
      • +
      +
    • +
    +
    +

    Password-less logins and transfers

    +

    Password based authentication is disabled on all ULHPC servers. +You can only use public-key authentication. +This assumes that you upload your public SSH keys *.pub to your user entry on the ULHPC Identity Management Portal.

    +

    Consult the associated documentation to discover how to do it.

    +

    Once done, you can connect by SSH to the ULHPC clusters. +Note that the port on which the SSH servers are listening is not the default SSH one (i.e. 22) but 8022. Consequently, if you want to connect to the Iris cluster, open a terminal and run (substituting yourlogin with the login name you received from us):

    +
    +
    # ADAPT 'yourlogin' accordingly
    +ssh -p 8022 yourlogin@access-iris.uni.lu
    +
    + +
    +
    +
    # ADAPT 'yourlogin' accordingly
    +ssh -p 8022 yourlogin@access-aion.uni.lu
    +
    + +
    +
    +

    Of course, we advise you to setup your SSH configuration to avoid typing this detailed command. This is explained in the next section.

    +

    SSH Configuration

    +

On Linux / Mac OS / Unix / WSL, your SSH configuration is defined in ~/.ssh/config. As recommended in the ULHPC Tutorials: Preliminaries / SSH, you probably want to create the following configuration to ease further access and data transfers:

    +
    # ~/.ssh/config -- SSH Configuration
    +# Common options
    +Host *
    +    Compression yes
    +    ConnectTimeout 15
    +
    +# ULHPC Clusters
    +Host iris-cluster
    +    Hostname access-iris.uni.lu
    +
    +Host aion-cluster
    +    Hostname access-aion.uni.lu
    +
    +# /!\ ADAPT 'yourlogin' accordingly
    +Host *-cluster
    +    User yourlogin
    +    Port 8022
    +    ForwardAgent no
    +
    + +

    You should now be able to connect as follows

    +
    +
    ssh iris-cluster
    +
    + +
    +
    +
    ssh aion-cluster
    +
    + +
    +
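The same aliases also ease data transfers from your workstation, since they carry the right user name and port automatically -- a minimal sketch with hypothetical file and directory names:

# Hypothetical examples: adapt the file/directory names to your own data
scp myscript.sh iris-cluster:
rsync -avz ./results/ aion-cluster:results/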
    +
    (Windows only) Remote session configuration with MobaXterm

    This part of the documentation comes from MobaXterm documentation page +MobaXterm allows you to launch remote sessions. You just have to click on the "Sessions" button to start a new session. Select SSH session on the second screen.

    +

    MobaXterm Session button

    +

    MobaXterm Session Manager

    +

    Enter the following parameters:

    +
      +
    • Remote host: access-iris.uni.lu (repeat with access-aion.uni.lu)
    • +
    • Check the Specify username box
    • +
    • Username: yourlogin
        +
      • Adapt to match the one that was sent to you in the Welcome e-mail once your HPC account was created
      • +
      +
    • +
    • Port: 8022
    • +
    • Go in Advanced SSH settings and check the Use private key box.
        +
      • Select your previously generated key id_rsa.ppk.
      • +
      +
    • +
    +

    MobaXterm Session Manager Advanced

    +

    You can now click on Connect and enjoy.

    +
    +
    (deprecated - Windows only) - Remote session configuration with PuTTY

    If you want to connect to one of the ULHPC cluster, open Putty and enter the following settings:

    +
      +
    • In Category:Session :
    • +
    • Host Name: access-iris.uni.lu (or access-aion.uni.lu if you want to access Aion)
    • +
    • Port: 8022
    • +
    • Connection Type: SSH (leave as default)
    • +
    • In Category:Connection:Data :
    • +
    • Auto-login username: yourlogin
        +
      • Adapt to match the one that was sent to you in the Welcome e-mail once your HPC account was created
      • +
      +
    • +
    • In Category:SSH:Auth :
    • +
    • Upload your private key: Options controlling SSH authentication
    • +
    +

    Click on Open button. If this is the first time connecting to the server from this computer a Putty Security Alert will appear. Accept the connection by clicking Yes.

    +

    You should now be logged into the selected ULHPC login node.

    +

    Now you probably want want to save the configuration of this connection:

    +
      +
    • Go onto the Session category.
    • +
    • Enter the settings you want to save.
    • +
    • Enter a name in the Saved session field (for example Iris for access to Iris cluster).
    • +
    • Click on the Save button.
    • +
    +

    Next time you want to connect to the cluster, click on Load button and Open to open a new connection.

    +
    +

    SSH Agent

    +

    On your laptop

    +

    To be able to use your SSH key in a public-key authentication scheme, it must be loaded by an SSH agent.

    +
      +
    • +

      Mac OS X (>= 10.5), this will be handled automatically; you will be asked to fill in the passphrase on the first connection.

      +
    • +
    • +

      Linux, this will be handled automatically; you will be asked to fill the passphrase on the first connection.

      +
    • +
    +

    However if you get a message similar to the following:

    +
    (laptop)$> ssh -vv iris-cluster
    +[...]
    +Agent admitted failure to sign using the key.
    +Permission denied (publickey).
    +
    + +

    This means that you have to manually load your key in the SSH agent by running:

    +
    (laptop)$> ssh-add ~/.ssh/id_rsa
    +Enter passphrase for ~/.ssh/id_rsa:           # <-- enter your passphrase here
    +Identity added: ~/.ssh/id_rsa (<login>@<hostname>)
    +
    +(laptop)$> ssh-add ~/.ssh/id_ed25519
    +Enter passphrase for ~/.ssh/id_ed25519:       # <-- enter your passphrase here
    +Identity added: ~/.ssh/id_ed25519 (<login>@<hostname>)
    +
    + +
      +
    • On Ubuntu/WSL, if you experience issues when using ssh-add, you should install the keychain package and use it as follows (eventually add it to your ~/.profile):
    • +
    +
    # Installation
    +(laptop)$> sudo apt install keychain
    +
    +# Save your passphrase
    +/usr/bin/keychain --nogui ~/.ssh/id_ed25519    # (eventually) repeat with ~/.ssh/id_rsa
    +# Load the agent in your shell
    +source ~/.keychain/$(hostname)-sh
    +
    + +
    (Windows only) SSH Agent within MobaXterm
      +
    • Go in Settings > SSH Tab
    • +
    • In SSH agents section, check Use internal SSH agent "MobAgent"
    • +
    +

    +
      +
    • Click on the + button on the right
    • +
    • Select your private key file. If you have several keys, you can add them by doing steps above again.
    • +
    • Click on "Show keys currently loaded in MobAgent". An advertisement window may appear asking if you want to run MobAgent. Click on "Yes".
    • +
    • Check that your key(s) appears in the window.
    • +
    +

    +
      +
    • Close the window.
    • +
    • Click on OK. Restart MobaXterm.
    • +
    +
    +
    (deprecated - Windows only) - SSH Agent with PuTTY Pageant

    To be able to use your PuTTY key in a public-key authentication scheme, it must be loaded by an SSH agent. +You should run Pageant for that. +To load your SSH key in Pageant:

    +
      +
    • Right-click on the pageant icon in the system tray,
        +
      • click on the Add key menu item
      • +
      • select the private key file you saved while running puttygen.exe i.e. ``
      • +
      • click on the Open button: a new dialog will pop up and ask for your passphrase. Once your passphrase is entered, your key will be loaded in pageant, enabling you to connect with Putty.
      • +
      +
    • +
    +
    +

    On ULHPC clusters

    +

For security reasons, SSH agent forwarding is prohibited and explicitly disabled by default (see the ForwardAgent no directive in the above configuration). You may thus need to manually load an agent once connected on the ULHPC facility, for instance if you are tired of typing the passphrase of an SSH key generated on the cluster to access a remote (private) service.

    +

    You need to proceed as follows:

    +
    $ eval "$(ssh-agent)"    # export the SSH_AUTH_SOCK and SSH_AGENT_PID variables
    +$ ssh-add ~/.ssh/id_rsa
    +# [...]
    +Enter passphrase for [...]
    +Identity added: ~/.ssh/id_rsa (<login>@<hostname>)
    +
    + +

You can then enjoy it. Be aware however that this exposes your private key, so you MUST properly kill your agent when you no longer need it, using

    +
    $ eval "$(ssh-agent -k)"
    +Agent pid <PID> killed
    +
    + +

    Key fingerprints

    +

    ULHPC may occasionally update the host keys on the major systems. Check here +to confirm the current fingerprints.

    +
    +

    With regards access-iris.uni.lu:

    +
    256 SHA256:tkhRD9IVo04NPw4OV/s2LSKEwe54LAEphm7yx8nq1pE /etc/ssh/ssh_host_ed25519_key.pub (ED25519)
    +2048 SHA256:WDWb2hh5uPU6RgaSotxzUe567F3scioJWy+9iftVmhI /etc/ssh/ssh_host_rsa_key.pub (RSA)
    +
    + +
    +
    +

    With regards access-aion.uni.lu:

    +
    256 SHA256:jwbW8pkfCzXrh1Xhf9n0UI+7hd/YGi4FlyOE92yxxe0 [access-aion.uni.lu]:8022 (ED25519)
    +3072 SHA256:L9n2gT6aV9KGy0Xdh1ks2DciE9wFz7MDRBPGWPFwFK4 [access-aion.uni.lu]:8022 (RSA)
    +
    + +
    +
    +
    +

    Get SSH key fingerprint

    +

    The ssh fingerprints can be obtained via: +

    ssh-keygen -lf <(ssh-keyscan -t rsa,ed25519 $(hostname) 2>/dev/null)
    +
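You can also check these fingerprints from your workstation before the first connection -- a sketch assuming the non-default port 8022 used by the access servers:

ssh-keygen -lf <(ssh-keyscan -p 8022 -t rsa,ed25519 access-iris.uni.lu 2>/dev/null)
ssh-keygen -lf <(ssh-keyscan -p 8022 -t rsa,ed25519 access-aion.uni.lu 2>/dev/null)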

    +
    +
    Putty key fingerprint format

Depending on the SSH client you use to connect to ULHPC systems, you may see different key fingerprints. For example, Putty uses a different fingerprint format, as follows:

    +
      +
    • access-iris.uni.lu +
      ssh-ed25519 255 4096 07:6a:5f:11:df:d4:3f:d4:97:98:12:69:3a:63:70:2f
      +
    • +
    +

You may see the following warning when connecting to the ULHPC clusters with Putty, but it is safe to ignore it.

    +
    PuTTY Security Alert
    +The server's host key is not cached in the registry. You have no guarantee that the server is the computer you think it is.
    +The server's ssh-ed25519 key fingerprint is:
    +ssh-ed25519 255 4096 07:6a:5f:11:df:d4:3f:d4:97:98:12:69:3a:63:70:2f
    +If you trust this host, hit Yes to add the key to PuTTY's cache and carry on connecting.
    +If you want to carry on connecting just once, without adding the key to the cache, hit No.
    +If you do not trust this host, hit Cancel to abandon the connection.
    +
    + +
    +

    Host Keys

    +

    These are the entries in ~/.ssh/known_hosts.

    +
    +

    The known host SSH entry for the Iris cluster should be as follows:

    +
    [access-iris.uni.lu]:8022 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOP1eF8uJ37h5jFQQShn/NHRGD/d8KsMMUTHkoPRANLn
    +
    + +
    +
    +

    The known host SSH entry for the Aion cluster should be as follows:

    +
    [access-aion.uni.lu]:8022 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFmcYJ7T6A1wOvIQaohgwVCrKLqIrzpQZAZrlEKx8Vsy
    +
    + +
    +
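If you prefer to add these entries manually, a minimal sketch appending both published host keys to your known_hosts file:

# Append the published ED25519 host keys (skip any entry already present)
cat >> ~/.ssh/known_hosts <<EOF
[access-iris.uni.lu]:8022 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOP1eF8uJ37h5jFQQShn/NHRGD/d8KsMMUTHkoPRANLn
[access-aion.uni.lu]:8022 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFmcYJ7T6A1wOvIQaohgwVCrKLqIrzpQZAZrlEKx8Vsy
EOF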
    +

    Troubleshooting

    +

    See the corresponding section.

    +

    Advanced SSH Tips and Tricks

    +

    CLI Completion

    +

The bash-completion package eases the ssh command usage by providing completion for hostnames and more (assuming you set the directive HashKnownHosts to no in your ~/.ssh/config)

    +

    SOCKS 5 Proxy plugin

    +

Many data analytics frameworks involve a web interface (at the level of the master and/or the workers) that you probably want to access in a relatively transparent way.

    +

For that, a convenient way is to rely on a SOCKS proxy, which is basically an SSH tunnel in which specific applications forward their traffic down the tunnel to the server; on the server end, the proxy forwards the traffic out to the general Internet. Unlike a VPN, a SOCKS proxy has to be configured on an app-by-app basis on the client machine, but can be set up without any specialized client agent. The general principle is depicted below.

    +

    Creating a SOCKS proxy with SSH

    +

    Setting Up the Tunnel

    +

    To initiate such a SOCKS proxy using SSH (listening on localhost:1080 for instance), you simply need to use the -D 1080 command line option when connecting to a remote server:

    +
    +
    ssh -D 1080 -C iris-cluster
    +
    + +
    +
    +
    ssh -D 1080 -C aion-cluster
    +
    + +
    +
    +
      +
    • -D: Tells SSH that we want a SOCKS tunnel on the specified port number (you can choose a number between 1025-65536)
    • +
    • -C: Compresses the data before sending it
    • +
    +

    FoxyProxy [Firefox] Extension

    +

Now that you have an SSH tunnel, it's time to configure your web browser (recommended: Firefox) to use that tunnel. In particular, install the FoxyProxy extension for Firefox and configure it to use your SOCKS proxy:

    +
      +
    • Right click on the fox icon, Select Options
    • +
    • Add a new proxy button
    • +
    • Name: ULHPC proxy
    • +
    • Informations > Manual configuration
        +
      • Host IP: 127.0.0.1
      • +
      • Port: 1080
      • +
      • Check the Proxy SOCKS Option
      • +
      +
    • +
    • Click on OK
    • +
    • Close
    • +
    • Open a new tab
    • +
    • Click on the Fox
    • +
    • Choose the ULHPC proxy
        +
      • disable it when you no longer need it.
      • +
      +
    • +
    +

    You can now access any web interface deployed on any service reachable from the SSH jump host i.e. the ULHPC login node.

    +

Using tsocks

    +

Once you have set up an SSH SOCKS proxy, you can also use tsocks, a shell wrapper around the tsocks(8) library that transparently allows an application (not aware of SOCKS) to use a SOCKS proxy. For instance, assuming you create a VNC server on a given remote server as follows:

    +
    (remote_server)$> vncserver -geometry 1366x768
    +New 'X' desktop is remote_server:1
    +
    +Starting applications specified in /home/username/.vnc/xstartup
    +Log file is /home/username/.vnc/remote_server:1.log
    +
    + +

Then you can make the VNC client on your workstation use this tunnel to access the VNC server as follows:

    +
    (laptop)$> tsocks vncviewer <IP_of_remote_server>:1
    +
    + +
    +

SSH escape character

    +

    Use ~. to disconnect, even if your remote command hangs.

    +
    +

    SSH Port Forwarding

    +

    Forwarding a local port

    +

    You can forward a local port to a host behind a firewall.

    +

    SSH forward of local port

    +

    This is useful if you run a server on one of the cluster nodes (let's say listening on port 2222) and you want to access it via the local port 1111 on your machine. Then you'll run:

    +
    # Here targeting iris cluster
    +(laptop) $ ssh iris-cluster -L 1111:iris-014:2222
    +
    + +

    Forwarding a remote port

    +

    You can forward a remote port back to a host protected by your firewall.

    +

    SSH forward of a remote port

    +

This is useful when you want the HPC node to access a service running on your local machine. For instance, if your local machine runs a service listening on some local port, say 2222, and you want it to be reachable from the HPC node on some port there, say 1111, then you'll run:

    +
    # Here targeting the iris cluster
    +(local machine) $ ssh iris-cluster -R 1111:$(hostname -i):2222
    +
    + +

    Tunnelling for others

    +

    By using the -g parameter, you allow connections from other hosts than localhost to use your SSH tunnels. Be warned that anybody within your network may access the tunnelized host this way, which may be a security issue.

    +

    SSH jumps

    +

Compute nodes are not directly accessible from the outside network. To log into a cluster node you will need to jump through a login node. Remember, you need a job running on a node before you can SSH into it. Assume for instance that you have a job running on aion-0014. Then, connect to aion-0014 with:

    +
    ssh -J ${USER}@access-aion.uni.lu:8022 ${USER}@aion-0014
    +
    + +

The domain resolution in the login node will determine the IP of aion-0014. You can always use the IP address of the node directly if you know it.

    +
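If you jump to compute nodes regularly, you may declare the jump once in your ~/.ssh/config -- a minimal sketch assuming OpenSSH 7.3+ (for ProxyJump) and the iris-cluster/aion-cluster aliases defined above; the host patterns simply follow the node naming scheme and may need adaptation:

# Jump automatically through the corresponding access server
Host iris-0?? iris-1??
    ProxyJump iris-cluster
Host aion-0???
    ProxyJump aion-cluster
Host iris-0?? iris-1?? aion-0???
    User yourlogin

# You can then simply run, e.g.:
#   ssh iris-014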

    Passwordless SSH jumps

    +

The SSH agent is not configured in the login nodes for security reasons. As a result, compute nodes will request your password. To configure a passwordless jump to a compute node, you will need to install the same key in the SSH configuration of both your local machine and the login node.

    +

To avoid exposing the keys of your personal machine, create and share a new dedicated key. Create the key on your local machine, +

    ssh-keygen -a 127 -t ed25519 -f ~/.ssh/ulhpc_id_ed25519
    +
+and then copy both the private and public keys to your HPC account, +
    scp ~/.ssh/ulhpc_id_ed25519* aion-cluster:~/.ssh/
    +
    +where the command assumes that you have setup your SSH configuration file. Finally, add the key to the list of authorized keys: +
    ssh-copy-id -i ~/.ssh/ulhpc_id_ed25519 aion-cluster
    +
    +Then you can connect without a password to any compute node at which you have a job running with the command: +
    ssh -i ~/.ssh/ulhpc_id_ed25519 -J ${USER}@access-aion.uni.lu:8022 ${USER}@<node address>
    +

    +

    In the <node address> option you can use the node IP address or the node name.

    +

    Port forwarding over SSH jumps

    +

You can combine the jump command with other options, such as port forwarding, for instance to access from your local machine a web server running on a compute node. Assume for instance that you have a server running on iris-014 that listens on the IP 127.0.0.1 and port 2222, and that you would like to forward the remote port 2222 to port 1111 of your local machine. Then, call the port forwarding command with a jump through the login node:

    +
    ssh -J iris-cluster -L 1111:127.0.0.1:2222 <cluster username>@iris-014
    +
    + +

    This command can be combined with passwordless access to the cluster node.

    +

    Extras Tools around SSH

    +
      +
    • +

Assh - Advanced SSH config is a transparent wrapper that makes ~/.ssh/config easier to manage

      +
        +
      • support for templates, aliases, defaults, inheritance etc.
      • +
      • gateways: transparent ssh connection chaining
      • +
      • +

        more flexible command-line. Ex: Connect to hosta using hostb as a gateway +

        $ ssh hosta/hostb
        +

        +
      • +
      • +

        drastically simplify your SSH config

        +
      • +
      • Linux / Mac OS only
      • +
      +
    • +
    • +

      ClusterShell: clush, nodeset (or cluset),

      +
        +
      • light, unified, robust command execution framework
      • +
      • well-suited to ease daily administrative tasks of Linux clusters.
          +
        • using tools like clush and nodeset
        • +
        +
      • +
efficient, parallel, scalable command execution engine (in Python)
      • +
      • provides an unified node groups syntax and external group access
          +
        • see nodeset and the NodeSet class
        • +
        +
      • +
      +
    • +
    • +

      DSH - Distributed / Dancer's Shell

      +
    • +
sshuttle, "where transparent proxy meets VPN meets ssh"
    • +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/connect/troubleshooting/index.html b/connect/troubleshooting/index.html new file mode 100644 index 00000000..6e6bddcd --- /dev/null +++ b/connect/troubleshooting/index.html @@ -0,0 +1,3006 @@ + + + + + + + + + + + + + + + + + + + + + + + + Troubleshooting - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Troubleshooting

    + +

    There are several possibilities and usually the error message can give you some hints.

    +

    Your account has expired

    +

    Please open a ticket on ServiceNow +(HPC → User access & accounts → Report issue with cluster access) +or send us an email to hpc-team@uni.lu with the current end date of your +contract and we will extend your account accordingly.

    +

    "Access Denied" or "Permission denied (publickey)"

    +

Basically, you are NOT able to connect to the access servers until your SSH public key is configured. There can be several reasons that explain the denied connection message:

    +
      +
    • Make sure you are using the proper ULHPC user name (and not your local + username or University/Eduroam login).
        +
      • Check your mail entitled "[HPC@Uni.lu] Welcome - Account information" + to get your ULHPC login
      • +
      +
    • +
    • Log into IPA and double check your SSH public key settings.
    • +
    • Ensure you have run your SSH agent
    • +
    • If you have a new computer or for some other reason you have generated + new ssh key, please update your ssh keys on the IPA user portal.
        +
      • See IPA for more details
      • +
      +
    • +
    • You are using (deprecated) DSA/RSA keys. As per the + OpenSSH website:
      +

      "OpenSSH 7.0 and greater similarly disable the ssh-dss (DSA) public key algorithm. +It too is weak and we recommend against its use". Solution: generate a new RSA keypair +(3092 bit or more) and re-upload it on the IPA web portal (use the URL +communicated to you by the UL HPC team in your “welcome” mail). For more +information on keys, see this website.

      +
      +
    • +
    +

    +
      +
    • +

      Your public key is corrupted, please verify and re-upload it on the IPA web portal.

      +
    • +
    • +

      We have taken the cluster down for maintenance and we forgot to activate the + banner message mentioning this. Please check the calendar, the latest Twitter + messages (box on the right of this page) and the messages sent on the hpc-users mailing list.

      +
    • +
    +

If the above steps did not solve your issue, please open a ticket on ServiceNow (HPC → User access & accounts → Report issue with cluster access) or send us an email to hpc-team@uni.lu.

    +

    Host identification changed

    +
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    +@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
    +@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    +IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    +Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    +It is also possible that a host key has just been changed.
    +...
    +
    + +

Ensure that your ~/.ssh/known_hosts file contains the correct entries for the ULHPC clusters and confirm the fingerprints using the posted fingerprints.

    +
      +
1. Open ~/.ssh/known_hosts
2. Remove any lines referring to Iris and Aion and save the file (an example is shown below)
3. Paste the specified host key entries (for all clusters) OR retry connecting to the host and accept the new host key after verifying that you have the correct "fingerprint" from the reference list.
+
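For instance, the outdated entries can be removed with ssh-keygen -R -- note the non-default port in the host specification used by the access servers:

ssh-keygen -R "[access-iris.uni.lu]:8022"
ssh-keygen -R "[access-aion.uni.lu]:8022"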

    Be careful with permission changes to your $HOME

    +

If you change your home directory to be writeable by the group, ssh will not let you connect anymore. It requires drwxr-xr-x or 755 (or less) on your $HOME and ~/.ssh, and -rw-r--r-- or 644 (or less) on ~/.ssh/authorized_keys.

    +

    File and folder permissions can be verified at any time using stat $path, e.g.:

    +
    $> stat $HOME
    +$> stat $HOME/.ssh
    +$> stat $HOME/.ssh/authorized_keys
    +
    + + +

    Check out the description of the notation of file permissions +in both symbolic and numeric mode.

    +
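If stat reveals permissions that are more open than required, a minimal sketch to restore the expected ones (to be run on the cluster):

chmod 755 $HOME                      # or stricter, e.g. 700
chmod 700 $HOME/.ssh
chmod 644 $HOME/.ssh/authorized_keys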

On your local machine, you also need to have read/write permissions to ~/.ssh/config for your user only. This can be ensured with the following command:

    +
    chmod 600 ~/.ssh/config
    +
    + + +

    Open a ticket

    +

    If you cannot solve your problem, do not hesitate to open a ticket on the Service Now portal.

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/connect/windows/index.html b/connect/windows/index.html new file mode 100644 index 00000000..2548c6bc --- /dev/null +++ b/connect/windows/index.html @@ -0,0 +1,3132 @@ + + + + + + + + + + + + + + + + + + + + + + + + Windows - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Windows

    + +

On this page, we cover two different SSH clients: MobaXterm and PuTTY. Choose your preferred tool.

    +

    MobaXterm

    +

    Installation notes

    +

    The following steps will help you to configure MobaXterm to access the UL HPC clusters. +You can also check out the MobaXterm demo which shows an overview of its features.

    +

    First, download and install MobaXterm. Open the application Start > Program Files > MobaXterm.

    +

    Change the default home directory for a persistent home directory instead of the default Temp directory. Go onto Settings > Configuration > General > Persistent home directory. Choose a location for your home directory.

    +

    Your local SSH configuration is located in the HOME/.ssh/ directory and consists of:

    +
      +
    • +

      HOME/.ssh/id_rsa.pub: your SSH public key. This one is the only one SAFE to distribute.

      +
    • +
    • +

      HOME/.ssh/id_rsa: the associated private key. NEVER EVER TRANSMIT THIS FILE

      +
    • +
    • +

      (eventually) the configuration of the SSH client HOME/.ssh/config

      +
    • +
    • +

      HOME/.ssh/known_hosts: Contains a list of host keys for all hosts you have logged into that are not already in the system-wide list of known host keys. This permits to detect man-in-the-middle attacks.

      +
    • +
    +

    SSH Key Management

    +

    Choose the method you prefer: either the graphical interface MobaKeyGen or command line generation of the ssh key.

    +

    With MobaKeyGen tool

    +

    Go onto Tools > Network > MobaKeyGen (SSH key generator). Choose RSA as the type of key to generate and change "Number of bits in a generated key" to 4096. Click on the Generate button. Move your mouse to generate some randomness.

    +
    +

    Warning

    +

    To ensure the security of the platform and your data stored on it, you must protect your SSH keys with a passphrase! +Additionally, your private key and passphrase should never be transmitted to anybody.

    +
    +

    Select a strong passphrase in the Key passphrase field for your key. Save the public and private keys as respectively id_rsa.pub and id_rsa.ppk. +Please keep a copy of the public key, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your "welcome" mail).

    +

    MobaKeyGen (SSH key generator)

    +

    IPA user portal

    +

    With local terminal

    +

Click on Start local terminal. To generate an SSH key pair, just use the ssh-keygen command, typically as follows:

    +
    $> ssh-keygen -t rsa -b 4096
    +Generating public/private rsa key pair.
    +Enter file in which to save the key (/home/user/.ssh/id_rsa):
    +Enter passphrase (empty for no passphrase):
    +Enter same passphrase again:
    +Your identification has been saved in /home/user/.ssh/id_rsa.
    +Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    +The key fingerprint is:
    +fe:e8:26:df:38:49:3a:99:d7:85:4e:c3:85:c8:24:5b username@yourworkstation
    +The key's randomart image is:
    ++---[RSA 4096]----+
    +|                 |
    +|      . E        |
    +|       * . .     |
    +|      . o . .    |
    +|        S. o     |
    +|       .. = .    |
    +|       =.= o     |
    +|      * ==o      |
    +|       B=.o      |
    ++-----------------+
    +
    + + +
    +

    Warning

    +

    To ensure the security of the platform and your data stored on it, you must protect your SSH keys with a passphrase!

    +
    +

    After the execution of ssh-keygen command, the keys are generated and stored in the following files:

    +
      +
    • SSH RSA Private key: HOME/.ssh/id_rsa. Again, NEVER EVER TRANSMIT THIS FILE
    • +
    • SSH RSA Public key: HOME/.ssh/id_rsa.pub. This file is the ONLY one SAFE to distribute
    • +
    +

    Configuration

    +

    This part of the documentation comes from MobaXterm documentation page

    +

    MobaXterm allows you to launch remote sessions. You just have to click on the "Sessions" button to start a new session. Select SSH session on the second screen.

    +

    MobaXterm Session button

    +

    MobaXterm Session Manager

    +

    Enter the following parameters:

    +
      +
    • Remote host: access-iris.uni.lu or access-aion.uni.lu
    • +
    • Check the Specify username box
    • +
    • Username: yourlogin
    • +
    • as was sent to you in the Welcome e-mail once your HPC account was created
    • +
    • Port: 8022
    • +
    +

    Go in Advanced SSH settings and check the Use private key box. Select your previously generated key id_rsa.ppk.

    +

    MobaXterm Session Manager Advanced

    +

    Click on Connect. The following text appears.

    +
    ==================================================================================
    + Welcome to access2.iris-cluster.uni.lux
    +==================================================================================
    +                          _                         ____
    +                         / \   ___ ___ ___  ___ ___|___ \
    +                        / _ \ / __/ __/ _ \/ __/ __| __) |
    +                       / ___ \ (_| (_|  __/\__ \__ \/ __/
    +                      /_/   \_\___\___\___||___/___/_____|
    +               _____      _        ____ _           _          __
    +              / /_ _|_ __(_)___   / ___| |_   _ ___| |_ ___ _ _\ \
    +             | | | || '__| / __| | |   | | | | / __| __/ _ \ '__| |
    +             | | | || |  | \__ \ | |___| | |_| \__ \ ||  __/ |  | |
    +             | ||___|_|  |_|___/  \____|_|\__,_|___/\__\___|_|  | |
    +              \_\                                              /_/
    +==================================================================================
    +
    +=== Computing Nodes ========================================= #RAM/n === #Cores ==
    + iris-[001-108] 108 Dell C6320 (2 Xeon E5-2680v4@2.4GHz [14c/120W]) 128GB  3024
    + iris-[109-168]  60 Dell C6420 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 128GB  1680
    + iris-[169-186]  18 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB   504
    +                +72 GPU  (4 Tesla V100 [5120c CUDA + 640c Tensor])   16GB +368640
    + iris-[187-190]   4 Dell R840 (4 Xeon Platin.8180M@2.5GHz [28c/205W]) 3TB   448
    + iris-[191-196]   6 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB   168
    +                +24 GPU  (4 Tesla V100 [5120c CUDA + 640c Tensor])   32GB +122880
    +==================================================================================
    +  *** TOTAL: 196 nodes, 5824 cores + 491520 CUDA cores + 61440 Tensor cores ***
    +
    + Fast interconnect using InfiniBand EDR 100 Gb/s technology
    + Shared Storage (raw capacity): 2180 TB (GPFS) + 1300 TB (Lustre) = 3480 TB
    +
    + Support (in this order!)                       Platform notifications
    +   - User DOC ........ https://hpc.uni.lu/docs    - Twitter: @ULHPC
    +   - FAQ ............. https://hpc.uni.lu/faq
    +   - Mailing-list .... hpc-users@uni.lu
    +   - Bug reports .NEW. https://hpc.uni.lu/support (Service Now)
    +   - Admins .......... hpc-team@uni.lu (OPEN TICKETS)
    +==================================================================================
    + /!\ NEVER COMPILE OR RUN YOUR PROGRAMS FROM THIS FRONTEND !
    +     First reserve your nodes (using srun/sbatch(1))
    +Linux access2.iris-cluster.uni.lux 3.10.0-957.21.3.el7.x86_64 x86_64
    + 15:51:56 up 6 days,  2:32, 39 users,  load average: 0.59, 0.68, 0.54
    +[yourlogin@access2 ~]$
    +
    + + +

    Putty

    +

    Installation notes

    +

    You need to install Putty and the associated tools, more precisely:

    +
      +
    • +

      PuTTY, the free SSH client

      +
    • +
    • +

      Pageant, an SSH authentication agent for PuTTY tools

      +
    • +
    • +

      PuTTYgen, an RSA key generation utility

      +
    • +
    • +

      PSCP, an SCP (file transfer) client, i.e. command-line secure file copy

      +
    • +
    • +

      WinSCP, SCP/SFTP (file transfer) client with easy-to-use graphical interface

      +
    • +
    +

    The simplest method is probably to download and run the latest Putty installer (does not include WinSCP).

    +

    The different steps involved in the installation process are illustrated below (REMEMBER to tick the option "Associate .PPK files (PuTTY Private Key) with Pageant and PuTTYGen"):

    +

    Windows security warning

    +

    Putty Setup Screen #1

    +

    Putty Setup Screen #2

    +

    Putty Setup Screen #3

    +

    Putty Setup Screen #4

    +

    Now you should have all the Putty programs available in Start / All Programs / Putty.

    +

    SSH Key Management

    +

    Here you can use the PuTTYgen utility, an RSA key generation utility.

    +

    The main steps for the generation of the keys are illustrated below:

    +

    Putty Key Generator interface

    +

    Key generation in progress

    +

    Configuring a passphrase

    +

    Saving the private key

    +

    Saving the public key

    +

    Configuration

    +

    In order to be able to login to the clusters, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your "welcome" mail).

    +

    IPA user portal

    +

    The port on which the SSH servers are listening is not the default one (i.e. 22) but 8022. +Consequently, if you want to connect to the Iris cluster, open Putty and enter the following settings:

    +
      +
    • In Category:Session :
    • +
    • Host Name: access-iris.uni.lu or access-aion.uni.lu
    • +
    • Port: 8022
    • +
    • Connection Type: SSH (leave as default)
    • +
    • In Category:Connection:Data :
    • +
    • Auto-login username: yourlogin
    • +
    • In Category:SSH:Auth :
    • +
    • Upload your private key: Options controlling SSH authentication
    • +
    +

    Click on Open button. If this is the first time connecting to the server from this computer a Putty Security Alert will appear. Accept the connection by clicking Yes.

    +

    Enter your login (username of your HPC account). You are now logged into Iris access server with SSH.

    +

Alternatively, you may want to save the configuration of this connection. Go onto the Session category. Enter the settings you want to save. Enter a name in the Saved session field (for example Iris for access to the Iris cluster). Click on the Save button. Next time you want to connect to the cluster, click on the Load button and Open to open a new connection.

    +

    Now you'll be able to obtain the welcome banner:

    +
    ==================================================================================
    + Welcome to access2.iris-cluster.uni.lux
    +==================================================================================
    +                          _                         ____
    +                         / \   ___ ___ ___  ___ ___|___ \
    +                        / _ \ / __/ __/ _ \/ __/ __| __) |
    +                       / ___ \ (_| (_|  __/\__ \__ \/ __/
    +                      /_/   \_\___\___\___||___/___/_____|
    +               _____      _        ____ _           _          __
    +              / /_ _|_ __(_)___   / ___| |_   _ ___| |_ ___ _ _\ \
    +             | | | || '__| / __| | |   | | | | / __| __/ _ \ '__| |
    +             | | | || |  | \__ \ | |___| | |_| \__ \ ||  __/ |  | |
    +             | ||___|_|  |_|___/  \____|_|\__,_|___/\__\___|_|  | |
    +              \_\                                              /_/
    +==================================================================================
    +
    +=== Computing Nodes ========================================= #RAM/n === #Cores ==
    + iris-[001-108] 108 Dell C6320 (2 Xeon E5-2680v4@2.4GHz [14c/120W]) 128GB  3024
    + iris-[109-168]  60 Dell C6420 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 128GB  1680
    + iris-[169-186]  18 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB   504
    +                +72 GPU  (4 Tesla V100 [5120c CUDA + 640c Tensor])   16GB +368640
    + iris-[187-190]   4 Dell R840 (4 Xeon Platin.8180M@2.5GHz [28c/205W]) 3TB   448
    + iris-[191-196]   6 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB   168
    +                +24 GPU  (4 Tesla V100 [5120c CUDA + 640c Tensor])   32GB +122880
    +==================================================================================
    +  *** TOTAL: 196 nodes, 5824 cores + 491520 CUDA cores + 61440 Tensor cores ***
    +
    + Fast interconnect using InfiniBand EDR 100 Gb/s technology
    + Shared Storage (raw capacity): 2180 TB (GPFS) + 1300 TB (Lustre) = 3480 TB
    +
    + Support (in this order!)                       Platform notifications
    +   - User DOC ........ https://hpc.uni.lu/docs    - Twitter: @ULHPC
    +   - FAQ ............. https://hpc.uni.lu/faq
    +   - Mailing-list .... hpc-users@uni.lu
    +   - Bug reports .NEW. https://hpc.uni.lu/support (Service Now)
    +   - Admins .......... hpc-team@uni.lu (OPEN TICKETS)
    +==================================================================================
    + /!\ NEVER COMPILE OR RUN YOUR PROGRAMS FROM THIS FRONTEND !
    +     First reserve your nodes (using srun/sbatch(1))
    +
    + + +

    Activate the SSH agent

    +

    To be able to use your SSH key in a public-key authentication scheme, it must be loaded by an SSH agent.

    +

    You should run Pageant. To load your SSH key in Pageant, just right-click on the pageant icon in the system tray, click on the Add key menu item and select the private key file you saved while running puttygen.exe and click on the Open button: a new dialog will pop up and ask for your passphrase. Once your passphrase is entered, your key will be loaded in pageant, enabling you to connect with Putty.

    +

    Open Putty.exe (connection type: SSH)

    +
      +
    • In _Category:Session_:
    • +
    • Host Name: access-iris.uni.lu or access-aion.uni.lu
    • +
    • Port: 8022
    • +
    • Saved session: Iris
    • +
    • In Category:Connection:Data:
    • +
    • Auto-login username: yourlogin
    • +
    • Go back to Category:Session and click on Save
    • +
    • Click on Open
    • +
    +

    SSH Resources

    +
      +
    • OpenSSH/Cygwin: OpenSSH is available with Cygwin. You may then find the same features in your SSH client even if you run Windows. Furthermore, Cygwin also embeds many other GNU Un*x like tools, and even a FREE X server for windows.
    • +
    • Putty: Free windowish SSH client
    • +
    • ssh.com Free for non commercial use windows client
    • +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/containers/index.html b/containers/index.html new file mode 100644 index 00000000..a1c270e6 --- /dev/null +++ b/containers/index.html @@ -0,0 +1,3169 @@ + + + + + + + + + + + + + + + + + + + + + + + + About - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Containers

    +

Many applications and libraries can also be used through container systems. The updated Singularity tool provides many new features, of which we can especially highlight support for Open Containers Initiative (OCI) containers (including Docker OCI) and support for secure containers, i.e., building and running encrypted containers with RSA keys and passphrases.

    +

    Singularity

    +

    +

The ULHPC offers the possibility to run Singularity containers. Singularity is an open source container platform designed to be simple, fast, and secure. Singularity is optimized for EPC and HPC workloads, allowing untrusted users to run untrusted containers in a trusted way.

    +

    Loading Singularity

    +

    To use Singularity, you need to load the corresponding Lmod module.

    +
    >$ module load tools/Singularity
    +
    + +
    +

    Warning

    +

Modules are not allowed on the access servers. To test Singularity interactively, remember to ask for an interactive job first. +

    salloc -p interactive --pty bash
    +

    +
    +

    Pulling container images

    +

Like Docker, Singularity provides a way to pull images from hubs such as DockerHub and Singularity Hub.

    +

    >$ singularity pull docker://ubuntu:latest
    +
    +You should see the following output:

    +
    +

    Output

    +

    INFO:    Converting OCI blobs to SIF format
    +INFO:    Starting build...
    +
    Getting image source signatures
    +Copying blob d72e567cc804 done
    +Copying blob 0f3630e5ff08 done
    +Copying blob b6a83d81d1f4 done
    +Copying config bbea2a0436 done
    +Writing manifest to image destination
    +Storing signatures
    +...
    +INFO:    Creating SIF file...
    +

    +
    +

    You may now test the container by executing some inner commands:

    +
    >$ singularity exec ubuntu_latest.sif cat /etc/os-release
    +
    + +
    +

    Output

    +

    NAME="Ubuntu"
    +VERSION="20.04.1 LTS (Focal Fossa)"
    +ID=ubuntu
    +ID_LIKE=debian
    +PRETTY_NAME="Ubuntu 20.04.1 LTS"
    +VERSION_ID="20.04"
+HOME_URL="https://www.ubuntu.com/"
+SUPPORT_URL="https://help.ubuntu.com/"
+BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
+PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    +VERSION_CODENAME=focal
    +UBUNTU_CODENAME=focal
    +

    +
    +

    Building container images

    +

    Building container images requires root privileges. Therefore, users have to build images on their local machine before transferring them to the platform. Please refer to the Data transfer section for this purpose.

    +
    +

    Note

    +

    Singularity 3 introduces the ability to build your containers in the cloud, so you can easily and securely create containers for your applications without special privileges or setup on your local system. The Remote Builder can securely build a container for you from a definition file entered online or via the Singularity CLI (see https://cloud.sylabs.io/builder for more details).

    +
    +

    GPU-enabled Singularity containers

    +

    This section relies on the excellent documentation from CSCS. In the following example, a container with CUDA features is built, transferred and tested on the ULHPC platform. This example will pull a CUDA container from DockerHub and set up the CUDA examples. For this purpose, a Singularity definition file, i.e., cuda_samples.def, needs to be created with the following content:

    +
    Bootstrap: docker
    +From: nvidia/cuda:10.1-devel
    +
    +%post
    +    apt-get update
    +    apt-get install -y git
    +    git clone https://github.com/NVIDIA/cuda-samples.git /usr/local/cuda_samples
    +    cd /usr/local/cuda_samples
    +    git fetch origin --tags
    +    git checkout 10.1.1
    +    make
    +
    +%runscript
    +    /usr/local/cuda_samples/Samples/deviceQuery/deviceQuery
    +
    + +

    On a local machine with Singularity installed, we can build the container image, i.e., cuda_samples.sif, from the definition file using the following singularity command:

    +
    sudo singularity build cuda_samples.sif cuda_samples.def
    +
    + +
    +

    Warning

    +

    You should have root privileges on this machine. Without this condition, you will not be able to build the container image from the definition file.

    +
    +

    Once the container is built and transferred to your dedicated storage on the ULHPC platform, the container can be executed with the following command:

    +
    # Inside an interactive job on a gpu-enabled node
    +singularity run --nv cuda_samples.sif
    +
    + +
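    The comment in the snippet above assumes an interactive job on a GPU-enabled node is already running. A minimal, hedged sketch of how such an allocation might be requested with Slurm is given below; the partition name (gpu) and the GPU request syntax are assumptions to check against the ULHPC Slurm documentation.

    # Hypothetical interactive GPU allocation -- adapt partition/GRES names to the ULHPC setup
    salloc -p gpu -G 1 --time=00:30:00   # reserve 1 GPU for 30 minutes
    srun --pty bash -i                   # open an interactive shell on the allocated node
    module load tools/Singularity
    singularity run --nv cuda_samples.sif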
    +

    Warning

    +

    In order to run a CUDA-enabled container, the --nv option has to be passed to singularity run. With this option, singularity sets up the container environment to use the NVIDIA GPU and the basic CUDA libraries.

    +
    +
    +

    Output

    +

    CUDA Device Query (Runtime API) version (CUDART static linking)

    +

    Detected 1 CUDA Capable device(s)

    +

    Device 0: "Tesla V100-SXM2-16GB"
    +  CUDA Driver Version / Runtime Version 10.2 / 10.1
    +  CUDA Capability Major/Minor version number: 7.0
    +  Total amount of global memory: 16160 MBytes (16945512448 bytes)
    +  (80) Multiprocessors, ( 64) CUDA Cores/MP: 5120 CUDA Cores
    +  GPU Max Clock rate: 1530 MHz (1.53 GHz)
    +  Memory Clock rate: 877 Mhz
    +  Memory Bus Width: 4096-bit
    +  L2 Cache Size: 6291456 bytes
    +  Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
    +  Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
    +  Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
    +  Total amount of constant memory: 65536 bytes
    +  Total amount of shared memory per block: 49152 bytes
    +  Total number of registers available per block: 65536
    +  Warp size: 32
    +  Maximum number of threads per multiprocessor: 2048
    +  Maximum number of threads per block: 1024
    +  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
    +  Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
    +  Maximum memory pitch: 2147483647 bytes
    +  Texture alignment: 512 bytes
    +  Concurrent copy and kernel execution: Yes with 5 copy engine(s)
    +  Run time limit on kernels: No
    +  Integrated GPU sharing Host Memory: No
    +  Support host page-locked memory mapping: Yes
    +  Alignment requirement for Surfaces: Yes
    +  Device has ECC support: Enabled
    +  Device supports Unified Addressing (UVA): Yes
    +  Device supports Compute Preemption: Yes
    +  Supports Cooperative Kernel Launch: Yes
    +  Supports MultiDevice Co-op Kernel Launch: Yes
    +  Device PCI Domain ID / Bus ID / location ID: 0 / 30 / 0
    +  Compute Mode:
    +     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

    +

    deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.1, NumDevs = 1
    +Result = PASS
    +

    +
    +

    MPI and Singularity containers

    +

    This section relies on the excellent documentation from CSCS. The following Singularity definition file mpi_osu.def can be used to build a container with the OSU micro-benchmarks using MPI:

    +

    bootstrap: docker
    +from: debian:jessie
    +
    +%post
    +    # Install software
    +    apt-get update
    +    apt-get install -y file g++ gcc gfortran make gdb strace realpath wget curl --no-install-recommends
    +
    +    # Install mpich
    +    curl -kO https://www.mpich.org/static/downloads/3.1.4/mpich-3.1.4.tar.gz
    +    tar -zxvf mpich-3.1.4.tar.gz
    +    cd mpich-3.1.4
    +    ./configure --disable-fortran --enable-fast=all,O3 --prefix=/usr
    +    make -j$(nproc)
    +    make install
    +    ldconfig
    +
    +    # Build osu benchmarks
    +    wget -q http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.3.2.tar.gz
    +    tar xf osu-micro-benchmarks-5.3.2.tar.gz
    +    cd osu-micro-benchmarks-5.3.2
    +    ./configure --prefix=/usr/local CC=$(which mpicc) CFLAGS=-O3
    +    make
    +    make install
    +    cd ..
    +    rm -rf osu-micro-benchmarks-5.3.2
    +    rm osu-micro-benchmarks-5.3.2.tar.gz
    +
    +%runscript
    +    /usr/local/libexec/osu-micro-benchmarks/mpi/pt2pt/osu_bw
    +
    +
    sudo singularity build mpi_osu.sif mpi_osu.def
    +
    +Once the container image is ready, you can use it, for example, inside the following Slurm launcher to start a best-effort job:

    +

    #!/bin/bash -l
    +#SBATCH -J ParallelJob
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node=1
    +#SBATCH --time=05:00
    +#SBATCH -p batch
    +#SBATCH --qos=qos-besteffort
    +
    +module load tools/Singularity
    +srun -n $SLURM_NTASKS singularity run mpi_osu.sif
    +
    +The content of the output file:

    +
    +

    Output

    +

    +# OSU MPI Bandwidth Test v5.3.2
    +# Size      Bandwidth (MB/s)
    +1                       0.35
    +2                       0.78
    +4                       1.70
    +8                       3.66
    +16                      7.68
    +32                     16.38
    +64                     32.86
    +128                    66.61
    +256                    80.12
    +512                    97.68
    +1024                  151.57
    +2048                  274.60
    +4096                  408.71
    +8192                  456.51
    +16384                 565.84
    +32768                 582.62
    +65536                 587.17
    +131072                630.64
    +262144                656.45
    +524288                682.37
    +1048576               712.19
    +2097152               714.55
    +

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/contributing/images/gitflow.png b/contributing/images/gitflow.png new file mode 100644 index 00000000..5fb378a0 Binary files /dev/null and b/contributing/images/gitflow.png differ diff --git a/contributing/images/github_flow.png b/contributing/images/github_flow.png new file mode 100644 index 00000000..0aebf971 Binary files /dev/null and b/contributing/images/github_flow.png differ diff --git a/contributing/index.html b/contributing/index.html new file mode 100644 index 00000000..bbd3a3b8 --- /dev/null +++ b/contributing/index.html @@ -0,0 +1,3041 @@ + + + + + + + + + + + + + + + + + + + + + + + + Overview - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Overview

    + +

    You are more than welcome to contribute to the development of this project. +You are however expected to follow the model of Github Flow for your contributions.

    +
    +

    What is a [good] Git Workflow?

    +

    A Git Workflow is a recipe or recommendation for how to use Git to accomplish work in a consistent and productive manner. Indeed, Git offers a lot of flexibility in how changes can be managed, yet there is no standardized process on how to interact with Git. The following questions are expected to be addressed by a successful workflow:

    +
      +
    1. Q1: Does this workflow scale with team size?
    2. +
    3. Q2: Is it possible to prevent/limit mistakes and errors ?
    4. +
    5. Q3: Is it easy to undo mistakes and errors with this workflow?
    6. +
    7. Q4: Does this workflow permit easy testing of new features/functionalities before a production release?
    8. +
    9. Q5: Does this workflow allow for Continuous Integration (even if not planned at the beginning)?
    10. +
    11. Q6: Does this workflow permit full control over the production releases?
    12. +
    13. Q7: Does this workflow impose any new unnecessary cognitive overhead on the team?
    14. +
    15. Q8: Is the workflow easy to use, set up, and maintain?
    16. +
    +

    In particular, the default centralized "workflow" (where everybody just commits to the single master branch), while being the only one satisfying Q7, proved to be error-prone and can break production systems relying on the underlying repository. For this reason, other more or less complex workflows have emerged -- all feature-branch-based -- that support teams and projects where production deployments are made regularly:

    +
      +
    • +

    Git-flow, the historically successful workflow featuring two main branches with an infinite lifetime (production and {master | devel})

      +
        +
      • all operations are facilitated by the git-flow CLI extension
      • +
    • maintaining both branches can be bothersome
      • +
    • the only one that really permits control over production releases
      • +
      +
    • +
    • +

      Github Flow, a lightweight version with a single branch (master)

      +
        +
      • pull-request based - requires interaction with Gitlab/Github web interface (git request-pull might help)
      • +
      +
    • +
    +

    The ULHPC team enforces a hybrid workflow detailed below; HOWEVER, you can safely contribute to this documentation by simply following the Github Flow explained next.

    +
    +

    Default Git workflow for contributions

    +

    We expect contributors to follow the Github Flow concept.

    +

    +

    This flow is ideal for organizations that need simplicity and that roll out frequently. If you are already using Git, you are probably using a version of the Github flow. Every unit of work, whether it be a bugfix or a feature, is done through a branch that is created from master. After the work has been completed in the branch, it is reviewed and tested before being merged into master and pushed out to production.

    +

    In details:

    +
      +
    • +

      As preliminaries (to be done only once), Fork the ULHPC/ulhpc-docs repository under <YOUR-USERNAME>/ulhpc-docs

      +
        +
      • A fork is a copy of a repository placed under your Github namespace. Forking a repository allows you to freely experiment with changes without affecting the original project.
      • +
    • In the top-right corner of the ULHPC/ulhpc-docs repository, click the "Fork" button.
      • +
      • Under Settings, change the repository name from docs to ulhpc-docs
      • +
      • Once done, you can clone your copy (forked) repository: select the SSH url under the "Code" button: +
        # (Recommended) Place your repo in a clean (and self-explicit) directory layout
        +# /!\ ADAPT 'YOUR-USERNAME' with your Github username
        +$> mkdir -p ~/git/github.com/YOUR-USERNAME
        +$> cd ~/git/github.com/YOUR-USERNAME
        +# /!\ ADAPT 'YOUR-USERNAME' with your Github username
        +git clone git@github.com:YOUR-USERNAME/ulhpc-docs.git
        +$> cd ulhpc-docs
        +$> make setup
        +
      • +
      • +

        Configure your working forked copy to sync with the original ULHPC/ulhpc-docs repository through a dedicated upstream remote +

        # Check current remote: only 'origin' should be listed
        +$> git remote -v
        +origin  git@github.com:YOUR-USERNAME/ulhpc-docs.git (fetch)
        +origin  git@github.com:YOUR-USERNAME/ulhpc-docs.git (push)
        +# Add upstream
        +$> make setup-upstream
        +# OR, manually:
        +$> git remote add upstream https://github.com/ULHPC/ulhpc-docs.git
        +# Check the new remote
        +$> git remote -v
        +origin  git@github.com:YOUR-USERNAME/ulhpc-docs.git (fetch)
        +origin  git@github.com:YOUR-USERNAME/ulhpc-docs.git (push)
        +upstream https://github.com/ULHPC/ulhpc-docs.git (fetch)
        +upstream https://github.com/ULHPC/ulhpc-docs.git (push)
        +

        +
      • +
      • +

    At this level, you probably want to follow the setup instructions to configure your ulhpc-docs python virtualenv and deploy the documentation locally with make doc

        + +
      • +
      +
    • +
    +

    Then, to bring your contributions:

    +
      +
    1. Pull the latest changes from the upstream remote using: +
      make sync-upstream
      +
    2. +
    3. Create your own feature branch with appropriate name <name>: +
      # IF you have installed git-flow: {brew | apt | yum |...} install gitflow git-flow
      +# /!\ ADAPT <name> with appropriate name: this will create and checkout to branch feature/<name>
      +git-flow feature start <name>
      +# OR
      +git checkout -b feature/<name>
      +
    4. +
    5. Commit your changes once satisfied with them +
      git add [...]
      +git commit -s -m 'Added some feature'
      +
    6. +
    7. Push to the feature branch and publish it +
      # IF you have installed git-flow
      +# /!\ ADAPT <name> accordingly
      +git-flow feature publish <name>
      +# OR
      +git push -u origin feature/<name>
      +
    8. +
    9. Create a new Pull Request to submit your changes to the ULHPC team.
    10. +
    11. +

      Commit first! +

      # check what would be put in the pull request
      +git request-pull master ./
      +# Open Pull Request from web interface
      +# Github: Open 'new pull request'
      +#      Base = feature/<name>,   compare = master
      +

      +
    12. +
    13. +

    The pull request will be reviewed, possibly with comments/suggestions for modifications -- see the official doc

      +
    14. +
    15. you may need to apply new commits to resolve the comments -- remember to mention the pull request with the prefix '[PR#<ID>]' (Ex: [PR#5]) in your commit message +
      cd /path/to/ulhpc-docs
      +git checkout feature/<name>
      +git pull
      +# [...]
      +git add [...]
      +# /!\ ADAPT Pull Request ID accordingly
      +git commit -s -m '[PR#<ID>] ...'
      +
    16. +
    +

    After your pull request has been reviewed and merged, you can safely delete the feature branch.

    +
    # Adapt <name> accordingly
    +git checkout feature/<name> # Eventually, if needed
    +make sync-upstream
    +git-flow feature finish <name> # feature branch 'feature/<name>' will be merged into 'devel'
    +#                              # feature branch 'feature/<name>' will be locally deleted
    +#                              # you will checkout back to the 'master' branch
    +git push origin --delete feature/<name>   # /!\ WARNING: Ensure you delete the CORRECT remote branch
    +git push  # sync master branch
    +
    + +

    ULHPC Git Workflow

    +

    Throughout all its projects, the ULHPC team has enforced a stricter workflow for Git repositories, summarized in the figure below:

    +

    +

    The main concepts inherited from both advanced workflows (Git-flow and Github Flow) are listed below:

    +
      +
    • The central repository holds two main branches with an infinite lifetime:
        +
      • production: the production-ready branch, used for the deployed version of the documentation.
      • +
    • devel | master | main (master in this case): the main (master) branch where the latest developments take place (the name depends on the repository purpose). This is the default branch you get when you clone the repository.
      • +
      +
    • +
    • You should always setup your local copy of the repository with make setup
        +
      • ensure also you have installed the gitflow extension
      • +
    • ensure you have properly made the initial configuration of Git -- see also the sample .gitconfig (a minimal example is sketched after this list)
      • +
      +
    • +
    +
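    As announced in the list above, here is a minimal, purely illustrative ~/.gitconfig; it is not the official ULHPC sample, and all values are placeholders to adapt.

    # Minimal illustrative ~/.gitconfig -- values are placeholders, not the official sample
    [user]
        name  = Firstname Lastname
        email = firstname.lastname@uni.lu
    [core]
        editor = vim
    [pull]
        rebase = true
    [alias]
        lg = log --oneline --graph --decorate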

    In complement to the Github Flow described above, several additional operations are facilitated by the root Makefile:

    +
      +
    • Initial setup of the repository with make setup
    • +
    • Release of a new version of this repository with make start_bump_{patch,minor,major} and make release
        +
    • this action is managed by the ULHPC team according to the semantic versioning scheme implemented within this project.
      • +
      +
    • +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/contributing/versioning/index.html b/contributing/versioning/index.html new file mode 100644 index 00000000..a913882f --- /dev/null +++ b/contributing/versioning/index.html @@ -0,0 +1,2821 @@ + + + + + + + + + + + + + + + + + + + + + + + + Semantic Versioning - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Semantic Versioning

    + +

    The operation of releasing a new version of this repository is automated by a set of tasks within the root Makefile. In this context, a version number has the following format:

    +
      <major>.<minor>.<patch>[-b<build>]
    +
    + + +

    where:

    +
      +
    • < major > corresponds to the major version number
    • +
    • < minor > corresponds to the minor version number
    • +
    • < patch > corresponds to the patching version number
    • +
    • (optionally) < build > states the build number, i.e. the total number of commits within the devel branch.
    • +
    +

    Example: `1.0.0-b28`.

    +
    +

    VERSION file

    +

    The current version number is stored in the root file VERSION. +/!\ IMPORTANT: NEVER MAKE ANY MANUAL CHANGES TO THIS FILE

    +
    +
    +

    ULHPC/docs repository release

    +

    Only the ULHPC team is allowed to perform the releasing operations (and push to the production branch). +By default, the main documentation website is built against the production branch.

    +
    +

    For more information on the version, run:

    +
     $> make versioninfo
    +
    + + +
    ULHPC Team procedure for repository release

    If a new version number should be bumped, the following command is issued: +

    make start_bump_{major,minor,patch}
    +
    +This will start the release process for you using git-flow within the release/<new-version> branch - see also Git(hub) flow. +Once the last changes are committed, the release becomes effective by running: +
    make release
    +
    +It will finish the release using git-flow, create the appropriate tag in the production branch, and merge everything back as appropriate into the master branch.

    +
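    To make the effect of these targets concrete, here is a hedged illustration assuming the repository currently sits at version 1.2.3 and that the targets follow standard semantic-versioning semantics:

    make start_bump_patch   # prepares release 1.2.4 (bug fixes only)
    make start_bump_minor   # prepares release 1.3.0 (backward-compatible changes)
    make start_bump_major   # prepares release 2.0.0 (breaking changes)
    make release            # finalizes: tag on 'production', merge back into 'master'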
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/data-center/images/data-center-bt1.jpg b/data-center/images/data-center-bt1.jpg new file mode 100644 index 00000000..d401d4f2 Binary files /dev/null and b/data-center/images/data-center-bt1.jpg differ diff --git a/data-center/images/datacentre_airflow.jpg b/data-center/images/datacentre_airflow.jpg new file mode 100644 index 00000000..273e8562 Binary files /dev/null and b/data-center/images/datacentre_airflow.jpg differ diff --git a/data-center/images/inrow-cooling.jpg b/data-center/images/inrow-cooling.jpg new file mode 100644 index 00000000..a5699227 Binary files /dev/null and b/data-center/images/inrow-cooling.jpg differ diff --git a/data-center/index.html b/data-center/index.html new file mode 100644 index 00000000..b7506224 --- /dev/null +++ b/data-center/index.html @@ -0,0 +1,2991 @@ + + + + + + + + + + + + + + + + + + + + + + + + Centre de Calcul (CDC) - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    ULHPC Data Center - Centre de Calcul (CDC)

    +

    +

    The ULHPC facilities are hosted within the University's "Centre de Calcul" (CDC) data center located on the Belval Campus.

    +

    Power and Cooling Capacities

    +

    Established over two underground floors (CDC-S01 and CDC-S02) of ~1000 m2 each, the CDC features five server rooms per level (each of them offering ~100 m2 of IT room surface). While the first level (CDC-S01) hosts administrative IT and research equipment, the second floor (CDC-S02) primarily targets the hosting of HPC equipment (compute, storage and interconnect).

    +

    +

    A power generation station supplies the HPC floor with up to 3 MW of electrical power, and 3 MW of cold water at a 12-18°C regime used for traditional Airflow with In-Row cooling. A separate hot water circuit (between 30 and 40°C) allows the implementation of Direct Liquid Cooling (DLC) solutions, as used for the Aion supercomputer in two dedicated server rooms.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    LocationCoolingUsageMax Capa.
    CDC S-02-001AirflowFuture extension280 kW
    CDC S-02-002AirflowFuture extension280 kW
    CDC S-02-003DLCFuture extension - High Density/Energy efficient HPC1050 kW
    CDC S-02-004DLCHigh Density/Energy efficient HPC: aion1050 kW
    CDC S-02-005AirflowStorage / Traditional HPC: iris and common equipment300 kW
    +

    Data-Center Cooling technologies

    +

    Airflow with In-Row cooling

    +

    +Most server rooms are designed for traditional airflow-based cooling and implement hot or cold aisle containment, as well as in-row cooling systems that work within a row of standard server racks, engineered to take up the smallest footprint and offer high-density cooling. Ducting and baffles ensure that the cooling air gets where it needs to go.

    +

    Iris compute, storage and interconnect equipment are hosted in such a configuration

    +

    +

    [Direct] Liquid Cooling

    +

    Traditional solutions implemented in most data centers use air as a medium to remove the heat from the servers and computing equipment, and are not well suited to cutting-edge high-density HPC environments due to the limited thermal capacity of air. The thermal conductivity of liquids is higher than that of air, so a liquid can absorb (through conduction) more heat than air. Replacing air with a liquid cooling medium therefore drastically improves the energy efficiency as well as the density of the implemented solution, especially with Direct Liquid Cooling (DLC), where the heat from the IT components is directly transferred to a liquid cooling medium through liquid-cooled plates.

    +

    The Aion supercomputer, based on the fan-less Atos BullSequana XH2000 DLC cell design, relies on this water-cooled configuration.

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/data/backups/index.html b/data/backups/index.html new file mode 100644 index 00000000..88d35027 --- /dev/null +++ b/data/backups/index.html @@ -0,0 +1,3014 @@ + + + + + + + + + + + + + + + + + + + + + + + + Backups - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Backups

    +
    +

    Danger

    +

    All ULHPC users should back up important files on +a regular basis. Ultimately, it is your responsibility to +protect yourself from data loss.

    +
    +

    The backups are accessible only by HPC staff, and only for disaster recovery purposes.

    +

    More details can be requested via a support request.

    +

    Directories on the ULHPC clusters infrastructure

    +

    For computation purposes, ULHPC users can use multiple storage areas: home, scratch and projects. Note however that the HPC platform does not have the infrastructure to back up all of them; see the details below.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    DirectoryPathBackup locationFrequencyRetention
    home directories$HOMEnot backed up
    scratch$SCRATCHnot backed up
    projects$PROJECTWORKCDC, BelvalWeeklyone backup per week of the backup directory ONLY ($PROJECT/backup/)
    +

    Directories on the SIU Isilon infrastructure

    +

    Projects stored on the Isilon filesystem are snapshotted weekly; the snapshots are kept for 10 days.

    +
    +

    Danger

    +

    Snapshots are not a real backup. They do not protect you against a system failure; they only allow you to recover some files in case of accidental deletion.

    +
    +

    Each project directory, in /mnt/isilon/projects/ contains a hidden sub-directory .snapshot:

    +
      +
    • .snapshot is invisible to ls, ls -a, find and similar + commands
    • +
    • can be browsed normally after cd .snapshot
    • +
    • files cannot be created, deleted or edited in snapshots
    • +
    • files can only be copied out of a snapshot (see the sketch below)
    • +
    +
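    Building on the list above, here is a minimal sketch of recovering a file from a snapshot; the project name, snapshot name and file path are placeholders.

    # Hypothetical recovery of an accidentally deleted file -- adapt names and paths
    cd /mnt/isilon/projects/<name>/.snapshot
    ls                                                          # list the available snapshots
    cp <snapshot_name>/path/to/lost_file ../path/to/lost_file   # copy it back out of the snapshot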

    Services

    + + + + + + + + + + + + + + + + + +
    NameBackup locationFrequencyRetention
    hpc.uni.lu (pad, privatebin)CDC, BelvalDailylast 7 daily backups, one per month for the last 6 months
    +

    Restore

    +

    If you require a restoration of lost data that cannot be accomplished via the snapshots capability, please create a new request on the Service Now portal, with the pathnames and timestamps of the missing data.

    +

    Such restore requests may take a few days to complete.

    +

    Backup Tools

    +

    In practice, the ULHPC backup infrastructure is fully puppetized and makes use of several tools facilitating the operations:

    +
      +
    • backupninja, which allows you to coordinate system backup by dropping a few simple configuration files into /etc/backup.d/
    • +
    • a forked version of bontmia, which stands for "Backup Over Network To Multiple Incremental Archives"
    • +
    • BorgBackup, a deduplicating backup program supporting compression and authenticated encryption.
    • +
    • several internal scripts to pilot LVM snapshots/backup/restore operations
    • +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/data/encryption/index.html b/data/encryption/index.html new file mode 100644 index 00000000..34921e33 --- /dev/null +++ b/data/encryption/index.html @@ -0,0 +1,3214 @@ + + + + + + + + + + + + + + + + + + + + + + + + Sensitive Data Protection - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Sensitive Data Protection

    +

    The advent of the EU General Data Protection Regulation (GDPR) has highlighted the need to protect sensitive information from leakage.

    +

    GPG

    +

    A basic approach relies on GPG to encrypt single files -- see this tutorial for more details

    +
    # File encryption
    +$ gpg --encrypt [-r <recipient>] <file>     # => produces <file>.gpg
    +$ rm -f <file>    # /!\ WARNING: encryption DOES NOT delete the input (clear-text) file
    +$ gpg --armor --detach-sign <file>          # Generate signature file <file>.asc
    +
    +# Decryption
    +$ gpg --verify <file>.asc           # (eventually but STRONGLY encouraged) verify signature file
    +$ gpg --decrypt <file>.gpg          # Decrypt PGP encrypted file
    +
    + +
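    The commands above assume a GPG key pair is already available and, for the -r option, that the recipient's public key is in your keyring. If that is not the case, a typical preliminary setup could look as follows (the key file name is a placeholder):

    gpg --full-generate-key                   # interactively generate your own key pair
    gpg --import collaborator_pubkey.asc      # import a collaborator's public key
    gpg --list-keys                           # check the content of your keyring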

    One drawback is that files need to be completely decrypted for processing

    +

    Tutorial: Using GnuPG aka Gnu Privacy Guard aka GPG

    +

    File Encryption Frameworks (EncFS, GoCryptFS...)

    +

    In contrast to disk-encryption software that operates on whole disks (TrueCrypt, dm-crypt, etc.), file encryption operates on individual files that can be backed up or synchronised easily, especially within a Git repository.

    +
      +
    • Comparison matrix
    • +
    • gocryptfs, aspiring successor of EncFS written in Go
    • +
    • EncFS, mature with known security issues
    • +
    • eCryptFS, integrated into the Linux kernel
    • +
    • Cryptomator, strong cross-platform support through Java and WebDAV
    • +
    • securefs, a cross-platform project implemented in C++.
    • +
    • CryFS, result of a master thesis at the KIT University that uses chunked storage to obfuscate file sizes.
    • +
    +

    Assuming you are working from /path/to/my/project, your workflow (described below for EncFS, but it can be adapted to all the other tools) operates on encrypted vaults and would be as follows:

    +
      +
    • (optionally) if operating within a working copy of a Git repository, you should ignore the mount directory (ex: vault/*) in the root .gitignore of the repository
        +
      • this ensures neither you nor a collaborator will commit any unencrypted version of a file by mistake
      • +
    • you commit only the EncFS / GocryptFS / eCryptFS / Cryptomator / securefs / CryFS raw directory (ex: .crypt/) in your repository. Thus only the encrypted form of your files is committed
      • +
      +
    • +
    • You create the EncFS / GocryptFS / eCryptFS / Cryptomator / securefs / CryFS encrypted vault
    • +
    • You prepare macros/scripts/Makefile/Rakefile tasks to lock/unlock the vault on demand
    • +
    +

    Here are, for instance, a few examples of these operations to create an encrypted vault:

    +
    +
    $ cd /path/to/my/project
    +$ rawdir=.crypt      # /!\ ADAPT accordingly
    +$ mountdir=vault     # /!\ ADAPT accordingly
    +#
    +# (eventually) Ignore the mount dir
    +$ echo $mountdir >> .gitignore
    +### EncFS: Creation of an EncFS vault (only once)
    +$ encfs --standard $rawdir $mountdir
    +
    + +
    +
    +

    you SHOULD be on a computing node to use GoCryptFS.

    +
    $ cd /path/to/my/project
    +$ rawdir=.crypt      # /!\ ADAPT accordingly
    +$ mountdir=vault     # /!\ ADAPT accordingly
    +#
    +# (eventually) Ignore the mount dir
    +$ echo $mountdir >> .gitignore
    +### GoCryptFS: load the module - you SHOULD be on a computing node
    +$ module load tools/gocryptfs
    +# Creation of a GoCryptFS vault (only once)
    +$> gocryptfs -init $rawdir
    +
    + +
    +
    +

    Then you can mount/unmount the vault as follows:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ToolOSOpening/Unlocking the vaultClosing/locking the vault
    EncFSLinuxencfs -o nonempty --idle=60 $rawdir $mountdirfusermount -u $mountdir
    EncFSMac OSencfs --idle=60 $rawdir $mountdirumount $mountdir
    GocryptFSgocryptfs $rawdir $mountdiras above
    +

    The fact that GoCryptFS is available as a module brings the advantage that it can be mounted in a view folder (vault/) where you can read and write the unencrypted files, and which is automatically unmounted upon job termination.

    +

    File Encryption using SSH [RSA] Key Pairs

    + +

    If you encrypt/decrypt files or messages on more than a one-off occasion, you should really use GnuPG as it is a much better suited tool for this kind of operation. But if you already have someone's public SSH key, it can be convenient to use it, and it is safe.

    +
    +

    Warning

    +

    The below instructions are NOT compliant with the new OpenSSH format which is used for storing encrypted (or unencrypted) RSA, EcDSA and Ed25519 keys (among others) when you use the -o option of ssh-keygen. You can recognize these keys by the fact that the private SSH key ~/.ssh/id_rsa starts with -----BEGIN OPENSSH PRIVATE KEY-----

    +
    +

    Encrypt a file using a public SSH key

    +

    (Optionally) SSH RSA public key conversion to PEM PKCS8

    +

    OpenSSL encryption/decryption operations performed using the RSA algorithm rely on keys following the PEM format 1 (ideally in the PKCS#8 format). It is possible to convert OpenSSH public keys (private ones are already compliant) to the PEM PKCS8 format (a more secure format). For that, one can use either the ssh-keygen or the openssl command, the first one being recommended.

    +

    # Convert the public key of your collaborator to the PEM PKCS8 format (a more secure format)
    +$ ssh-keygen -f id_dst_rsa.pub -e -m pkcs8 > id_dst_rsa.pkcs8.pub
    +# OR use OpenSSL for that...
    +$ openssl rsa -in id_dst_rsa -pubout -outform PKCS8 > id_dst_rsa.pkcs8.pub
    +
    +Note that you don't actually need to save the PKCS#8 version of your collaborator's public key file -- the command below will make this conversion on demand.

    +

    Generate a 256 bit (32 byte) random symmetric key

    +

    There is a limit to the maximum length of a message, i.e. the size of a file, that can be encrypted using asymmetric RSA public key encryption keys (which is what SSH keys are). For this reason, you should rather rely on a 256 bit key to use for symmetric AES encryption and then encrypt/decrypt that symmetric AES key with the asymmetric RSA keys. This is how encrypted connections usually work, by the way.

    +

    Generate the unique symmetric key key.bin of 32 bytes (i.e. 256 bit) as follows:

    +
    openssl rand -base64 32 -out key.bin
    +
    + +

    You should only use this key once. If you send something else to the recipient at another time, you should regenerate another key.

    +

    Encrypt the (potentially big) file with the symmetric key

    +
    openssl enc -aes-256-cbc -salt -in bigdata.dat -out bigdata.dat.enc  -pass file:./key.bin
    +
    + +
    Indicative performance of OpenSSL Encryption time

    You can quickly generate random files of 1 or 10 GiB size as follows: +

    # Random generation of a 1GiB file
    +$ dd if=/dev/urandom of=bigfile_1GiB.dat  bs=64M count=16  iflag=fullblock
    +# Random generation of a 10GiB file
    +$ dd if=/dev/urandom of=bigfile_10GiB.dat bs=64M count=160 iflag=fullblock
    +
    +Indicative encryption times for the random files generated above, measured on a local laptop (Mac OS X, local file system over SSD), are given in the below table, using +
    openssl enc -aes-256-cbc -salt -in bigfile_<N>GiB.dat -out bigfile_<N>GiB.dat.enc  -pass file:./key.bin
    +

    + + + + + + + + + + + + + + + + + + + + +
    FilesizeEncryption time
    bigfile_1GiB.dat1 GiB0m5.395s
    bigfile_10GiB.dat10 GiB2m50.214s
    +
    +

    Encrypt the symmetric key, using your collaborator's public SSH key in PKCS8 format:

    +
    $ openssl rsautl -encrypt -pubin -inkey <(ssh-keygen -e -m PKCS8 -f id_dst_rsa.pub) -in key.bin -out key.bin.enc
    +# OR, if you have a copy of the PKCS#8 version of his public key
    +$ openssl rsautl -encrypt -pubin -inkey  id_dst_rsa.pkcs8.pub -in key.bin -out key.bin.enc
    +
    + +

    Delete the unencrypted symmetric key as you don't need it any more (and you should not use it anymore)

    +
      $> rm key.bin
    +
    + + +

    Now you can transfer the *.enc files, i.e. send the (potentially big) encrypted file <file>.enc and the encrypted symmetric key (i.e. key.bin.enc) to the recipient, i.e. your collaborator. Note that you are encouraged to send the encrypted file and the encrypted key separately. Although it's not absolutely necessary, it's good practice to separate the two. If you're allowed to, transfer them by SSH to an agreed remote server. It is even safe to upload the files to a public file sharing service and tell the recipient to download them from there.

    +

    Decrypt a file encrypted with a public SSH key

    +

    First decrypt the symmetric key using the SSH private counterpart:

    +
    # Decrypt the key -- /!\ ADAPT the path to the private SSH key
    +$ openssl rsautl -decrypt -inkey ~/.ssh/id_rsa -in key.bin.enc -out key.bin
    +Enter pass phrase for ~/.ssh/id_rsa:
    +
    + +

    Now the (potentially big) file can be decrypted, using the symmetric key:

    +
    openssl enc -d -aes-256-cbc -in bigdata.dat.enc -out bigdata.dat -pass file:./key.bin
    +
    + +

    Misc Q&D for small files

    +

    For a 'quick and dirty' encryption/decryption of small files:

    +
    # Encrypt
    +$  openssl rsautl -encrypt -inkey <(ssh-keygen -e -m PKCS8 -f ~/.ssh/id_rsa.pub) -pubin -in <cleartext_file>.dat -out <encrypted_file>.dat.enc
    +# Decrypt
    +$ openssl rsautl -decrypt -inkey ~/.ssh/id_rsa -in <encrypted_file>.dat.enc -out <cleartext_file>.dat
    +
    + +

    Data Encryption in Git Repository with git-crypt

    +

    Protecting sensitive data is of course even more important in the context of Git repositories, whether public or private, since having a working copy of the repository gives access to the full history of commits, in particular those made by mistake (git commit -a) that used to include sensitive files. That's where git-crypt comes to help. It is an open source, command-line utility that empowers developers to protect specific files within a git repository.

    +
    +

    git-crypt enables transparent encryption and decryption of files in a git repository. +Files which you choose to protect are encrypted when committed, and decrypted when checked +out. git-crypt lets you freely share a repository containing a mix of public and private +content. git-crypt gracefully degrades, so developers without the secret key can still +clone and commit to a repository with encrypted files. This lets you store your secret +material (such as keys or passwords) in the same repository as your code, without +requiring you to lock down your entire repository.

    +
    +

    The biggest advantage of git-crypt is that private data and public data can live in the same location.

    +

    Using Git-crypt to Protect Sensitive Data

    +
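    For reference, here is a minimal sketch of a typical git-crypt setup; the file pattern and the GPG user ID are placeholders, and the tutorial linked above as well as the official git-crypt documentation remain the authoritative references.

    cd /path/to/my/repo
    git-crypt init                                  # generate the repository key
    echo 'secrets/** filter=git-crypt diff=git-crypt' >> .gitattributes
    git add .gitattributes && git commit -m 'Enable git-crypt on secrets/'
    git-crypt add-gpg-user collaborator@uni.lu      # grant access to a collaborator's GPG key
    git-crypt status                                # check which files are/will be encrypted
    # On a fresh clone, decrypt the protected files:
    git-crypt unlock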

    PetaSuite Protect

    +

    PetaSuite is a compression suite for Next-Generation-Sequencing (NGS) data. +It consists of a command-line tool and a user-mode library. The command line tool performs compression and decompression operations on files. The user-mode library allows other tools and pipelines to transparently access the NGS data in their original file formats.

    +

    PetaSuite is used within LCSB and provides the following features:

    +
      +
    • Encrypt and compress genomic data
    • +
    • Encryption keys and access managed centrally
    • +
    • Decryption and decompression on-the-fly using a library that intercepts all FS access
    • +
    +

    This is commercial software -- contact lcsb.software@uni.lu if you would like to use it.

    +
    +
    +
      +
    1. +

    Defined in RFCs 1421 through 1424, PEM is a container format for public/private keys or certificates used preferentially by open-source software such as OpenSSL. The name comes from Privacy Enhanced Mail (PEM), a failed method for secure email; the container format it used lives on and is a base64 translation of the X.509 ASN.1 keys. 

      +
    2. +
    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/data/gdpr/index.html b/data/gdpr/index.html new file mode 100644 index 00000000..a638e6fd --- /dev/null +++ b/data/gdpr/index.html @@ -0,0 +1,2904 @@ + + + + + + + + + + + + + + + + + + + + + + + + GDPR Compliance - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    GDPR Compliance

    + +

    UL HPC Acceptable Use Policy (AUP) (pdf)

    +
    +

    Warning

    +

    Personal data is/may be visible, accessible or handled:

    +
      +
    • directly on the HPC clusters
    • +
    • through Resource and Job Management System (RJMS) tools (Slurm) and associated monitoring interfaces
    • +
    • through service portals (like OpenOnDemand)
    • +
    • on code management portals such GitLab, GitHub
    • +
    • on secondary storage systems used within the University such as Atlas, DropIT, etc.
    • +
    +
    +

    Data Use

    +

    Use of UL HPC data storage resources (file systems, data storage tiers, backup, etc.) should be used only for work directly related to the projects for which the resources were requested and granted, and primarily to advance University’s missions of education and research. Use of UL HPC data resources for personal activities is prohibited.

    +

    The UL HPC Team maintains up-to-date documentation on its data storage resources and their proper use, and provides regular training and support to users. +Users assume the responsibility for following the documentation, training sessions and best practice guides in order to understand the proper and considerate use of the UL HPC data storage resources.

    +

    Authors/generators/owners of information or data are responsible for its correct categorization as sensitive or non-sensitive. Owners of sensitive information are responsible for its secure handling, transmission, processing, storage, and disposal on the UL HPC systems. The UL HPC Team recommends use of encryption to protect the data from unauthorized access. Data Protection inquiries, especially as regards sensitive information processing can be directed to the Data Protection Officer.

    +

    Users are prohibited from intentionally accessing, modifying or deleting data they do not own or have not been granted explicit permission to access.

    +

    Users are responsible to ensure the appropriate level of protection, backup and integrity checks on their critical data and applications. It is their responsibility to set appropriate access controls for the data they bring, process and generate on UL HPC facilities.

    +

    In the event of system failure or malicious actions, UL HPC makes no guarantee against loss of data or that user or project data can be recovered nor that it cannot be accessed, changed, or deleted by another individual.

    +

    Personal information agreement

    +

    UL HPC retains the right to monitor all activities on its facilities.

    +

    Users acknowledge that data regarding their activity on UL HPC facilities will be collected. The data is collected (e.g. by the Slurm workload manager) for utilization accounting and reporting purposes, and for the purpose of understanding typical patterns of user behavior on the system in order to further improve the services provided by UL HPC. Another goal is to identify intrusions, misuse, security incidents or illegal actions in order to protect UL HPC users and facilities.

    +

    Users agree that this data may be processed to extract information contributing to the above stated purposes.

    +

    Users agree that their name, surname, email address, affiliation, work place and phone numbers are processed by the UL HPC Team in order to provide HPC and associated services.

    +

    Data Protection inquiries can be directed to the Data Protection Officer. Further information about Data Protection can be found at: +https://wwwen.uni.lu/university/data_protection

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/data/images/distem-scp-vs-rsync.png b/data/images/distem-scp-vs-rsync.png new file mode 100644 index 00000000..d721ddc5 Binary files /dev/null and b/data/images/distem-scp-vs-rsync.png differ diff --git a/data/images/filetransfer/MobaXterm_transfer.png b/data/images/filetransfer/MobaXterm_transfer.png new file mode 100644 index 00000000..e50e9619 Binary files /dev/null and b/data/images/filetransfer/MobaXterm_transfer.png differ diff --git a/data/layout/index.html b/data/layout/index.html new file mode 100644 index 00000000..8e6d0264 --- /dev/null +++ b/data/layout/index.html @@ -0,0 +1,3001 @@ + + + + + + + + + + + + + + + + + + + + + + + + Global Directory Structure - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Global Directory Structure

    + + +

    ULHPC File Systems Overview

    + + +

    Several file systems co-exist on the ULHPC facility and are configured for different purposes. Each server and computational resource has access to at least three different file systems with different levels of performance, permanence and available space, summarized below.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    DirectoryEnv.file systembackup
    /home/users/<login>$HOMEGPFS/Spectrumscaleno
    /work/projects/<name>-GPFS/Spectrumscaleyes (partial, backup subdirectory)
    /scratch/users/<login>$SCRATCHLustreno
    /mnt/isilon/projects/<name>-OneFSyes (live sync and snapshots)
    + + + + +

    Global Home directory $HOME

    +

    Home directories provide a convenient means for a user to have access to files such as dotfiles, source files, input files, configuration files regardless of the platform.

    +

    Refer to your home directory using the environment variable $HOME whenever possible. +The absolute path may change, but the value of $HOME will always be correct.

    + + + + +

    Global Project directory $PROJECTHOME=/work/projects/

    +

    Project directories are intended for sharing data within a group of researchers, under /work/projects/<name>

    +

    Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible.

    + + + + +

    Global Scratch directory $SCRATCH

    +

    The scratch area is a Lustre-based file system designed for high performance temporary storage of large files.

    +

    It is thus intended to support large I/O for jobs that are being actively computed on the ULHPC systems. +We recommend that you run your jobs, especially data intensive ones, from the ULHPC scratch file system.

    +

    Refer to your scratch directory using the environment variable $SCRATCH whenever possible (which expands to /scratch/users/$(whoami)). +The scratch file system is shared via the Infiniband network of the ULHPC facility and is available from all nodes while being tuned for high performance.

    + + +
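    As an illustration, here is a minimal, hypothetical Slurm launcher following this recommendation: it stages input data to a job-specific sub-directory of $SCRATCH, runs there, and copies the results back to permanent storage. The paths and the application command are placeholders.

    #!/bin/bash -l
    #SBATCH -J ScratchExample
    #SBATCH -p batch
    #SBATCH --time=01:00:00

    WORKDIR=$SCRATCH/job_${SLURM_JOB_ID}      # job-specific scratch directory
    mkdir -p $WORKDIR
    cp $HOME/inputs/input.dat $WORKDIR/       # stage input data (placeholder path)
    cd $WORKDIR
    ./my_application input.dat > results.out  # placeholder computation
    cp results.out $HOME/results/             # copy results back to permanent storage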

    Project Cold-Data and Archives

    + + +

    OneFS, a global low-performance Dell/EMC Isilon solution, is used to host project data and serves for backup and archival purposes. You will find them mounted under /mnt/isilon/projects.

    + + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/data/project/index.html b/data/project/index.html new file mode 100644 index 00000000..e3ea3afa --- /dev/null +++ b/data/project/index.html @@ -0,0 +1,2973 @@ + + + + + + + + + + + + + + + + + + + + + + + + Project Data Management - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Project Data Management

    + + + +

    Global Project directory $PROJECTHOME=/work/projects/

    +

    Project directories are intended for sharing data within a group of researchers, under /work/projects/<name>

    +

    Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible.

    + + +
    +

    Research Project Allocations, Accounting and Reporting

    +

    The Research Support and Accounting Departments of the University keep track of the list of research projects funded within the University. Starting in 2021, a new procedure has been put in place to provide detailed reporting of the HPC usage for such projects. As part of this process, the following actions are taken by the ULHPC team:

    +
      +
    1. a dedicated project account <name> (normally the acronym of the project) is created for accounting purpose at the Slurm level (L3 account - see Account Hierarchy);
    2. +
    3. a dedicated project directory with the same name (<name>) is created, allowing to share data within a group of project researchers, under $PROJECTHOME/<name>, i.e., /work/projects/<name>
    4. +
    +

    You are then entitled to submit jobs associated with the project using -A <name> such that the HPC usage is reported accurately, as illustrated below. The ULHPC team will provide the project PI (Principal Investigator) and the Research Support department with a regular report detailing the corresponding HPC usage. In all cases, job billing under the conditions defined in the Job Accounting and Billing section may apply.

    +
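    For instance (the project account name is a placeholder):

    # Charge a job to the project account <name> -- /!\ ADAPT <name> accordingly
    sbatch -A <name> -p batch launcher.sh
    # or, within the launcher script itself:
    #SBATCH -A <name>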
    +

    New project directory

    +

    You can request a new project directory under ServiceNow (HPC → Storage & projects → Request for a new project).

    +

    Quotas and Backup Policies

    +

    See quotas for detailed information about inode and space quotas, and file system purge policies. Your project backup directories are backed up weekly, according to the policy detailed in the ULHPC backup policies.

    + + +
    +

    Access rights to project directory: Quota for clusterusers group in project directories is 0 !!!

    +

    When a project <name> is created, a group of the same name (<name>) is also created, and researchers allowed to collaborate on the project are made members of this group, which grants them access to the project directory.

    +

    Be aware that your default group as a user is clusterusers, which has (on purpose) a quota in project directories set to 0. You thus need to ensure you always write data in your project directory using the <name> group (instead of your default one). This can be achieved by ensuring the setgid bit is set on all folders in the project directories: chmod g+s [...] (see the sketch below)

    +
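    A minimal sketch of enforcing the project group and the setgid bit on existing data; the project name and path are placeholders.

    # /!\ ADAPT <name> and the path accordingly
    chgrp -R <name> /work/projects/<name>/mydata                      # give the data the project group
    find /work/projects/<name>/mydata -type d -exec chmod g+s {} +    # set the setgid bit on directories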

    When using rsync to transfer files toward the project directory /work/projects/<name> as destination, be aware that rsync will not use the correct permissions when copying files into your project directory. As indicated in the Data transfer section, you also need to:

    +
      +
    • give new files the destination-default permissions with --no-p (--no-perms), and
    • +
    • use the default group <name> of the destination dir with --no-g (--no-group)
    • +
    • (eventually) instruct rsync to preserve whatever executable permissions existed on the source file and aren't masked at the destination using --chmod=ug=rwX
    • +
    +

    Your full rsync command becomes (adapt accordingly):

    +
      rsync -avz {--update | --delete} --no-p --no-g [--chmod=ug=rwX] <source> /work/projects/<name>/[...]
    +
    + + +
    +

    For the same reason detailed above, in case you are using a build command or more generally any command meant to write data in your project directory /work/projects/<name>, you want to use sg as follows:

    +
    # /!\ ADAPT <name> accordingly
    +sg <name> -c "<command> [...]"
    +
    + +

    This is particularly important if you are building dedicated software with +Easybuild for members of the project - you typically want to do it as follows:

    +
    # /!\ ADAPT <name> accordingly
    +sg <name> -c "eb [...] -r --rebuild -D"   # Dry-run - enforce using the '<name>' group
    +sg <name> -c "eb [...] -r --rebuild"      # Real run - enforce using the '<name>' group
    +
    + + + +

    Project directory modification

    +

    You can request changes for your project directory (quotas extension, add/remove a group member) under ServiceNow:

    +
      +
    • HPC → Storage & projects → Extend quota/Request information
    • +
    • HPC → User access & accounts → Add/Remove user within project
    • +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/data/project_acl/index.html b/data/project_acl/index.html new file mode 100644 index 00000000..ec233a53 --- /dev/null +++ b/data/project_acl/index.html @@ -0,0 +1,2795 @@ + + + + + + + + + + + + + + + + + + + + + + + + Project acl - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Project acl

    + +
    +

    Global Project quotas and backup policies

    +

    See quotas for detailed information about inode and space quotas, and file system purge policies. Your project backup directories are backed up weekly, according to the policy detailed in the ULHPC backup policies.

    +
    + + +
    +

    Access rights to project directory: Quota for clusterusers group in project directories is 0 !!!

    +

    When a project <name> is created, a group of the same name (<name>) is also created, and researchers allowed to collaborate on the project are made members of this group, which grants them access to the project directory.

    +

    Be aware that your default group as a user is clusterusers, which has (on purpose) a quota in project directories set to 0. You thus need to ensure you always write data in your project directory using the <name> group (instead of your default one). This can be achieved by ensuring the setgid bit is set on all folders in the project directories: chmod g+s [...]

    +

    When using rsync to transfer files toward the project directory /work/projects/<name> as destination, be aware that rsync will not use the correct permissions when copying files into your project directory. As indicated in the Data transfer section, you also need to:

    +
      +
    • give new files the destination-default permissions with --no-p (--no-perms), and
    • +
    • use the default group <name> of the destination dir with --no-g (--no-group)
    • +
    • (eventually) instruct rsync to preserve whatever executable permissions existed on the source file and aren't masked at the destination using --chmod=ug=rwX
    • +
    +

    Your full rsync command becomes (adapt accordingly):

    +
      rsync -avz {--update | --delete} --no-p --no-g [--chmod=ug=rwX] <source> /work/projects/<name>/[...]
    +
    + + +
    +

    For the same reason detailed above, in case you are using a build command or more generally any command meant to write data in your project directory /work/projects/<name>, you want to use sg as follows:

    +
    # /!\ ADAPT <name> accordingly
    +sg <name> -c "<command> [...]"
    +
    + +

    This is particularly important if you are building dedicated software with +Easybuild for members of the project - you typically want to do it as follows:

    +
    # /!\ ADAPT <name> accordingly
    +sg <name> -c "eb [...] -r --rebuild -D"   # Dry-run - enforce using the '<name>' group
    +sg <name> -c "eb [...] -r --rebuild"      # Dry-run - enforce using the '<name>' group
    +
    + + + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/data/sharing/index.html b/data/sharing/index.html new file mode 100644 index 00000000..7e596fb9 --- /dev/null +++ b/data/sharing/index.html @@ -0,0 +1,2985 @@ + + + + + + + + + + + + + + + + + + + + + + + + Data Sharing - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + + + +
    +
    + + + + + + + + + + +

    Data Sharing

    + +

    Security and Data Integrity

    +

Sharing data with other users must be done carefully. Permissions +should be set to the minimum necessary to achieve the desired +access. For instance, consider carefully whether it's really necessary +before sharing write permissions on data. Be sure to have archived +backups of any critical shared data. It is also important to ensure +that private login secrets (such as SSH private keys or Apache +htaccess files) are NOT shared with other users (either intentionally +or accidentally). Good practice is to keep things like this in a +separate directory that is as locked down as possible.
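For instance, a minimal sketch for locking down the usual SSH secrets (assuming the default OpenSSH layout under ~/.ssh):

chmod 700 ~/.ssh                       # only you can enter or list the directory
chmod 600 ~/.ssh/id_* ~/.ssh/config    # only you can read the keys and configuration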

    +

    The very first protection is to maintain your Home with access rights 700

    +
    chmod 700 $HOME
    +
    + +

    Sharing Data within ULHPC Facility

    +

    Sharing with Other Members of Your Project

    +

We can set up a project directory with specific group read and write permissions, allowing you to +share data with other members of your project.

    +

    Sharing with ULHPC Users Outside of Your Project

    +

    Unix File Permissions

    +

    You can share files and directories with ULHPC users outside of your +project by adjusting the unix file permissions. We have an extensive +write up of unix file permissions and how they work +here.
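As an illustration, one possible approach relies on POSIX ACLs -- a sketch only, where jdoe is a hypothetical login and ACL support depends on the underlying file system:

setfacl -m u:jdoe:x $HOME                   # let jdoe traverse (but not list) your home
setfacl -R -m u:jdoe:rX $HOME/shared_data   # grant read-only access to the shared directory
getfacl $HOME/shared_data                   # verify the resulting ACL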

    +

    Sharing Data outside of ULHPC

    +

    The IT service of the University can be contacted to easily and quickly share data over the web +using a dedicated Data Transfer service. +Open the appropriate ticket on the Service Now portal.

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/data/transfer/index.html b/data/transfer/index.html new file mode 100644 index 00000000..0eb98278 --- /dev/null +++ b/data/transfer/index.html @@ -0,0 +1,3498 @@ + + + + + + + + + + + + + + + + + + + + + + + + Data Transfer - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + + + +
    +
    + + + + + + + + + + +

    Data Transfer to/from/within UL HPC Clusters

    +

    Introduction

    +

    Directories such as $HOME, $WORK or $SCRATCH are shared among the nodes of the cluster that you are using (including the login node) via shared filesystems (SpectrumScale, Lustre) meaning that:

    +
      +
    • every file/directory pushed or created on the login node is available on the computing nodes
    • +
    • every file/directory pushed or created on the computing nodes is available on the login node
    • +
    +

    The two most common commands you can use for data transfers over SSH:

    +
      +
    • scp: for the full transfer of files and directories (only works fine for single files or directories of small/trivial size)
    • +
• rsync: a software application which synchronizes files and directories from one location to another while minimizing data transfer, as only the outdated or nonexistent elements are transferred (practically required for lengthy, complex transfers, which are more likely to be interrupted in the middle).
    • +
    +
    +

    scp or rsync?

    +

While both ensure a secure transfer of the data within an encrypted tunnel, rsync should be preferred: as mentioned in the OpenSSH 8.0 release notes: +"The scp protocol is outdated, inflexible and not readily fixed. We recommend the use of more modern protocols like sftp and rsync for file transfer instead".

    +
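For occasional interactive transfers, sftp is also an option -- a quick sketch reusing the same SSH port as in the examples below:

sftp -P 8022 yourlogin@access-iris.uni.lu
# or, with a configured ~/.ssh/config entry: sftp iris-cluster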

scp is also relatively slow compared to rsync, as exhibited for instance in the below sample Distem experiment:

    +

    +

    You will find below notes on scp usage, but kindly prefer to use rsync.

    +
    +
    Consider scp as deprecated! Click nevertheless to get usage details

    scp (see scp(1) ) or secure copy is probably the easiest of all the methods. The basic syntax is as follows:

    +
    scp [-P 8022] [-Cr] source_path destination_path
    +
    + + +
      +
    • the -P option specifies the SSH port to use (in this case 8022)
    • +
    • the -C option activates the compression (actually, it passes the -C flag to ssh(1) to enable compression).
    • +
    • the -r option states to recursively copy entire directories (in this case, scp follows symbolic links encountered in the tree traversal). Please note that in this case, you must specify the source file as a directory for this to work.
    • +
    +

    The syntax for declaring a remote path is as follows on the cluster: +
    +yourlogin@iris-cluster:path/from/homedir

    +

    Transfer from your local machine to the remote cluster login node

    +

    For instance, let's assume you have a local directory ~/devel/myproject you want to transfer to the cluster, in your remote homedir.

    +
    # /!\ ADAPT yourlogin to... your ULHPC login
    +$> scp -P 8022 -r ~/devel/myproject yourlogin@iris-cluster:
    +
    + +

    This will transfer recursively your local directory ~/devel/myproject on the cluster login node (in your homedir).

    +

    Note that if you configured (as advised elsewhere) the SSH connection in your ~/.ssh/config file, you can use a much simpler syntax:

    +
    $> scp -r ~/devel/myproject iris-cluster:
    +
    + +

    Transfer from the remote cluster front-end to your local machine

    +

    Conversely, let's assume you want to retrieve the files ~/experiments/parallel_run/* +

    $> scp -P 8022 yourlogin@iris-cluster:experiments/parallel_run/* /path/to/local/directory
    +

    +

    Again, if you configured the SSH connection in your ~/.ssh/config file, you can use a simpler syntax:

    +
    $> scp iris-cluster:experiments/parallel_run/* /path/to/local/directory
    +
    + +

    See the scp(1) man page or man scp for more details.

    +
    +

    Danger

    +

    scp SHOULD NOT be used in the following cases:

    +
      +
    • When you are copying more than a few files, as scp spawns a new process for each file and can be quite slow and resource intensive when copying a large number of files.
    • +
    • When using the -r switch, scp does not know about symbolic links and will blindly follow them, even if it has already made a copy of the file. That can lead to scp copying an infinite amount of data and can easily fill up your hard disk (or worse, a system shared disk), so be careful.
    • +
    +
    +
    +

    N.B. There are many alternative ways to transfer files in HPC platforms and you should check your options according to the problem at hand.

    +

    Windows and OS X users may wish to transfer files from their systems to the clusters' login nodes with easy-to-use GUI applications such as:

    + +

    These applications will need to be configured to connect to the frontends with the same parameters as discussed on the SSH access page.

    +

    Using rsync

    +

    The clever alternative to scp is rsync, which has the advantage of transferring only the files which differ between the source and the destination. This feature is often referred to as fast incremental file transfer. Additionally, symbolic links can be preserved. +The typical syntax of rsync (see rsync(1) ) for the cluster is similar to the one of scp:

    +
    # /!\ ADAPT </path/to/source> and </path/to/destination>
    +# From LOCAL directory (/path/to/local/source) toward REMOTE server <hostname>
    +rsync --rsh='ssh -p 8022' -avzu /path/to/local/source  [user@]hostname:/path/to/destination
    +# Ex: from REMOTE server <hostname> to LOCAL directory
    +rsync --rsh='ssh -p 8022' -avzu [user@]hostname:/path/to/source  /path/to/local/destination
    +
    + +
      +
    • the --rsh option specifies the connector to use (here SSH on port 8022)
    • +
    • the -a option corresponds to the "Archive" mode. Most likely you should always keep this on as it preserves file permissions and does not follow symlinks.
    • +
    • the -v option enables the verbose mode
    • +
• the -z option enables compression: each file is compressed as it gets sent over the pipe. This can greatly decrease transfer time, depending on what sort of files you are copying.
    • +
• the -u option (or --update) corresponds to an updating process which skips files that are newer on the receiver. At this level, you may prefer the more dangerous option --delete, which deletes extraneous files from the destination directories -- see the preview sketch after this list. +Just like scp, the syntax for qualifying a remote path is as follows on the cluster: yourlogin@iris-cluster:path/from/homedir
    • +
    +
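Since --delete is destructive, it is worth previewing its effect with rsync's built-in dry-run mode first -- a sketch following the same syntax as below:

# Preview only: list what would be transferred or deleted, without doing it
rsync --rsh='ssh -p 8022' -avzu --delete --dry-run ~/devel/myproject yourlogin@access-iris.uni.lu:
# Run it for real once the preview looks right
rsync --rsh='ssh -p 8022' -avzu --delete ~/devel/myproject yourlogin@access-iris.uni.lu: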

    Transfer from your local machine to the remote cluster

    +

    Coming back to the previous examples, let's assume you have a local directory ~/devel/myproject you want to transfer to the cluster, in your remote homedir. In that case:

    +

    # /!\ ADAPT yourlogin to... your ULHPC login
    +$> rsync --rsh='ssh -p 8022' -avzu ~/devel/myproject yourlogin@access-iris.uni.lu:
    +
    +This will synchronize your local directory ~/devel/myproject on the cluster front-end (in your homedir).

    +
    +

    Transfer to Iris, Aion or both?

    +

The above example targets the access server of Iris. +Actually, you could have targeted the access server of Aion: it doesn't matter since the storage is SHARED between both clusters.

    +
    +

    Note that if you configured (as advised above) your SSH connection in your ~/.ssh/config file with a dedicated SSH entry {iris,aion}-cluster, you can use a simpler syntax:

    +
    $> rsync -avzu ~/devel/myproject iris-cluster:
    +# OR (it doesn't matter)
    +$> rsync -avzu ~/devel/myproject aion-cluster:
    +
    + +

    Transfer from your local machine to a project directory on the remote cluster

    +

When transferring data to a project directory you should keep the group and the permissions imposed by the project directory and its quota. Therefore you need to add the options --no-p --no-g to your rsync command:

    +
    $> rsync -avP --no-p --no-g ~/devel/myproject iris-cluster:/work/projects/myproject/
    +
    + +

    Transfer from the remote cluster to your local machine

    +

    Conversely, let's assume you want to synchronize (retrieve) the remote files ~/experiments/parallel_run/* on your local machine:

    +
    # /!\ ADAPT yourlogin to... your ULHPC login
    +$> rsync --rsh='ssh -p 8022' -avzu yourlogin@access-iris.uni.lu:experiments/parallel_run /path/to/local/directory
    +
    + +

    Again, if you configured the SSH connection in your ~/.ssh/config file, you can use a simpler syntax:

    +
    $> rsync -avzu iris-cluster:experiments/parallel_run /path/to/local/directory
    +# OR (it doesn't matter)
    +$> rsync -avzu aion-cluster:experiments/parallel_run /path/to/local/directory
    +
    + +

    As always, see the man page or man rsync for more details.

    +
    Windows Subsystem for Linux (WSL)

    In WSL, the home directory in Linux virtual machines is not your home directory in Windows. If you want to access the files that you downloaded with rsync inside a Linux virtual machine, please consult the WSL documentation and the file system section in particular.
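For instance, a sketch assuming the default WSL configuration, where Windows drives are mounted under /mnt:

# From a WSL shell: push a folder living on the Windows C: drive to the cluster
rsync -avzu /mnt/c/Users/<WindowsUser>/Documents/myproject iris-cluster: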

    +
    +

    Data Transfer within Project directories

    +

The ULHPC facility features a Global Project directory $PROJECTHOME hosted within the GPFS/SpectrumScale file system. +You have to pay particular attention when using rsync to transfer data within your project directory, as depicted below.

    + + +
    +

    Access rights to project directory: Quota for clusterusers group in project directories is 0 !!!

    +

When a project <name> is created, a group of the same name (<name>) is also created, and researchers allowed to collaborate on the project are made members of this group, which grants them access to the project directory.

    +

Be aware that your default group as a user is clusterusers, which has (on purpose) a quota in project directories set to 0. +You thus need to ensure you always write data in your project directory using the <name> group (instead of your default one). +This can be achieved by ensuring the setgid bit is set on all folders in the project directories: chmod g+s [...]

    +

When using rsync to transfer files toward the project directory /work/projects/<name> as destination, be aware that rsync will not use the correct permissions when copying files into your project directory. As indicated in the Data transfer section, you also need to:

    +
      +
    • give new files the destination-default permissions with --no-p (--no-perms), and
    • +
    • use the default group <name> of the destination dir with --no-g (--no-group)
    • +
    • (eventually) instruct rsync to preserve whatever executable permissions existed on the source file and aren't masked at the destination using --chmod=ug=rwX
    • +
    +

    Your full rsync command becomes (adapt accordingly):

    +
      rsync -avz {--update | --delete} --no-p --no-g [--chmod=ug=rwX] <source> /work/projects/<name>/[...]
    +
    + + +
    +

For the same reason detailed above, in case you are using a build command or +more generally any command meant to write data in your project directory +/work/projects/<name>, you want to use the +sg command as follows:

    +
    # /!\ ADAPT <name> accordingly
    +sg <name> -c "<command> [...]"
    +
    + +

    This is particularly important if you are building dedicated software with +Easybuild for members of the project - you typically want to do it as follows:

    +
    # /!\ ADAPT <name> accordingly
    +sg <name> -c "eb [...] -r --rebuild -D"   # Dry-run - enforce using the '<name>' group
    +sg <name> -c "eb [...] -r --rebuild"      # Dry-run - enforce using the '<name>' group
    +
    + + + +
    Debugging quota issues

Sometimes, when copying files with rsync or scp and you are not careful with the options of these commands, you end up with files that have incorrect permissions and ownership. If a directory is copied with the wrong permissions and ownership, all files created within the directory may maintain the incorrect permissions and ownership. Typical issues that you may encounter include:

    +
      +
• If a directory is copied incorrectly from a project directory to your home directory, the contents of the directory may continue counting towards the group quota instead of your personal quota, and data usage may be misreported by the df-ulhpc utility. Actual data usage takes into account the file group, not only its location.
    • +
• If a directory is copied incorrectly from a personal directory or another machine to a project directory, you may be unable to create files, since the clusterusers group has no quota inside project directories. Note that the group special permission (g+s, the setgid bit) on directories ensures that all files created in the directory will have the group of the directory instead of the group of the process that creates the file.
    • +
    +

Typical resolution techniques involve resetting the correct file ownership and permissions:

    +
    +
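# Data that belongs in a project directory: restore the project group and the setgid bit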
    chown -R <username>:<project name> <path to directory or file>
    +find <path to directory or file> -type d | xargs -I % chmod g+s '%'
    +
    + +
    +
    +
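# Data that belongs in your personal directories: restore the default clusterusers group and drop the setgid bit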
    chown -R <username>:clusterusers <path to directory or file>
    +find <path to directory or file> -type d | xargs -I % chmod g-s '%'
    +
    + +
    +
    +
    +

    Using MobaXterm (Windows)

    +

    If you are under Windows and you have MobaXterm installed and configured, you probably want to use it to transfer your files to the clusters. Here are the steps to use rsync inside MobaXterm in Windows.

    +
    +

    Warning

    +

    Be aware that you SHOULD enable MobaXterm SSH Agent -- see SSH Agent instructions for more instructions.

    +
    +

    Using a local bash, transfer your files

    +
      +
    • +

      Open a local "bash" shell. Click on Start local terminal on the welcome page of MobaXterm.

      +
    • +
    • +

      Find the location of the files you want to transfer. They should be located under /drives/<name of your disk>. You will have to use the Linux command line to move from one directory to the other. The cd command is used to change the current directory and ls to list files. For example, if your files are under C:\\Users\janedoe\Downloads\ you should then go to /drives/c/Users/janedoe/Downloads/ with this command:

      +
    • +
    +
    cd /drives/c/Users/janedoe/Downloads/
    +
    + +

    Then list the files with ls command. You should see the list of your data files.

    +
      +
    • +

When you have retrieved the location of your files, we can begin the transfer with rsync. For example /drives/c/Users/janedoe/Downloads/ (watch out for the trailing / character: with it, rsync transfers the contents of the directory; without it, it transfers the directory itself).

      +
    • +
    • +

Launch the rsync command with these parameters to transfer all the content of the Downloads directory to the /isilon/projects/market_data/ directory on the cluster (the syntax is very important, be careful)

      +
    • +
    +
    rsync -avzpP -e "ssh -p 8022" /drives/c/Users/janedoe/Downloads/ yourlogin@access-iris.uni.lu:/isilon/projects/market_data/
    +
    + +
      +
    • You should see the output of transfer in progress. Wait for it to finish (it can be very long).
    • +
    +

    +

    Interrupt and resume a transfer in progress

    +
      +
    • +

      If you want to interrupt the transfer to resume it later, press Ctrl-C and exit MobaXterm.

      +
    • +
    • +

      To resume a transfer, go in the right location and execute the rsync command again. Only the files that have not been transferred will be transferred again.

      +
    • +
    +

    Alternative approaches

    +

    You can also consider alternative approaches to synchronize data with the cluster login node:

    +
      +
    • rely on a versioning system such as Git; this approach works well for source code trees;
    • +
    • mount your remote homedir by SSHFS:
        +
• on Mac OS X, you should consider installing MacFusion for this purpose, whereas
      • +
• on Linux, just use the command-line sshfs or mc;
      • +
      +
    • +
• use GUI tools like FileZilla, Cyberduck, or WinSCP (or proprietary options like ExpanDrive or ForkLift 3).
    • +
    +

    SSHFS

    +

SSHFS (SSH Filesystem) is a file system client that mounts directories located on a remote server onto a local directory over a normal SSH connection. Install the required packages if they are not already available on your system.

    +
    +

    # Debian-like
    +sudo apt-get install sshfs
    +# RHEL-like
    +sudo yum install sshfs
    +
    +You may need to add yourself to the fuse group.
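A sketch for checking and fixing that on your own Linux machine (requires administrative rights):

getent group fuse              # does a 'fuse' group exist on this distribution?
sudo usermod -aG fuse $USER    # add yourself, then log out and back in for it to take effect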

    +
    +
    +

    # Assuming HomeBrew -- see https://brew.sh
    +brew install osxfuse sshfs
    +
    +You can also directly install macFUSE from: https://osxfuse.github.io/. You must reboot for the installation of osxfuse to take effect. You can then update to the latest version.

    +
    +
    +

    With SSHFS any user can mount their ULHPC home directory onto a local workstation through an ssh connection. The CLI format is as follows: +

    sshfs [user@]host:[dir] mountpoint [options]
    +

    +

    Proceed as follows (assuming you have a working SSH connection): +

    # Create a local directory for the mounting point, e.g. ~/ulhpc
    +mkdir -p ~/ulhpc
    +# Mount the remote file system
    +sshfs iris-cluster: ~/ulhpc -o follow_symlinks,reconnect,dir_cache=no
    +
+Note that leaving the [dir] argument blank mounts the user's home directory by default. The options (-o) used are:

    +
      +
• follow_symlinks presents symbolic links in the remote file system as regular files in the local file system, useful when the symbolic link points outside the mounted directory;
    • +
    • reconnect allows the SSHFS client to automatically reconnect to server if connection is interrupted;
    • +
    • dir_cache enables or disables the directory cache which holds the names of directory entries (can be slow for mounted remote directories with many files).
    • +
    +

    When you no longer need the mounted remote directory, you must unmount your remote file system:

    +
    +
    fusermount -u ~/ulhpc
    +
    + +
    +
    +
    diskutil umount ~/ulhpc
    +
    + +
    +
    +

    Transfers between long term storage and the HPC facilities

    +

    The university provides central data storage services for all employees and students. The data are stored securely on the university campus and are managed by the IT department. The storage servers most commonly used at the university are

    +
      +
    • Atlas (atlas.uni.lux) for staff members, and
    • +
    • Poseidon (poseidon.uni.lux) for students.
    • +
    +

    For more details on the university central storage, you can have a look at

    + +
    +

    Connecting to central data storage services from a personal machine

    +

    The examples presented here are targeted to the university HPC machines. To connect to the university central data storage with a (Linux) personal machine from outside of the university network, you need to start first a VPN connection.

    +
    +

    The SMB shares exported for directories in the central data storage are meant to be accessed interactively. Transfer your data manually before and after your jobs are run. You can mount directories from the central data storage in the login nodes, and access the central data storage through the interface of smbclient from both the compute nodes during interactive jobs and the login nodes.

    +
    +

    Never store your password in plain text

    +

Unlike mounting with sshfs, you will always need to enter your password to access a directory in an SMB share. Avoid storing your password in any manner that makes it recoverable from plain text. For instance, do not create job scripts that contain your password in plain text just to move data to Atlas within a job.

    +
    +

    The following commands target Atlas, but commands for Poseidon are similar.

    +

    Mounting an SMB share to a login node

    +

    The UL HPC team provides the smb-storage script to mount SMB shares in login nodes.

    +
      +
• There exists an SMB share users where all staff members have a directory named after their user name (name.surname). To mount your directory in a shell session at a login node execute the command +
      smb-storage mount name.surname
      +
      +and your directory will be mounted to the default mount location: +
      ~/atlas.uni.lux-users-name.surname
      +
    • +
    • To mount a project share project_name in a shell session at a login node execute the command +
      smb-storage mount name.surname --project project_name
      +
      +and the share will be mounted in the default mount location: +
      ~/atlas.uni.lux-project_name
      +
    • +
    • To unmount any share, simply call the unmount subcommand with the mount point path, for instance +
      smb-storage unmount ~/atlas.uni.lux-users-name.surname
      +
      +or: +
      smb-storage unmount ~/atlas.uni.lux-project_name
      +
    • +
    +

    The smb-storage script provides optional flags to modify the default options:

    +
      +
• --help prints information about the usage and options of the script;
    • +
    • --server <server url> specifies the server from which the SMB share is mounted (defaults to --server atlas.uni.lux if not specified, use --server poseidon.uni.lux to mount a share from Poseidon);
    • +
    • --project <project name> [<directory in project>] mounts the share <project name> and creates a symbolic link to the optionally provided location <directory in project>, or to the project root directory if a location is not provided (defaults to --project users name.surname if not specified);
    • +
• --mountpoint <path> selects the path where the share directory will be available (defaults to ~/<server url>-<project name>-<directory in project> if not specified);
    • +
    • --debug prints details of the operations performed by the mount script.
    • +
    +
    +

    Best practices

    +

Mounted SMB shares will be available in the login node, and the mount point will appear as a dead symbolic link in compute nodes. This is by design: you can only mount SMB shares in login nodes because SMB shares are meant to be used in interactive sessions.

    +

    Mounted shares will remain available as long as the login session where the share was mounted remains active. You can mount shares in a tmux session in a login node, and access the share from any other session in the login node.

    +
    +
    Details of the mounting process

There exists an SMB share users where all staff members have a directory named after their user name (name.surname). All other projects have an SMB share named after the project name (in lowercase characters).

    +

The smb-storage script uses gio mount to mount SMB shares. Shares are mounted under a specially named mount point in /run/user/${UID}/gvfs. Then, smb-storage creates a symbolic link to the requested directory in the project at the path specified by the --mountpoint option.

    +

    During unmounting, the symbolic links are deleted by the smb-storage script and then the shares mounted in /run/user/${UID}/gvfs are unmounted and their mount points are removed using gio mount --unmount. If a session with mounted SMB shares terminates without unmounting the shares, the shares in /run/user/${UID}/gvfs will be unmounted and their mount points deleted, but the symbolic links created by smb-storage must be removed manually.

    +
    +

    Accessing SMB shares with smbclient

    +

    The smbclient program is available in both login and compute nodes. In compute nodes the only way to access SMB shares is through the client program. With the SMB client one can connect to the users share and browse their personal directory with the command: +

    smbclient //atlas.uni.lux/users --directory='name.surname' --user=name.surname@uni.lu
    +
    +Project directories are accessed with the command: +
    smbclient //atlas.uni.lux/project_name --user=name.surname@uni.lu
    +

    +

    Type help to get a list of all available commands or help (command_name) to get more information for a specific command. Some useful commands are

    +
      +
    • ls to list all the files in a directory,
    • +
    • mkdir (directory_name) to create a directory,
    • +
    • rm (file_name) to remove a file,
    • +
    • rmdir (directory_name) to remove a directory,
    • +
• scopy (source_full_path) (destination_full_path) to copy a file within the SMB shared directory (server side),
    • +
    • get (file_name) [destination] to move a file from Atlas to the local machine (placed in the working directory, if the destination is not specified), and
    • +
    • put (file_name) [destination] to move a file to Atlas from the local machine (placed in the working directory, if a full path is not specified),
    • +
    • mget (file name pattern) [destination] to download multiple files, and
    • +
    • mput (file name pattern) [destination] to upload multiple files.
    • +
    +

The patterns used in mget/mput are either normal file names or glob expressions (e.g. *.txt).

    +

    Connecting into an interactive SMB session means that you will have to maintain a shell session dedicated to SMB. However, it saves you from entering your password for every operation. If you would like to perform a single operation and exit, you can avoid maintaining an interactive session with the --command flag. For instance, +

    smbclient //atlas.uni.lux/users --directory='name.surname' --user=name.surname@uni.lu --command='get "full path/to/remote file.txt" "full path/to/local file.txt"'
    +
    +copies a file from the SMB directory to the local machine. Notice the use of double quotes to handle file names with spaces. Similarly, +
    smbclient //atlas.uni.lux/users --directory='name.surname' --user=name.surname@uni.lu --command='put "full path/to/local file.txt" "full path/to/remote file.txt"'
    +
    +copies a file from the local machine to the SMB directory.

    +

    Moving whole directories is a bit more involved, as it requires setting some state variables for the session, both for interactive and non-interactive sessions. To download a directory for instance, use +

    smbclient //atlas.uni.lux/users --directory='name.surname' --user=name.surname@uni.lu --command='recurse ON; prompt OFF; mget "full path/to/remote directory" "full path/to/local directory"'
    +
    +and to upload a directory use +
smbclient //atlas.uni.lux/users --directory='name.surname' --user=name.surname@uni.lu --command='recurse ON; prompt OFF; mput "full path/to/local directory" "full path/to/remote directory"'
    +
    +respectively. The session option

    +
      +
    • recurse ON enables recursion into directories, and the option
    • +
    • prompt OFF disables prompting for confirmation before moving each file.
    • +
    +

    Sources

    + +

    Special transfers

    +

    Sometimes you may have the case that a lot of files need to go from point A to B over a Wide Area Network (eg. across the Atlantic). Since packet latency and other factors on the network will naturally slow down the transfers, you need to find workarounds, typically with either rsync or tar.
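For instance, a common workaround for trees containing many small files is to stream a single (compressed) tar archive through SSH instead of copying file by file -- a sketch only, with illustrative paths:

tar -C /path/to/local/source -czf - . | ssh -p 8022 yourlogin@access-iris.uni.lu 'mkdir -p /path/to/destination && tar -C /path/to/destination -xzf -'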

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/development/build-tools/easybuild/index.html b/development/build-tools/easybuild/index.html new file mode 100644 index 00000000..6f93b8e3 --- /dev/null +++ b/development/build-tools/easybuild/index.html @@ -0,0 +1,3381 @@ + + + + + + + + + + + + + + + + + + + + + + + + Building [custom] software with EasyBuild on the UL HPC platform - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + + + +
    +
    + + + + + + + + + + +

    Building [custom] software with EasyBuild on the UL HPC platform

    +

    EasyBuild can be used to ease, automate and script the build of software on the UL HPC platforms.

    +

Indeed, as researchers involved in many cutting-edge and hot topics, you probably have access to many theoretical resources to understand the surrounding concepts. Yet you will normally also want to test the corresponding software. +Traditionally, this part is rather time-consuming and frustrating, especially when the developers did not rely on a "regular" build framework such as CMake or the autotools (i.e. with build instructions such as configure --prefix <path> && make && make install).

    +

And when it comes to a build adapted to an HPC system, you are more or less forced to perform a custom build on the target machine to ensure you will get the best possible performance. +EasyBuild is one approach to facilitate this step.

    +

    +

    EasyBuild is a tool that allows to perform automated and reproducible compilation and installation of software. A large number of scientific software are supported (1504 supported software packages in the last release 3.6.1) -- see also What is EasyBuild?

    +

All builds and installations are performed at user level, so you don't need admin (i.e. root) rights. +The software is installed in your home directory (by default in $HOME/.local/easybuild/software/) and a module file is generated (by default in $HOME/.local/easybuild/modules/) to use the software.

    +

    EasyBuild relies on two main concepts: Toolchains and EasyConfig files.

    +

    A toolchain corresponds to a compiler and a set of libraries which are commonly used to build a software. +The two main toolchains frequently used on the UL HPC platform are the foss ("Free and Open Source Software") and the intel one.

    +
      +
    1. foss is based on the GCC compiler and on open-source libraries (OpenMPI, OpenBLAS, etc.).
    2. +
    3. intel is based on the Intel compiler and on Intel libraries (Intel MPI, Intel Math Kernel Library, etc.).
    4. +
    +

An EasyConfig file is a simple text file that describes the build process of a piece of software. For most software that uses standard procedures (like configure, make and make install), this file is very simple. +Many EasyConfig files are already provided with EasyBuild. +By default, EasyConfig files and generated modules are named using the following convention: +<Software-Name>-<Software-Version>-<Toolchain-Name>-<Toolchain-Version>. +However, we use a hierarchical approach where the software is classified under a category (or class) -- see the CategorizedModuleNamingScheme option for the EASYBUILD_MODULE_NAMING_SCHEME environment variable -- meaning that the layout will respect the following hierarchy: +<Software-Class>/<Software-Name>/<Software-Version>-<Toolchain-Name>-<Toolchain-Version>

    +

    Additional details are available on EasyBuild website:

    + +

    a. Installation

    + +

    What is important for the installation of EasyBuild are the following variables:

    +
      +
    • EASYBUILD_PREFIX: where to install local modules and software, i.e. $HOME/.local/easybuild
    • +
    • EASYBUILD_MODULES_TOOL: the type of modules tool you are using, i.e. LMod in this case
    • +
    • EASYBUILD_MODULE_NAMING_SCHEME: the way the software and modules should be organized (flat view or hierarchical) -- we're advising on CategorizedModuleNamingScheme
    • +
    +

    Add the following entries to your ~/.bashrc (use your favorite CLI editor like nano or vim):

    +
    # Easybuild
    +export EASYBUILD_PREFIX=$HOME/.local/easybuild
    +export EASYBUILD_MODULES_TOOL=Lmod
    +export EASYBUILD_MODULE_NAMING_SCHEME=CategorizedModuleNamingScheme
    +# Use the below variable to run:
    +#    module use $LOCAL_MODULES
    +#    module load tools/EasyBuild
    +export LOCAL_MODULES=${EASYBUILD_PREFIX}/modules/all
    +
    +alias ma="module avail"
    +alias ml="module list"
    +function mu(){
    +   module use $LOCAL_MODULES
    +   module load tools/EasyBuild
    +}
    +
    + +

    Then source this file to expose the environment variables:

    +
    $> source ~/.bashrc
    +$> echo $EASYBUILD_PREFIX
    +/home/users/<login>/.local/easybuild
    +
    + +

    Now let's install EasyBuild following the official procedure. Install EasyBuild in a temporary directory and use this temporary installation to build an EasyBuild module in your $EASYBUILD_PREFIX:

    +
    # pick installation prefix, and install EasyBuild into it
    +export EB_TMPDIR=/tmp/$USER/eb_tmp
    +python3 -m pip install --ignore-installed --prefix $EB_TMPDIR easybuild
    +
    +# update environment to use this temporary EasyBuild installation
    +export PATH=$EB_TMPDIR/bin:$PATH
    +export PYTHONPATH=$(/bin/ls -rtd -1 $EB_TMPDIR/lib*/python*/site-packages | tail -1):$PYTHONPATH
    +export EB_PYTHON=python3
    +
    +# install Easybuild in your $EASYBUILD_PREFIX
    +eb --install-latest-eb-release --prefix $EASYBUILD_PREFIX
    +
    + +

    Now you can use your freshly built software. The main EasyBuild command is eb:

    +
    $> eb --version             # expected ;)
    +-bash: eb: command not found
    +
    +# Load the newly installed Easybuild
    +$> echo $MODULEPATH
    +/opt/apps/resif/data/stable/default/modules/all/
    +
    +$> module use $LOCAL_MODULES
    +$> echo $MODULEPATH
    +/home/users/<login>/.local/easybuild/modules/all:/opt/apps/resif/data/stable/default/modules/all
    +
    +$> module spider Easybuild
    +$> module load tools/EasyBuild       # TAB is your friend...
    +$> eb --version
    +This is EasyBuild 3.6.1 (framework: 3.6.1, easyblocks: 3.6.1) on host iris-001.
    +
    + +

Since you are going to use the above commands quite often to use locally built modules and load EasyBuild, an alias mu is provided and can be used from now on. Use it now.

    +

    $> mu
    +$> module avail     # OR 'ma'
    +
    +To get help on the EasyBuild options, use the -h or -H option flags:

    +
    $> eb -h
    +$> eb -H
    +
    + + +

    b. Local vs. global usage

    +

    As you probably guessed, we are going to use two places for the installed software:

    +
      +
    • local builds ~/.local/easybuild (see $LOCAL_MODULES)
    • +
    • global builds (provided to you by the UL HPC team) in /opt/apps/resif/data/stable/default/modules/all (see default $MODULEPATH).
    • +
    +

    Default usage (with the eb command) would install your software and modules in ~/.local/easybuild.

    +

    Before that, let's explore the basic usage of EasyBuild and the eb command.

    +
# Search for an Easybuild recipe with 'eb -S <pattern>'
    +$> eb -S Spark
    +CFGS1=/opt/apps/resif/data/easyconfigs/ulhpc/default/easybuild/easyconfigs/s/Spark
    +CFGS2=/home/users/<login>/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/s/Spark
    + * $CFGS1/Spark-2.1.1.eb
    + * $CFGS1/Spark-2.3.0-intel-2018a-Hadoop-2.7-Java-1.8.0_162-Python-3.6.4.eb
    + * $CFGS2/Spark-1.3.0.eb
    + * $CFGS2/Spark-1.4.1.eb
    + * $CFGS2/Spark-1.5.0.eb
    + * $CFGS2/Spark-1.6.0.eb
    + * $CFGS2/Spark-1.6.1.eb
    + * $CFGS2/Spark-2.0.0.eb
    + * $CFGS2/Spark-2.0.2.eb
    + * $CFGS2/Spark-2.2.0-Hadoop-2.6-Java-1.8.0_144.eb
    + * $CFGS2/Spark-2.2.0-Hadoop-2.6-Java-1.8.0_152.eb
    + * $CFGS2/Spark-2.2.0-intel-2017b-Hadoop-2.6-Java-1.8.0_152-Python-3.6.3.eb
    +
    + +

    c. Build software using provided EasyConfig file

    +

In this part, we propose to build High Performance Linpack (HPL) using EasyBuild. +HPL is supported by EasyBuild, meaning that an EasyConfig file allowing to build HPL is already provided with EasyBuild.

    +

    First of all, let's check if that software is not available by default:

    +
    $> module spider HPL
    +
    +Lmod has detected the following error: Unable to find: "HPL"
    +
    + +

    Then, search for available EasyConfig files with HPL in their name. The EasyConfig files are named with the .eb extension.

    +
# Search for an Easybuild recipe with 'eb -S <pattern>'
    +$> eb -S HPL-2.2
    +CFGS1=/home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/h/HPL
    + * $CFGS1/HPL-2.2-foss-2016.07.eb
    + * $CFGS1/HPL-2.2-foss-2016.09.eb
    + * $CFGS1/HPL-2.2-foss-2017a.eb
    + * $CFGS1/HPL-2.2-foss-2017b.eb
    + * $CFGS1/HPL-2.2-foss-2018a.eb
    + * $CFGS1/HPL-2.2-fosscuda-2018a.eb
    + * $CFGS1/HPL-2.2-giolf-2017b.eb
    + * $CFGS1/HPL-2.2-giolf-2018a.eb
    + * $CFGS1/HPL-2.2-giolfc-2017b.eb
    + * $CFGS1/HPL-2.2-gmpolf-2017.10.eb
    + * $CFGS1/HPL-2.2-goolfc-2016.08.eb
    + * $CFGS1/HPL-2.2-goolfc-2016.10.eb
    + * $CFGS1/HPL-2.2-intel-2017.00.eb
    + * $CFGS1/HPL-2.2-intel-2017.01.eb
    + * $CFGS1/HPL-2.2-intel-2017.02.eb
    + * $CFGS1/HPL-2.2-intel-2017.09.eb
    + * $CFGS1/HPL-2.2-intel-2017a.eb
    + * $CFGS1/HPL-2.2-intel-2017b.eb
    + * $CFGS1/HPL-2.2-intel-2018.00.eb
    + * $CFGS1/HPL-2.2-intel-2018.01.eb
    + * $CFGS1/HPL-2.2-intel-2018.02.eb
    + * $CFGS1/HPL-2.2-intel-2018a.eb
    + * $CFGS1/HPL-2.2-intelcuda-2016.10.eb
    + * $CFGS1/HPL-2.2-iomkl-2016.09-GCC-4.9.3-2.25.eb
    + * $CFGS1/HPL-2.2-iomkl-2016.09-GCC-5.4.0-2.26.eb
    + * $CFGS1/HPL-2.2-iomkl-2017.01.eb
    + * $CFGS1/HPL-2.2-intel-2017.02.eb
    + * $CFGS1/HPL-2.2-intel-2017.09.eb
    + * $CFGS1/HPL-2.2-intel-2017a.eb
    + * $CFGS1/HPL-2.2-intel-2017b.eb
    + * $CFGS1/HPL-2.2-intel-2018.00.eb
    + * $CFGS1/HPL-2.2-intel-2018.01.eb
    + * $CFGS1/HPL-2.2-intel-2018.02.eb
    + * $CFGS1/HPL-2.2-intel-2018a.eb
    + * $CFGS1/HPL-2.2-intelcuda-2016.10.eb
    + * $CFGS1/HPL-2.2-iomkl-2016.09-GCC-4.9.3-2.25.eb
    + * $CFGS1/HPL-2.2-iomkl-2016.09-GCC-5.4.0-2.26.eb
    + * $CFGS1/HPL-2.2-iomkl-2017.01.eb
    + * $CFGS1/HPL-2.2-iomkl-2017a.eb
    + * $CFGS1/HPL-2.2-iomkl-2017b.eb
    + * $CFGS1/HPL-2.2-iomkl-2018.02.eb
    + * $CFGS1/HPL-2.2-iomkl-2018a.eb
    + * $CFGS1/HPL-2.2-pomkl-2016.09.eb
    +
    + +

    We are going to build HPL 2.2 against the intel toolchain, typically the 2017a version which is available by default on the platform.

    +

Pick the corresponding recipe (for instance HPL-2.2-intel-2017a.eb), and install it with

    +
       eb <name>.eb [-D] -r
    +
    + + +
      +
    • -D enables the dry-run mode to check what's going to be install -- ALWAYS try it first
    • +
    • -r enables the robot mode to automatically install all dependencies while searching for easyconfigs in a set of pre-defined directories -- you can also prepend new directories to search for eb files (like the current directory $PWD) using the option and syntax --robot-paths=$PWD: (do not forget the ':'). See Controlling the robot search path documentation
    • +
    • The $CFGS<n>/ prefix should be dropped unless you know what you're doing (and thus have previously defined the variable -- see the first output of the eb -S [...] command).
    • +
    +

    So let's install HPL version 2.2 and FIRST check which dependencies are satisfied with -Dr:

    +
    $> eb HPL-2.2-intel-2017a.eb -Dr
    +== temporary log file in case of crash /tmp/eb-CTC2hq/easybuild-gfLf1W.log
    +Dry run: printing build status of easyconfigs and dependencies
    +CFGS=/home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs
    + * [x] $CFGS/m/M4/M4-1.4.17.eb (module: devel/M4/1.4.17)
    + * [x] $CFGS/b/Bison/Bison-3.0.4.eb (module: lang/Bison/3.0.4)
    + * [x] $CFGS/f/flex/flex-2.6.0.eb (module: lang/flex/2.6.0)
    + * [x] $CFGS/z/zlib/zlib-1.2.8.eb (module: lib/zlib/1.2.8)
    + * [x] $CFGS/b/binutils/binutils-2.27.eb (module: tools/binutils/2.27)
    + * [x] $CFGS/g/GCCcore/GCCcore-6.3.0.eb (module: compiler/GCCcore/6.3.0)
    + * [x] $CFGS/m/M4/M4-1.4.18-GCCcore-6.3.0.eb (module: devel/M4/1.4.18-GCCcore-6.3.0)
    + * [x] $CFGS/z/zlib/zlib-1.2.11-GCCcore-6.3.0.eb (module: lib/zlib/1.2.11-GCCcore-6.3.0)
    + * [x] $CFGS/h/help2man/help2man-1.47.4-GCCcore-6.3.0.eb (module: tools/help2man/1.47.4-GCCcore-6.3.0)
    + * [x] $CFGS/b/Bison/Bison-3.0.4-GCCcore-6.3.0.eb (module: lang/Bison/3.0.4-GCCcore-6.3.0)
    + * [x] $CFGS/f/flex/flex-2.6.3-GCCcore-6.3.0.eb (module: lang/flex/2.6.3-GCCcore-6.3.0)
    + * [x] $CFGS/b/binutils/binutils-2.27-GCCcore-6.3.0.eb (module: tools/binutils/2.27-GCCcore-6.3.0)
    + * [x] $CFGS/i/icc/icc-2017.1.132-GCC-6.3.0-2.27.eb (module: compiler/icc/2017.1.132-GCC-6.3.0-2.27)
    + * [x] $CFGS/i/ifort/ifort-2017.1.132-GCC-6.3.0-2.27.eb (module: compiler/ifort/2017.1.132-GCC-6.3.0-2.27)
    + * [x] $CFGS/i/iccifort/iccifort-2017.1.132-GCC-6.3.0-2.27.eb (module: toolchain/iccifort/2017.1.132-GCC-6.3.0-2.27)
    + * [x] $CFGS/i/impi/impi-2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27.eb (module: mpi/impi/2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27)
    + * [x] $CFGS/i/iimpi/iimpi-2017a.eb (module: toolchain/iimpi/2017a)
    + * [x] $CFGS/i/imkl/imkl-2017.1.132-iimpi-2017a.eb (module: numlib/imkl/2017.1.132-iimpi-2017a)
    + * [x] $CFGS/i/intel/intel-2017a.eb (module: toolchain/intel/2017a)
    + * [ ] $CFGS/h/HPL/HPL-2.2-intel-2017a.eb (module: tools/HPL/2.2-intel-2017a)
    +== Temporary log file(s) /tmp/eb-CTC2hq/easybuild-gfLf1W.log* have been removed.
    +== Temporary directory /tmp/eb-CTC2hq has been removed.
    +
    + +

As can be seen, there is a single element to install and this has not been done so far (box not checked). All the dependencies are already present (box checked). +Let's really install the selected software -- you may want to prefix the eb command with time to collect the installation time:

    +
    $> time eb HPL-2.2-intel-2017a.eb -r       # Remove the '-D' (dry-run) flags
    +== temporary log file in case of crash /tmp/eb-nub_oL/easybuild-J8sNzx.log
    +== resolving dependencies ...
    +== processing EasyBuild easyconfig /home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/h/HPL/HPL-2.2-intel-2017a.eb
    +== building and installing tools/HPL/2.2-intel-2017a...
    +== fetching files...
    +== creating build dir, resetting environment...
    +== unpacking...
    +== patching...
    +== preparing...
    +== configuring...
    +== building...
    +== testing...
    +== installing...
    +== taking care of extensions...
    +== postprocessing...
    +== sanity checking...
    +== cleaning up...
    +== creating module...
    +== permissions...
    +== packaging...
    +== COMPLETED: Installation ended successfully
    +== Results of the build can be found in the log file(s) /home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a/easybuild/easybuild-HPL-2.2-20180608.094831.log
    +== Build succeeded for 1 out of 1
    +== Temporary log file(s) /tmp/eb-nub_oL/easybuild-J8sNzx.log* have been removed.
    +== Temporary directory /tmp/eb-nub_oL has been removed.
    +
    +real    0m56.472s
    +user    0m15.268s
    +sys     0m19.998s
    +
    + +

    Check the installed software:

    +
    $> module av HPL
    +
    +------------------------- /home/users/<login>/.local/easybuild/modules/all -------------------------
    +   tools/HPL/2.2-intel-2017a
    +
    +Use "module spider" to find all possible modules.
    +Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
    +
    +$> module spider HPL
    +
    +----------------------------------------------------------------------------------------------------
    +  tools/HPL: tools/HPL/2.2-intel-2017a
    +----------------------------------------------------------------------------------------------------
    +    Description:
    +      HPL is a software package that solves a (random) dense linear system in double precision
    +      (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable
    +      as well as freely available implementation of the High Performance Computing Linpack Benchmark.
    +
    +    This module can be loaded directly: module load tools/HPL/2.2-intel-2017a
    +
    +    Help:
    +
    +      Description
    +      ===========
    +      HPL is a software package that solves a (random) dense linear system in double precision
    +      (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable
    +      as well as freely available implementation of the High Performance Computing Linpack Benchmark.
    +
    +
    +      More information
    +      ================
    +       - Homepage: http://www.netlib.org/benchmark/hpl/
    +
    +$> module show tools/HPL
    +---------------------------------------------------------------------------------------------------
    +   /home/users/svarrette/.local/easybuild/modules/all/tools/HPL/2.2-intel-2017a.lua:
    +---------------------------------------------------------------------------------------------------
    +help([[
    +Description
    +===========
    +HPL is a software package that solves a (random) dense linear system in double precision
    +(64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable
    +as well as freely available implementation of the High Performance Computing Linpack Benchmark.
    +
    +
    +More information
    +================
    + - Homepage: http://www.netlib.org/benchmark/hpl/
    +]])
    +whatis("Description: HPL is a software package that solves a (random) dense linear system in
    + double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded
    + as a portable as well as freely available implementation of the High Performance Computing
    + Linpack Benchmark.")
    +whatis("Homepage: http://www.netlib.org/benchmark/hpl/")
    +conflict("tools/HPL")
    +load("toolchain/intel/2017a")
    +prepend_path("PATH","/home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a/bin")
    +setenv("EBROOTHPL","/home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a")
    +setenv("EBVERSIONHPL","2.2")
    +setenv("EBDEVELHPL","/home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a/easybuild/tools-HPL-2.2-intel-2017a-easybuild-devel")
    +
    + +

Note: to see the (locally) installed software, the MODULEPATH variable should include the $HOME/.local/easybuild/modules/all/ path (i.e. $LOCAL_MODULES), which is what happens when using module use <path> -- see the mu command.

    +

    You can now load the freshly installed module like any other:

    +
    $> module load tools/HPL
    +$> module list
    +
    +Currently Loaded Modules:
    +  1) tools/EasyBuild/3.6.1                          7) mpi/impi/2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27
    +  2) compiler/GCCcore/6.3.0                         8) toolchain/iimpi/2017a
    +  3) tools/binutils/2.27-GCCcore-6.3.0              9) numlib/imkl/2017.1.132-iimpi-2017a
    +  4) compiler/icc/2017.1.132-GCC-6.3.0-2.27        10) toolchain/intel/2017a
    +  5) compiler/ifort/2017.1.132-GCC-6.3.0-2.27      11) tools/HPL/2.2-intel-2017a
    +  6) toolchain/iccifort/2017.1.132-GCC-6.3.0-2.27
    +
    + +

    Tips: When you load a module <NAME> generated by Easybuild, it is installed within the directory reported by the $EBROOT<NAME> variable. +In the above case, you will find the generated binary for HPL in ${EBROOTHPL}/bin/xhpl.

    +

    You may want to test the newly built HPL benchmark (you need to reserve at least 4 cores for that to succeed):

    +
    # In another terminal, connect to the cluster frontend
    +# Have an interactive job
    +############### iris cluster (slurm) ###############
    +(access-iris)$> si -n 4        # this time reserve for 4 (mpi) tasks
    +$> mu
    +$> module load tools/HPL
    +$> cd $EBROOTHPL
    +$> ls
    +$> cd bin
    +$> ls
    +$> srun -n $SLURM_NTASKS ./xhpl
    +
    + +

    Running HPL benchmarks requires more attention -- a full tutorial is dedicated to it. +Yet you can see that we obtained HPL 2.2 without writing any EasyConfig file.

    +

    d. Build software using a customized EasyConfig file

    +

    There are multiple ways to amend an EasyConfig file. Check the --try-* option flags for all the possibilities.

    +

Generally you want to do that when the up-to-date version of the software you want is not available as a recipe within EasyBuild. +For instance, the very popular build system CMake has recently released a new version (3.11.3), which you want to give a try.

    +

It is not available as a module, so let's build it.

    +

First, let's check whether an EasyConfig recipe already exists for the expected version:

    +
    $> eb -S Cmake-3
    +[...]
    + * $CFGS2/CMake-3.9.1.eb
    + * $CFGS2/CMake-3.9.4-GCCcore-6.4.0.eb
    + * $CFGS2/CMake-3.9.5-GCCcore-6.4.0.eb
    +
    + +

We are going to reuse one of the latest EasyConfig files available, for instance let's copy $CFGS2/CMake-3.9.1.eb

    +
    # Work in a dedicated directory
    +$> mkdir -p ~/software/CMake
    +$> cd ~/software/CMake
    +
    +$> eb -S Cmake-3|less   # collect the definition of the CFGS2 variable
    +$> CFGS2=/home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/c/CMake
    +$> cp $CFGS2/CMake-3.9.1.eb .
+$> mv CMake-3.9.1.eb CMake-3.11.3.eb        # Adapt version suffix to the latest release
    +
    + +

    You need to perform the following changes (here: version upgrade, and adapted checksum)

    +
    --- CMake-3.9.1.eb      2018-06-08 10:56:24.447699000 +0200
    ++++ CMake-3.11.3.eb     2018-06-08 11:07:39.716672000 +0200
    +@@ -1,7 +1,7 @@
    + easyblock = 'ConfigureMake'
    +
    + name = 'CMake'
    +-version = '3.9.1'
    ++version = '3.11.3'
    +
    + homepage = 'http://www.cmake.org'
    + description = """CMake, the cross-platform, open-source build system.
    +@@ -11,7 +11,7 @@
    +
    + source_urls = ['http://www.cmake.org/files/v%(version_major_minor)s']
    + sources = [SOURCELOWER_TAR_GZ]
    +-checksums = ['d768ee83d217f91bb597b3ca2ac663da7a8603c97e1f1a5184bc01e0ad2b12bb']
    ++checksums = ['287135b6beb7ffc1ccd02707271080bbf14c21d80c067ae2c0040e5f3508c39a']
    +
    + configopts = '-- -DCMAKE_USE_OPENSSL=1'
    +
    + +

If the checksum is not provided on the official software page, you will need to compute it yourself by downloading the sources and collecting the checksum:

    +
    $> gsha256sum ~/Download/cmake-3.11.3.tar.gz
    +287135b6beb7ffc1ccd02707271080bbf14c21d80c067ae2c0040e5f3508c39a  cmake-3.11.3.tar.gz
    +
    + +
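On the cluster or any Linux machine the equivalent GNU tool is simply called sha256sum -- adapt the path to wherever you downloaded the archive:

sha256sum path/to/cmake-3.11.3.tar.gz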

    Let's build it:

    +
    $>  eb ./CMake-3.11.3.eb -Dr
    +== temporary log file in case of crash /tmp/eb-UX7APP/easybuild-gxnyIv.log
    +Dry run: printing build status of easyconfigs and dependencies
    +CFGS=/mnt/irisgpfs/users/<login>/software/CMake
    + * [ ] $CFGS/CMake-3.11.3.eb (module: devel/CMake/3.11.3)
    +== Temporary log file(s) /tmp/eb-UX7APP/easybuild-gxnyIv.log* have been removed.
    +== Temporary directory /tmp/eb-UX7APP has been removed.
    +
    + +

    Dependencies are fine, so let's build it:

    +
    $> time eb ./CMake-3.11.3.eb -r
    +== temporary log file in case of crash /tmp/eb-JjF92B/easybuild-RjzRjb.log
    +== resolving dependencies ...
    +== processing EasyBuild easyconfig /mnt/irisgpfs/users/<login>/software/CMake/CMake-3.11.3.eb
    +== building and installing devel/CMake/3.11.3...
    +== fetching files...
    +== creating build dir, resetting environment...
    +== unpacking...
    +== patching...
    +== preparing...
    +== configuring...
    +== building...
    +== testing...
    +== installing...
    +== taking care of extensions...
    +== postprocessing...
    +== sanity checking...
    +== cleaning up...
    +== creating module...
    +== permissions...
    +== packaging...
    +== COMPLETED: Installation ended successfully
    +== Results of the build can be found in the log file(s) /home/users/<login>/.local/easybuild/software/devel/CMake/3.11.3/easybuild/easybuild-CMake-3.11.3-20180608.111611.log
    +== Build succeeded for 1 out of 1
    +== Temporary log file(s) /tmp/eb-JjF92B/easybuild-RjzRjb.log* have been removed.
    +== Temporary directory /tmp/eb-JjF92B has been removed.
    +
    +real    7m40.358s
    +user    5m56.442s
    +sys 1m15.185s
    +
    + +

Note that you can follow the progress of the installation in a separate shell on the node:

    +
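A sketch of how to do so, using the temporary log file whose exact /tmp/eb-* path eb prints at the very beginning of the build:

tail -f /tmp/eb-*/easybuild-*.log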

    Check the result:

    +
    $> module av CMake
    +
    + +

    That's all ;-)

    +

Final remarks

    +

This workflow (copying an existing recipe, adapting the filename, the version and the source checksum) covers most of the use cases. +Yet sometimes you need to work on a more complex dependency chain, in which case you'll need to adapt many eb files. +In this case, for each build, you need to instruct EasyBuild to also search for easyconfigs in the current directory, in which case you will use:

    +
    $> eb <filename>.eb --robot=$PWD:$EASYBUILD_ROBOT -D
    +$> eb <filename>.eb --robot=$PWD:$EASYBUILD_ROBOT
    +
    + +
    +

    (OLD) Build software using your own EasyConfig file

    +

Below are obsolete instructions for writing a full EasyConfig file from scratch, left for archival and informational purposes.

    +

    For this example, we create an EasyConfig file to build GZip 1.4 with the GOOLF toolchain. +Open your favorite editor and create a file named gzip-1.4-goolf-1.4.10.eb with the following content:

    +
    easyblock = 'ConfigureMake'
    +
    +name = 'gzip'
    +version = '1.4'
    +
    +homepage = 'http://www.gnu.org/software/gzip/'
    +description = "gzip (GNU zip) is a popular data compression program as a replacement for compress"
    +
    +# use the GOOLF toolchain
    +toolchain = {'name': 'goolf', 'version': '1.4.10'}
    +
    +# specify that GCC compiler should be used to build gzip
    +preconfigopts = "CC='gcc'"
    +
    +# source tarball filename
    +sources = ['%s-%s.tar.gz'%(name,version)]
    +
    +# download location for source files
    +source_urls = ['http://ftpmirror.gnu.org/gzip']
    +
    +# make sure the gzip and gunzip binaries are available after installation
    +sanity_check_paths = {
    +                      'files': ["bin/gunzip", "bin/gzip"],
    +                      'dirs': []
    +                     }
    +
    +# run 'gzip -h' and 'gzip --version' after installation
    +sanity_check_commands = [True, ('gzip', '--version')]
    +
    + + +

This is a simple EasyConfig. Most of the fields are self-descriptive. No build method is explicitly defined, so it uses the standard configure / make / make install approach by default.

    +

    Let's build GZip with this EasyConfig file:

    +
    $> time eb gzip-1.4-goolf-1.4.10.eb
    +
    +== temporary log file in case of crash /tmp/eb-hiyyN1/easybuild-ynLsHC.log
    +== processing EasyBuild easyconfig /mnt/nfs/users/homedirs/mschmitt/gzip-1.4-goolf-1.4.10.eb
    +== building and installing base/gzip/1.4-goolf-1.4.10...
    +== fetching files...
    +== creating build dir, resetting environment...
    +== unpacking...
    +== patching...
    +== preparing...
    +== configuring...
    +== building...
    +== testing...
    +== installing...
    +== taking care of extensions...
    +== packaging...
    +== postprocessing...
    +== sanity checking...
    +== cleaning up...
    +== creating module...
    +== COMPLETED: Installation ended successfully
    +== Results of the build can be found in the log file /home/users/mschmitt/.local/easybuild/software/base/gzip/1.4-goolf-1.4.10/easybuild/easybuild-gzip-1.4-20150624.114745.log
    +== Build succeeded for 1 out of 1
    +== temporary log file(s) /tmp/eb-hiyyN1/easybuild-ynLsHC.log* have been removed.
    +== temporary directory /tmp/eb-hiyyN1 has been removed.
    +
    +real    1m39.982s
    +user    0m52.743s
    +sys     0m11.297s
    +
    + + +

    We can now check that our version of GZip is available via the modules:

    +
    $> module avail gzip
    +
    +--------- /mnt/nfs/users/homedirs/mschmitt/.local/easybuild/modules/all ---------
    +    base/gzip/1.4-goolf-1.4.10
    +
    + + +

    To go further into details

    +

Please refer to the official EasyBuild documentation for additional features.


    Spack


    Intel Advisor

    +

    +Intel Advisor provides two workflows to help ensure that Fortran, C, and C++ +applications can make the most of modern Intel processors. Advisor contains +three key capabilities:

    +
      +
• Vectorization Advisor identifies loops that will benefit most from vectorization, specifies what is blocking effective vectorization, finds the benefit of alternative data reorganizations, and increases the confidence that vectorization is safe.
• Threading Advisor is used for threading design and prototyping and to analyze, design, tune, and check threading design options without disrupting normal code development.
• Advisor Roofline enables visualization of actual performance against hardware-imposed performance ceilings (rooflines) such as memory bandwidth and compute capacity, which provide an ideal roadmap of potential optimization steps.
    +

For detailed information on how to use each of these capabilities, and on Intel Advisor in general, see the official Intel Advisor documentation.

    +

Environment modules for Advisor on UL-HPC

    +
    module purge 
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load perf/Advisor/2019_update4
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
    + +

    Interactive mode

    +

    # Compilation
    +$ icc -qopenmp example.c
    +
    +# Code execution
    +$ export OMP_NUM_THREADS=16
    +$ advixe-cl -collect survey -project-dir my_result -- ./a.out
    +
    +# Report collection
    +$ advixe-cl -report survey -project-dir my_result
    +
    +# To see the result in GUI
    +$ advixe-gui my_result
    +
    +VTune OpenMP result

    +

$ advixe-cl (without arguments) will list the analysis types, and $ advixe-cl -help report will list the available reports in Advisor.

    +

    Batch mode

    +

    Shared memory programming model (OpenMP)

    +

    Example for the batch script: +

    #!/bin/bash -l
    +#SBATCH -J Advisor
    +#SBATCH -N 1
    +###SBATCH -A <project_name>
    +#SBATCH -c 28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
    +module purge 
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load perf/Advisor/2019_update4
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
    +export OMP_NUM_THREADS=16
    +advixe-cl -collect survey -project-dir my_result -- ./a.out
    +

    +

    Distributed memory programming model (MPI)

    +

To compile a pure MPI application, run $ mpiicc example.c; for MPI+OpenMP, run $ mpiicc -qopenmp example.c

    +

    Example for the batch script: +

    #!/bin/bash -l
    +#SBATCH -J Advisor
    +#SBATCH -N 2
    +###SBATCH -A <project_name>
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
    +module purge 
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load perf/Advisor/2019_update4
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
    +srun -n ${SLURM_NTASKS} advixe-cl --collect survey --project-dir result -- ./a.out
    +
+To collect the result and view it in the GUI, use the commands below: +
    # Report collection
    +$ advixe-cl --report survey --project-dir result
    +
    +# Result visualization 
    +$ advixe-gui result
    +
+The figure below shows the hybrid (MPI+OpenMP) analysis results:

    +

    VTune MPI result

    +
    +

    Tip

    +

If you find some issues with the instructions above, please report them to us by opening a support ticket.


    Application Performance Snapshot (APS)

    +

    +Application Performance Snapshot (APS) is a lightweight open source profiling +tool developed by the Intel VTune developers. +Use Application Performance Snapshot for a quick view into a shared memory or +MPI application's use of available hardware (CPU, FPU, and memory). Application +Performance Snapshot analyzes your application's time spent in MPI, MPI and +OpenMP imbalance, memory access efficiency, FPU usage, and I/O and memory +footprint. After analysis, it displays basic performance enhancement +opportunities for systems using Intel platforms. Use this tool as a first step +in application performance analysis to get a simple snapshot of key +optimization areas and learn about profiling tools that specialize in +particular aspects of application performance.

    +

    Prerequisites

    +
    Optional Configuration

    Optional: Use the following software to get an advanced metric set when +running Application Performance Snapshot:

    +
      +
• Recommended compilers: Intel C/C++ or Fortran Compiler (other compilers can be used, but information about OpenMP imbalance is only available from the Intel OpenMP library)
• Use Intel MPI library version 2017 or later. Other MPICH-based MPI implementations can be used, but information about MPI imbalance is only available from the Intel MPI library. There is no support for OpenMPI.
    +

    Optional: Enable system-wide monitoring to reduce collection overhead and +collect memory bandwidth measurements. Use one of these options to enable +system-wide monitoring:

    + +
    +

    Before running the tool, set up your environment appropriately:

    +
    module purge
    +module load swenv/default-env/v1.2-20191021-production
    +module load tools/VTune/2019_update4
    +module load toolchain/intel/2019a
    +
    + +

    Analyzing Shared Memory Applications

    +

    Run the following commands (interactive mode):

    +
    # Compilation
    +$ icc -qopenmp example.c
    +
    +# Code execution
    +aps --collection-mode=all -r report_output ./a.out
    +
    + +

aps -help lists the --collection-mode=<mode> options available in APS.

    +

    # To create a .html file
    +aps-report -g report_output
    +
    +# To open an APS results in the browser
    +firefox report_output_<postfix>.html
    +
+The figure below shows an example of the result as displayed in the browser:

    +

    APS OpenMP result

    +
    # To see the command line output
    +$ aps-report <result_dir>
    +
    + +

    Example for the batch script: +

    #!/bin/bash -l
    +#SBATCH -J APS
    +#SBATCH -N 1
    +###SBATCH -A <project_name>
    +#SBATCH -c 28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +#SBATCH --nodelist=node0xx
    +
    +module purge
    +module load swenv/default-env/v1.2-20191021-production
    +module load tools/VTune/2019_update4
    +module load toolchain/intel/2019a
    +
    +export OMP_NUM_THREADS=16
    +aps --collection-mode=all -r report_output ./a.out
    +

    +

    Analyzing MPI Applications

    +

To compile a pure MPI application, run $ mpiicc example.c; for MPI+OpenMP, run $ mpiicc -qopenmp example.c

    +

    Example for the batch script: +

    #!/bin/bash -l
    +#SBATCH -J APS
    +#SBATCH -N 2
    +###SBATCH -A <project_name>
    +#SBATCH --ntasks-per-node=14
    +#SBATCH -c 2
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +#SBATCH --reservation=<name>
    +
    +module purge
    +module load swenv/default-env/v1.2-20191021-production
    +module load tools/VTune/2019_update4
    +module load toolchain/intel/2019a
    +
    +# To collect all the results
    +export MPS_STAT_LEVEL=${SLURM_CPUS_PER_TASK:-1}
    +# An option for the OpenMP+MPI application
    +export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
    +srun -n ${SLURM_NTASKS} aps --collection-mode=mpi -r result_output ./a.out
    +

    +

The figure below shows the hybrid (MPI+OpenMP) analysis results:

    +

    APS MPI result

    +

    Next Steps

    +
      +
    • Intel Trace Analyzer and Collector + is a graphical tool for understanding MPI application behavior, quickly + identifying bottlenecks, improving correctness, and achieving high + performance for parallel cluster applications running on Intel architecture. + Improve weak and strong scaling for applications. + Get started.
    • +
    • Intel VTune Amplifier + provides a deep insight into node-level performance including algorithmic + hotspot analysis, OpenMP threading, general exploration microarchitecture + analysis, memory access efficiency, and more. It supports C/C++, Fortran, + Java, Python, and profiling in containers. + Get started.
    • +
    • Intel Advisor provides + two tools to help ensure your Fortran, C, and C++ applications realize full + performance potential on modern processors. + Get started.
        +
      • Vectorization Advisor is an optimization tool to identify loops that will + benefit most from vectorization, analyze what is blocking effective + vectorization, and forecast the benefit of alternative data + reorganizations
      • +
      • Threading Advisor is a threading design and prototyping tool to analyze, + design, tune, and check threading design options without disrupting a + regular environment
      • +
      +
    • +
    +
    Quick Metrics Reference

    The following metrics are collected with Application Performance Snapshot. +Additional detail about each of these metrics is available in the +Intel VTune Amplifier online help.

    +

    Elapsed Time: Execution time of specified application in seconds.

    +

    SP GFLOPS: Number of single precision giga-floating point operations +calculated per second. All double operations are converted to two single +operations. SP GFLOPS metrics are only available for 3rd Generation Intel Core +processors, 5th Generation Intel processors, and 6th Generation Intel +processors.

    +

    Cycles per Instruction Retired (CPI): The amount of time each executed +instruction took measured by cycles. A CPI of 1 is considered acceptable for +high performance computing (HPC) applications, but different application +domains will have varied expected values. The CPI value tends to be greater +when there is long-latency memory, floating-point, or SIMD operations, +non-retired instructions due to branch mispredictions, or instruction +starvation at the front end.

    +

    MPI Time: Average time per process spent in MPI calls. This metric does not +include the time spent in MPI_Finalize. High values could be caused by high +wait times inside the library, active communications, or sub-optimal settings +of the MPI library. The metric is available for MPICH-based MPIs.

    +

    MPI Imbalance: CPU time spent by ranks spinning in waits on communication +operations. A high value can be caused by application workload imbalance +between ranks, or non-optimal communication schema or MPI library settings. +This metric is available only for Intel MPI Library version 2017 and later.

    +

    OpenMP Imbalance: Percentage of elapsed time that your application wastes +at OpenMP synchronization barriers because of load imbalance. This metric is +only available for the Intel OpenMP Runtime Library.

    +

    CPU Utilization: Estimate of the utilization of all logical CPU cores on +the system by your application. Use this metric to help evaluate the parallel +efficiency of your application. A utilization of 100% means that your +application keeps all of the logical CPU cores busy for the entire time that it +runs. Note that the metric does not distinguish between useful application work +and the time that is spent in parallel runtimes.

    +

    Memory Stalls: Indicates how memory subsystem issues affect application +performance. This metric measures a fraction of slots where pipeline could be +stalled due to demand load or store instructions. If the metric value is high, +review the Cache and DRAM Stalls and the percent of remote accesses metrics to +understand the nature of memory-related performance bottlenecks. If the average +memory bandwidth numbers are close to the system bandwidth limit, optimization +techniques for memory bound applications may be required to avoid memory +stalls.

    +

    FPU Utilization: The effective FPU usage while the application was running. +Use the FPU Utilization value to evaluate the vector efficiency of your +application. The value is calculated by estimating the percentage of operations +that are performed by the FPU. A value of 100% means that the FPU is fully +loaded. Any value over 50% requires additional analysis. FPU metrics are only +available for 3rd Generation Intel Core processors, 5th Generation Intel +processors, and 6th Generation Intel processors.

    +

    I/O Operations: The time spent by the application while reading data from +the disk or writing data to the disk. Read and Write values denote mean +and maximum amounts of data read and written during the elapsed time. This +metric is only available for MPI applications.

    +

    Memory Footprint: Average per-rank and per-node consumption of both virtual +and resident memory.

    +
    +

    Documentation and Resources

    + +
    +

    Tip

    +

If you find some issues with the instructions above, please report them to us by opening a support ticket.


    Arm Forge

    + +

    +Arm Forge is +the leading server and HPC development tool suite in research, +industry, and academia for C, C++, Fortran, and Python high performance code on Linux.
    +Arm Forge includes Arm DDT, the best debugger for time-saving high performance application +debugging, Arm MAP, the trusted performance profiler for invaluable optimization advice, +and Arm Performance Reports to help you analyze your HPC application runs.

    +

Environment modules for Arm Forge on ULHPC

    +
    module purge
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/ArmForge/19.1
    +module load tools/ArmReports/19.1
    +
    + +

    Interactive Mode

    +

    To compile +

    $ icc -qopenmp example.c
    +
    +For debugging, profiling and analysing +
    # for debugging
+$ ddt ./a.out
+
+# for profiling
+$ map ./a.out
+
+# for analysis
+$ perf-report ./a.out
    +

    +

    Batch Mode

    +

    Shared memory programming model (OpenMP)

    +

    Example for the batch script:

    +
    #!/bin/bash -l
    +#SBATCH -J ArmForge
    +#SBATCH -N 1
    +###SBATCH -A <project_name>
    +#SBATCH -c 16
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
    +module purge
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/ArmForge/19.1
    +module load tools/ArmReports/19.1
    +
    +export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
    +
    +# for debugging
+ddt ./a.out
+
+# for profiling
+map ./a.out
+
+# for analysis
+perf-report ./a.out
    +
    + +
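Note that ddt and map normally open a graphical interface, which is not convenient inside a batch job. Arm Forge also provides non-interactive modes; the lines below are only a sketch based on the generic Arm Forge options (--offline for DDT, --profile for MAP) and should be checked against the version loaded on the cluster:

# write a debugging report to a file instead of opening the GUI
ddt --offline -o ddt_report.html ./a.out

# write the profile to a .map file, to be opened later in the MAP GUI
map --profile ./a.out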

    Distributed memory programming model (MPI)

    +

    Example for the batch script:

    +

    #!/bin/bash -l
    +#SBATCH -J ArmForge
    +###SBATCH -A <project_name>
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node 28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
    +module purge
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/ArmForge/19.1
    +module load tools/ArmReports/19.1
    +
    +# for debugging
+ddt srun -n ${SLURM_NTASKS} ./a.out
+
+# for profiling
+map srun -n ${SLURM_NTASKS} ./a.out
+
+# for analysis
+perf-report srun -n ${SLURM_NTASKS} ./a.out
    +
    +To see the result +ArmForge report

    +
    +

    Tip

    +

If you find some issues with the instructions above, please report them to us by opening a support ticket.


    Intel Inspector

    +

    +Intel Inspector is a memory and threading error checking tool for users +developing serial and multithreaded applications on Windows and Linux operating +systems. The essential features of Intel Inspector for Linux are:

    +
      +
• Standalone GUI and command-line environments
• Preset analysis configurations (with some configurable settings) and the ability to create custom analysis configurations to help the user control analysis scope and cost
• Interactive debugging capability so one can investigate problems more deeply during the analysis
• A large number of reported memory errors, including on-demand memory leak detection
• Memory growth measurement to help ensure that the application uses no more memory than expected
• Data race, deadlock, lock hierarchy violation, and cross-thread stack access error detection
    +

    Options for the Collect Action

• mi1: Detect memory leaks
• mi2: Detect memory problems
• mi3: Locate memory problems
• ti1: Detect deadlocks
• ti2: Detect deadlocks and data races
• ti3: Locate deadlocks and data races
    +

    Options for the Report Action

• summary: A brief statement of the total number of new problems found, grouped by problem type
• problems: A detailed report of detected problem sets in the result, along with their location in the source code
• observations: A detailed report of all code locations used to form new problem sets
• status: A brief statement of the total number of detected problems and the number that are not investigated, grouped by category
    +

    For more information on Intel Inspector, please visit +https://software.intel.com/en-us/intel-inspector-xe.

    +

Environment modules for Inspector on UL-HPC

    +
    module purge 
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/Inspector/2019_update4
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
    + +

    Interactive Mode

    +

    To launch Inspector on Iris, we recommend that you use the command +line tool inspxe-cl to collect data via batch jobs and then display +results using the GUI, inspxe-gui, on a login node.

    +
    # Compilation
    +$ icc -qopenmp example.cc
    +
    +# Result collection
    +$ inspxe-cl -collect mi1 -result-dir mi1 -- ./a.out
    +
    +# Result view
    +$ cat inspxe-cl.txt
    +=== Start: [2020/04/08 02:11:50] ===
    +2 new problem(s) found
    +1 Memory leak problem(s) detected
    +1 Memory not deallocated problem(s) detected
    +=== End: [2020/04/08 02:11:55] ===
    +
    + +
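The other report types listed in the table above can be generated from the same result directory. For example, assuming the mi1 result directory created by the collection step (the exact option spelling may vary slightly between Inspector versions):

# detailed report of the detected problems, with source locations
inspxe-cl -report problems -result-dir mi1

# brief summary of the detected problems
inspxe-cl -report summary -result-dir mi1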

    Batch Mode

    +

    Shared memory programming model (OpenMP)

    +

    Example for the batch script:

    +

    #!/bin/bash -l
    +#SBATCH -J Inspector
    +#SBATCH -N 1
    +###SBATCH -A <project_name>
    +#SBATCH -c 28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
    +module purge 
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/Inspector/2019_update4
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
+inspxe-cl -collect mi1 -result-dir mi1 -- ./a.out
    +
    +To see the result:

    +
    # Result view
    +$ cat inspxe-cl.txt
    +=== Start: [2020/04/08 02:11:50] ===
    +2 new problem(s) found
    +1 Memory leak problem(s) detected
    +1 Memory not deallocated problem(s) detected
    +=== End: [2020/04/08 02:11:55] ===
    +
    + +

    Distributed memory programming model (MPI)

    +

    To compile: +

    # Compilation
    +$ mpiicc -qopenmp example.cc
    +
    +Example for batch script: +
    #!/bin/bash -l
    +#SBATCH -J Inspector
    +#SBATCH -N 2
    +###SBATCH -A <project_name>
    +#SBATCH --ntasks-per-node 28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
    +module purge 
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/Inspector/2019_update4
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
+srun -n ${SLURM_NTASKS} inspxe-cl -collect ti2 -r result -- ./a.out
    +

    +

    To see result output: +

    $ cat inspxe-cl.txt
    +0 new problem(s) found
    +=== End: [2020/04/08 16:41:56] ===
    +=== End: [2020/04/08 16:41:56] ===
    +0 new problem(s) found
    +=== End: [2020/04/08 16:41:56] ===
    +

    +
    +

    Tip

    +

If you find some issues with the instructions above, please report them to us by opening a support ticket.


    Intel Trace Analyzer and Collector

    + +

+Intel Trace Analyzer and Collector (ITAC) is a pair of tools for analyzing MPI behavior in parallel applications. ITAC identifies MPI load imbalance and communication hotspots in order to help developers optimize MPI parallelization and minimize communication and synchronization in their applications. The Trace Collector is used through a command line interface, while the Trace Analyzer provides both a command line and a graphical user interface for analyzing the data produced by the Trace Collector.

    +

Environment modules for ITAC on ULHPC

    +
module purge
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/itac/2019.4.036
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
    + +

    Interactive mode

    +
    # Compilation
+$ icc -qopenmp -trace example.c
    +
    +# Code execution
    +$ export OMP_NUM_THREADS=16
    +$ -trace-collective ./a.out
    +
    +# Report collection
    +$ export VT_STATISTICS=ON
    +$ stftool tracefile.stf --print-statistics
    +
    + +

    Batch mode

    +

    Shared memory programming model (OpenMP)

    +

    Example for the batch script: +

    #!/bin/bash -l
    +#SBATCH -J ITAC
    +###SBATCH -A <project_name>
    +#SBATCH -N 1
    +#SBATCH -c 16
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
    +module purge
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/itac/2019.4.036
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
    +$ export OMP_NUM_THREADS=16
    +$ -trace-collective ./a.out
    +

    +

    To see the result +

    $ export VT_STATISTICS=ON
    +$ stftool tracefile.stf --print-statistics
    +

    +

    Distributed memory programming model (MPI)

    +

    To compile +

    $ mpiicc -trace example.c
    +
    +Example for the batch script: +
    #!/bin/bash -l
    +#SBATCH -J ITAC
    +###SBATCH -A <project_name>
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
    +module purge
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/itac/2019.4.036
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
    +srun -n ${SLURM_NTASKS} -trace-collective ./a.out
    +
+To collect the statistics and inspect the result, use the commands below: +
    $ export VT_STATISTICS=ON
    +$ stftool tracefile.stf --print-statistics
    +
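The trace can also be inspected graphically with the Trace Analyzer GUI that ships with ITAC, which is normally launched with the traceanalyzer command. The file name below is the generic placeholder used above; in practice the .stf file is named after the executable (for example a.out.stf):

traceanalyzer tracefile.stf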

    +

    ITAC Summary

    +

    ITAC profile

    +

    ITAC event time

    +
    +

    Tip

    +

If you find some issues with the instructions above, please report them to us by opening a support ticket.


    Scalasca

    + +

+Scalasca is a performance analysis tool that supports large-scale systems, including IBM Blue Gene and Cray XT, as well as smaller systems. Scalasca provides information about the communication and synchronization among the processes. This information helps with the performance analysis, optimization, and tuning of scientific codes. Scalasca supports the OpenMP, MPI, and hybrid programming models, and the analysis can be done using the GUI shown in the figure below.

    +

    Scalasca overview

    +

Environment modules for Scalasca on ULHPC

    +
module purge
    +module load swenv/default-env/v1.1-20180716-production
    +module load toolchain/foss/2018a
    +module load perf/Scalasca/2.3.1-foss-2018a
    +module load perf/Score-P/3.1-foss-2018a
    +
    + +

    Interactive Mode

    +

    Work flow: +

    # instrument
    +$ scorep mpicxx example.cc
    +
    +# analyze
    +scalasca -analyze mpirun -n 28 ./a.out
    +
    +# examine
    +$ scalasca -examine -s scorep_a_28_sum
    +INFO: Post-processing runtime summarization report...
    +INFO: Score report written to ./scorep_a_28_sum/scorep.score
    +
    +# graphical visualization
    +$ scalasca -examine result_folder
    +
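For a pure OpenMP code, such as the one used in the batch example below, the application also has to be instrumented with the Score-P compiler wrapper before the analysis. A minimal sketch, assuming a C example built with GCC from the loaded foss toolchain:

# instrument an OpenMP code
scorep gcc -fopenmp example.c

The resulting a.out is then run under scalasca -analyze exactly as shown in the batch scripts.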

    +

    Batch mode

    +

    Shared memory programming (OpenMP)

    +

    #!/bin/bash -l
    +#SBATCH -J Scalasca
    +###SBATCH -A <project_name>
    +#SBATCH -N 1
    +#SBATCH -c 16
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
+module purge
    +module load swenv/default-env/v1.1-20180716-production
    +module load toolchain/foss/2018a
    +module load perf/Scalasca/2.3.1-foss-2018a
    +module load perf/Score-P/3.1-foss-2018a
    +
    +export OMP_NUM_THREADS=16
    +
    +# analyze
    +scalasca -analyze ./a.out
    +
    +Report collection and visualization +
    # examine
    +$ scalasca -examine -s scorep_a_28_sum
    +INFO: Post-processing runtime summarization report...
    +INFO: Score report written to ./scorep_a_28_sum/scorep.score
    +
    +# graphical visualization
    +$ scalasca -examine result_folder
    +

    +

    Distributed memory programming (MPI)

    +
    #!/bin/bash -l
    +#SBATCH -J Scalasca
    +###SBATCH -A <project_name>
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
+module purge
    +module load swenv/default-env/v1.1-20180716-production
    +module load toolchain/foss/2018a
    +module load perf/Scalasca/2.3.1-foss-2018a
    +module load perf/Score-P/3.1-foss-2018a
    +
    +scalasca -analyze srun -n ${SLURM_NTASKS} ./a.out
    +
    + +
    +

    Tip

    +

If you find some issues with the instructions above, please report them to us by opening a support ticket.


    Valgrind

    + +

    +The Valgrind tool suite provides a number of debugging and profiling +tools that help you make your programs faster and more correct. The +most popular of these tools is called Memcheck which can detect +many memory-related errors and memory leaks.

    +

    Prepare Your Program

    +

    Compile your program with -g to include debugging information so +that Memcheck's error messages include exact line numbers. Using +-O0 is also a good idea, if you can tolerate the slowdown. With -O1 +line numbers in error messages can be inaccurate, although generally +speaking running Memcheck on code compiled at -O1 works fairly well, +and the speed improvement compared to running -O0 is quite significant. +Use of -O2 and above is not recommended as Memcheck occasionally +reports uninitialised-value errors which don't really exist.

    +

Environment modules for Valgrind on ULHPC

    +
    $ module purge
    +$ module load debugger/Valgrind/3.15.0-intel-2019a
    +
    + +

    Interactive mode

    +

    Example code: +

    #include <iostream>                                                                                           
    +using namespace std;                                                                                          
    +int main()                                                                                                    
    +{                                                                                                             
    +  const int SIZE = 1000;                                                                                      
    +  int *array = new int(SIZE);                                                                                 
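+  // NB: 'new int(SIZE)' allocates a single int initialized to SIZE, not an array of SIZE ints;
+  // the loop below therefore writes past this 4-byte block, which Memcheck reports as "Invalid write".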
    +
    +  for(int i=0; i<SIZE; i++)                                                                                   
    +    array[i] = i+1;                                                                                           
    +
    +  // delete[] array                                                                                           
    +
    +  return 0;                                                                                                   
    +}
    +

    +

    # Compilation
    +$ icc -g example.cc
    +
    +# Code execution
    +$ valgrind --leak-check=full --show-leak-kinds=all ./a.out
    +
    +Result output (with leak)

    +

If we do not free the memory with delete[] array, there will be a memory leak. +

    ==26756== Memcheck, a memory error detector
    +==26756== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
    +==26756== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
    +==26756== Command: ./a.out
    +==26756== 
    +==26756== Invalid write of size 4
    +==26756==    at 0x401275: main (mem-leak.cc:10)
    +==26756==  Address 0x5309c84 is 0 bytes after a block of size 4 alloc'd
    +==26756==    at 0x402DBE9: operator new(unsigned long) (vg_replace_malloc.c:344)
    +==26756==    by 0x401265: main (mem-leak.cc:8)
    +==26756== 
    +==26756== 
    +==26756== HEAP SUMMARY:
    +==26756==     in use at exit: 4 bytes in 1 blocks
    +==26756==   total heap usage: 2 allocs, 1 frees, 72,708 bytes allocated
    +==26756== 
    +==26756== 4 bytes in 1 blocks are definitely lost in loss record 1 of 1
    +==26756==    at 0x402DBE9: operator new(unsigned long) (vg_replace_malloc.c:344)
    +==26756==    by 0x401265: main (mem-leak.cc:8)
    +==26756== 
    +==26756== LEAK SUMMARY:
    +==26756==    definitely lost: 4 bytes in 1 blocks
    +==26756==    indirectly lost: 0 bytes in 0 blocks
    +==26756==      possibly lost: 0 bytes in 0 blocks
    +==26756==    still reachable: 0 bytes in 0 blocks
    +==26756==         suppressed: 0 bytes in 0 blocks
    +==26756== 
    +==26756== For lists of detected and suppressed errors, rerun with: -s
    +==26756== ERROR SUMMARY: 1000 errors from 2 contexts (suppressed: 0 from 0)
    +

    +

    Result output (without leak)

    +

When we free the allocated memory with delete[] array, the leak reported above should disappear. +

    ==26172== Memcheck, a memory error detector
    +==26172== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
    +==26172== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
    +==26172== Command: ./a.out
    +==26172== 
    +==26172== 
    +==26172== HEAP SUMMARY:
    +==26172==     in use at exit: 4 bytes in 1 blocks
    +==26172==   total heap usage: 2 allocs, 1 frees, 72,708 bytes allocated
    +==26172== 
    +==26172== 4 bytes in 1 blocks are definitely lost in loss record 1 of 1
    +==26172==    at 0x402DBE9: operator new(unsigned long) (vg_replace_malloc.c:344)
    +==26172==    by 0x401283: main (in /mnt/irisgpfs/users/ekrishnasamy/BPG/Valgrind/a.out)
    +==26172== 
    +==26172== LEAK SUMMARY:
    +==26172==    definitely lost: 4 bytes in 1 blocks
    +==26172==    indirectly lost: 0 bytes in 0 blocks
    +==26172==      possibly lost: 0 bytes in 0 blocks
    +==26172==    still reachable: 0 bytes in 0 blocks
    +==26172==         suppressed: 0 bytes in 0 blocks
    +==26172== 
    +==26172== For lists of detected and suppressed errors, rerun with: -s
    +==26172== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
    +

    +

    Additional information

    +

    This page is based on the "Valgrind Quick Start Page". For more +information about valgrind, please refer to +http://valgrind.org/.

    +
    +

    Tip

    +

If you find some issues with the instructions above, please report them to us by opening a support ticket.


    VTune

    +

    +Use Intel VTune Profiler to profile serial and multithreaded applications that are executed on a variety of hardware platforms (CPU, GPU, FPGA). The tool is delivered as a Performance Profiler with Intel Performance Snapshots and supports local and remote target analysis on the Windows, Linux, and Android* platforms. +Without the right data, you’re guessing about how to improve software performance and are unlikely to make the most effective improvements. +Intel® VTune™ Profiler collects key profiling data and presents it with a powerful interface that simplifies its analysis and interpretation.

    +

Environment modules for VTune on ULHPC:

    +
    module purge 
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/VTune/2019_update4
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
    + +

    Interactive Mode

    +

    # Compilation
    +$ icc -qopenmp example.c
    +
    +# Code execution
    +$ export OMP_NUM_THREADS=16
    +$ amplxe-cl -collect hotspots -r my_result ./a.out
    +
    +To see the result in GUI $ amplxe-gui my_result

    +

    VTune OpenMP result

    +

$ amplxe-cl (without arguments) will list the analysis types, and $ amplxe-cl -help report will list the available reports in VTune.
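For example, a command-line summary of the hotspots collection above can be printed without opening the GUI (assuming the my_result directory created earlier):

amplxe-cl -report hotspots -r my_result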

    +

    Batch Mode

    +

    Shared Memory Programming Model (OpenMP)

    +
    #!/bin/bash -l
    +#SBATCH -J VTune
    +###SBATCH -A <project_name>
    +#SBATCH -N 1
    +#SBATCH -c 28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
    +module purge 
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/VTune/2019_update4
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
    +export OMP_NUM_THREADS=16
+amplxe-cl -collect hotspots -r my_result ./a.out
    +
    + +

    Distributed Memory Programming Model

    +

To compile a pure MPI application, run $ mpiicc example.c; for MPI+OpenMP, run $ mpiicc -qopenmp example.c

    +
    #!/bin/bash -l
    +#SBATCH -J VTune
    +###SBATCH -A <project_name>
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
    +module purge 
    +module load swenv/default-env/v1.2-20191021-production
    +module load toolchain/intel/2019a
    +module load tools/VTune/2019_update4
    +module load vis/GTK+/3.24.8-GCCcore-8.2.0
    +
    +srun -n ${SLURM_NTASKS} amplxe-cl -collect uarch-exploration -r vtune_mpi -- ./a.out
    +
    + +

    # Report collection
    +$ amplxe-cl -report uarch-exploration -report-output output -r vtune_mpi
    +
    +# Result visualization 
    +$ amplxe-gui vtune_mpi
    +
+The figure below shows the hybrid (MPI+OpenMP) analysis results:

    +

    VTune MPI result

    +
    +

    Tip

    +

If you find some issues with the instructions above, please report them to us by opening a support ticket.


    Self management of work environments in UL HPC with Conda

    + + +

Packages provided through the standard channels of modules and containers are optimized for the ULHPC clusters to ensure their performance and stability. However, many packages for which performance is not critical and which are used by only a few users are not provided through the standard channels. These packages can still be installed locally by the users through an environment management system such as Conda.

    +
    +

    Contact the ULHPC before installing any software with Conda

    +

    Prefer binaries provided through modules or containers. Conda installs generic binaries that may be suboptimal for the configuration of the ULHPC clusters. Furthermore, installing packages locally with Conda consumes quotas in your or your project's account in terms of storage space and number of files.

    +

    Contact the ULHPC High Level Support Team in the service portal [Home > Research > HPC > Software environment > Request expertise] to discuss possible options before installing any software.

    +
    +

Conda is an open source environment and package management system. With Conda you can create independent environments, where you can install applications such as Python and R, together with any packages which will be used by these applications. The environments are independent, with the Conda package manager managing the binaries, resolving dependencies, and ensuring that packages used in multiple environments are stored only once. In a typical setting, each user has their own installation of Conda and a set of personal environments.

    + + +
    +

    TL;DR: install and use the Micromamba package manager.

    +
    +

    A brief introduction to Conda

    +

    A few concepts are necessary to start working with Conda. In brief, these are package managers which are the programs used to create and manage environments, channels which are the repositories that contain the packages from which environments are composed, and distributions which are methods for shipping package managers.

    +

    Package managers

    +

    Package managers are the programs that install and manage the Conda environments. There are multiple package managers, such as conda, mamba, and micromamba.

    +
    +

    The UL HPC centre supports the use of micromamba for the creation and management of personal Conda environments.

    +
    +

    Channels

    +

    Conda channels are the locations where packages are stored. There are also multiple channels, with some important channels being:

    +
      +
• defaults, the default channel,
• anaconda, a mirror of the default channel,
• bioconda, a distribution of bioinformatics software, and
• conda-forge, a community-led collection of recipes, build infrastructure, and distributions for the conda package manager.
    +

The most useful channel, which comes pre-configured in all distributions, is conda-forge. Channels are usually hosted on the official Anaconda page, but on some rare occasions custom channels may be used. For instance, the default channel is hosted independently of the official Anaconda page. Many channels also maintain web pages with documentation, both for their usage and for the packages they distribute.

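For reference, the channel configuration is stored in ~/.condarc. A minimal file selecting conda-forge only, which is what the Micromamba installer sets up by default (see the installation section below), looks like this:

channels:
  - conda-forge

Additional channels such as bioconda can be appended to the channels list if a project needs them.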

    Distributions

    +

Quite often, the package manager is not distributed on its own, but with a set of packages that are required for the package manager to work, or even with some additional packages that are required by most applications. For instance, the conda package manager is distributed with the Miniconda and Anaconda distributions. Miniconda contains the bare minimum packages for the conda package manager to work, and Anaconda contains multiple commonly used packages and a graphical user interface. The relation between these distributions and the package manager is depicted in the following diagram.

    +

    +

The situation is similar for the Mamba distributions. These distributions are supported by Conda-Forge, and their default installation options set up conda-forge as the default and only channel during installation. The defaults channel or its mirror anaconda must be explicitly added if required. The distribution using the Mamba package manager was originally distributed as Mambaforge and was recently renamed to Miniforge. Miniforge comes with a minimal set of Python packages required by the Mamba package manager. The distribution using the Micromamba package manager ships no accompanying packages, as Micromamba is a standalone executable with no dependencies. Micromamba uses libmamba, a C++ library implementing the Conda API.

    +

    The Micromamba package manager

    +

    +

The Micromamba package manager is a minimal yet fairly complete implementation of the Conda interface in C++, shipped as a standalone executable. The package manager operates strictly in user space, so no special permissions are required to install packages. It maintains all its files in a couple of places, so uninstalling the package manager itself is also easy. Finally, the package manager is lightweight and fast.

    +
    +

    UL HPC provides support only for the Micromamba package manager.

    +
    +

    Installation

    +

A complete guide regarding Micromamba installation can be found in the official documentation. To install micromamba on the HPC clusters, log in to Aion or Iris. Working on a login node, run the installation script, +

    "${SHELL}" <(curl -L micro.mamba.pm/install.sh)
    +
+which will install the executable and set up the environment. There are 4 options to select during the installation of Micromamba:

    +
      +
    • The directory for the installation of the binary file: +
      Micromamba binary folder? [~/.local/bin]
      +
      + Leave empty and press enter to select the default displayed within brackets. Your .bashrc script should include ~/.local/bin in the $PATH by default.
    • +
• The option to add autocompletion for micromamba to the environment: +
      Init shell (bash)? [Y/n]
      +
+ Press enter to select the default option Y. This will append a clearly marked section to your .bashrc file. Do not forget to remove this section when uninstalling Micromamba.
    • +
    • The option to configure the channels by adding conda-forge: +
      Configure conda-forge? [Y/n]
      +
+ Press enter to select the default option Y. This will set up the ~/.condarc file with conda-forge as the default channel. Note that, unlike conda, Mamba and Micromamba will not use the defaults channel if it is not present in ~/.condarc.
    • +
    • The option to select the directory where environment information and packages will be stored: +
      Prefix location? [~/micromamba]
      +
      + Press enter to select the default option displayed within brackets.
    • +
    +

To set up the environment, log out and log in again. Now you can use micromamba, including the auto-completion feature.
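You can quickly verify that the installation succeeded, for instance with:

micromamba --version   # print the installed version
micromamba info        # show information about the installation (base prefix, environments, channels, ...)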

    +

    Managing environments

    +

    As an example, the creation and use of an environment for R jobs is presented. The command, +

    micromamba create --name R-project
    +
    +creates an environment named R-project. The environment is activated with the command +
    micromamba activate R-project
    +
    +anywhere in the file system.

    +

Next, install the base R environment package that contains the R program, and any R packages required by the project. To install packages, first ensure that the R-project environment is active, and then install any required package with the command +

    micromamba install <package_name>
    +
+Quite often, the channel name must also be specified: +
micromamba install --channel <channel_name> <package_name>
    +
    +Packages can be found by searching the conda-forge channel.

    +

    For instance, the basic functionality of the R software environment is contained in the r-base package. Calling +

    micromamba install --channel conda-forge r-base
    +
    +will install all the components required to run standalone R scripts. More involved scripts use functionality defined in various packages. The R packages are prepended with a prefix 'r-'. Thus, plm becomes r-plm and so on. After all the required packages have been installed, the environment is ready for use.
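Several packages can be installed in a single call. For instance, a hypothetical R environment for panel data analysis and plotting could be populated with:

micromamba install --channel conda-forge r-base r-plm r-ggplot2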

    +

Packages in the conda-forge channel come with instructions for their installation. Quite often the channel is specified in the installation instructions, -c conda-forge or --channel conda-forge. While the Micromamba installer sets up conda-forge as the default channel, later modifications to ~/.condarc may change the channel priority. Thus, it is good practice to explicitly specify the source channel when installing a package.

    +

    After work in an environment is complete, deactivate the environment, +

    micromamba deactivate
    +
    +to ensure that it does not interfere with any other operations. In contrast to modules, Conda is designed to operate with a single environment active at a time. Create one environment for each project, and Conda will ensure that any package that is shared between multiple environments is installed once.

    +

    Micromamba supports almost all the subcommands of Conda. For more details see the official documentation.

    +

    Using environments in submission scripts

    +

Since all computationally heavy operations must be performed on compute nodes, Conda environments are also used in jobs submitted to the queuing system. Returning to the R example, a submission script running a single core R job can use the R-project environment as follows: +

#!/bin/bash -l
#SBATCH --job-name R-test-job
    +#SBATCH --nodes 1
    +#SBATCH --ntasks-per-node 1
    +#SBATCH --cpus-per-task 1
    +#SBATCH --time=0-02:00:00
    +#SBATCH --partition batch
    +#SBATCH --qos normal
    +
    +echo "Launched at $(date)"
    +echo "Job ID: ${SLURM_JOBID}"
    +echo "Node list: ${SLURM_NODELIST}"
    +echo "Submit dir.: ${SLURM_SUBMIT_DIR}"
    +echo "Numb. of cores: ${SLURM_CPUS_PER_TASK}"
    +
    +micromamba activate R-project
    +
    +export OMP_NUM_THREADS=1
    +srun Rscript --no-save --no-restore script.R
    +
    +micromamba deactivate
    +

    +

    Useful scripting resources

    + +

    Cleaning up package data

    +

The Conda environment managers download and store a sizable amount of data to provide packages to the various environments. Even though the package data are shared between the various environments, they still consume space in your or your project's account. There are limits on the storage space and number of files available to projects and users on the cluster. Since Conda packages are self-managed, you need to clean unused data yourself.

    +

There are two main sources of unused data, the compressed archives of the packages that Conda stores in its cache when downloading a package, and the data of removed packages. All unused data in Micromamba can be removed with the command +

    micromamba clean --all
    +
+that opens up an interactive dialogue with details about the operations performed. You can follow the default options, unless you have manually edited any files in your package data directory (default location ${HOME}/micromamba).
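Before and after cleaning, you may want to check how much space the package data actually occupy, e.g.

du -sh ${HOME}/micromamba/pkgs    # size of the package cache
du -sh ${HOME}/micromamba/envs/*  # size of each environment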

    +
    Updating environments to remove old package versions

    As we create new environments, we often install the latest version of each package. However, if the environments are not updated regularly, we may end up with different versions of the same package across multiple environments. If we have the same version of a package installed in all environments, we can save space by removing unused older versions.

    +

    To update a package across all environments, use the command +

    for e in $(micromamba env list | awk 'FNR>2 {print $1}'); do micromamba update --name $e <package name>; done
    +
    +and to update all packages across all environments +
    for e in $(micromamba env list | awk 'FNR>2 {print $1}'); do micromamba update --name $e --all; done
    +
    +where FNR>2 removes the headers in the output of micromamba env list, and is thus sensitive to changes in the user interface of Micromamba.

    +

After updating packages, the clean command can be called to remove the data of unused older package versions.

    +
    +

    Sources

    + +

    Combining Conda with other package and environment management tools

    +

    It may be desirable to use Conda to manage environments but a different tool to manage packages, such as pip. Or subenvironments may need to be used inside a Conda environment, as for instance with tools for creating and managing isolated Python installation, such as virtualenv, or with tools for integrating managed Python installations and packages in project directories, such as Pipenv and Poetry.

    +

    Conda integrates well with any such tool. Some of the most frequent cases are described bellow.

    +

    Managing packages with external tools

    +

    Quite often a package that is required in an environment is not available through a Conda channel, but it is available through some other distribution channel, such as the Python Package Index (PyPI). In these cases the only solution is to create a Conda environment and install the required packages with pip from the Python Package Index.

    +

Using an external packaging tool is possible because of the method that Conda uses to install packages. Conda installs package versions in a central directory (e.g. ~/micromamba/pkgs). Any environment that requires a package links to the central directory with hard links. Links are added to the directory of any environment that requires them (e.g. ~/micromamba/envs/R-project for the R-project environment). When using an external package tool, package components are installed in the same directory where Conda would install the corresponding link. Thus, external package management tools integrate seamlessly with Conda, with a couple of caveats:

    +
      +
    • each package must be managed by one tool, otherwise package components will get overwritten, and
    • +
    • packages installed by the package tool are specific to an environment and cannot be shared as with Conda, since components are installed directly and not with links.
    • +
    +
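You can verify the hard-link mechanism described above yourself: a file inside an environment and its counterpart in the central package cache share the same inode. The paths below are placeholders to adapt to an actual package and environment:

# identical inode numbers (first column) confirm that no extra disk space is consumed
ls -i ${HOME}/micromamba/pkgs/<package>/<file>
ls -i ${HOME}/micromamba/envs/<environment>/<file>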
    +

    Prefer Conda over external package managers

    +

Installing the same package in multiple environments with an external package tool consumes quotas in terms of storage space and number of files, so prefer Conda when possible. This is particularly important for the inode limit, since some packages install a large number of files, whereas the hard links used by Conda consume no additional inodes or disk space.

    +
    +

    Pip

    +

    In this example pip is used to manage packages in a Conda environment with MkDocs related packages. To install the packages, create an environment +

    micromamba env create --name mkdocs
    +
    +activate the environment, +
    micromamba activate mkdocs
    +
    +and install pip +
    micromamba install --channel conda-forge pip
    +
    +which will be used to install the remaining packages.

    +

Pip will be the only package managed with Conda. For instance, to update pip, activate the environment, +

    micromamba activate mkdocs
    +
    +and run +
micromamba update --all
    +
+to update all installed packages (only pip in our case). All other packages are managed by pip.

    +

    For instance, assume that a mkdocs project requires the following packages:

    +
      +
    • mkdocs
    • +
    • mkdocs-minify-plugin
    • +
    +

The package mkdocs-minify-plugin is less popular and thus it is not available through a Conda channel, but it is available on PyPI. To install it, activate the mkdocs environment +

    micromamba activate mkdocs
    +
    +and install the required packages with pip +
    pip install --upgrade mkdocs mkdocs-minify-plugin
    +
    +inside the environment. The packages will be installed inside a directory that micromamba created for the Conda environment, for instance +
    ${HOME}/micromamba/envs/mkdocs
    +
+alongside packages installed by micromamba. As a result, 'system-wide' installations with pip inside a Conda environment do not interfere with system packages.

    +
    +

    Do not install packages in Conda environments with pip as a user

    +

User-installed packages (e.g. pip install --user --upgrade mkdocs-minify-plugin) are installed in the same directory for all environments, typically ~/.local/, and can interfere with other versions of the same package installed from other Conda environments.

    +
    +

    Pkg

    +

The Julia programming language provides its own package and environment manager, Pkg. Julia's package manager provides many useful capabilities, and its use with Julia projects is recommended. Details about the use of Pkg can be found in the official documentation.

    +

The Pkg package manager comes packaged with Julia. Start by creating an environment, +

micromamba env create --name julia
    +
    +activate the environment, +
    micromamba activate julia
    +
    +and install Julia, +
    micromamba install --channel conda-forge julia
    +
    +to start using Pkg.

    +

    In order to install a Julia package, activate the Julia environment, and start an interactive REPL session, +

    $ julia
    +julia>
    +
    +by just calling julia without any input files.

    +
      +
    • Enter the Pkg package manager by pressing ].
    • +
    • Exit the package manager by clearing all the input from the line with backspace, and then pressing backspace one more time.
    • +
    +

    In the package manager you can see the status of the current environment, +

    (@julia) pkg> status
    +Status `~/micromamba/envs/julia/share/julia/environments/julia/Project.toml` (empty project)
    +
    +add or remove packages, +
    (@julia) pkg> add Example
    +(@julia) pkg> remove Example
    +
    +update the packages in the environment, +
    (@julia) pkg> update
    +
    +and perform many other operations, such as exporting and importing environments from plain text files which describe the environment setup, and pinning packages to specific versions. The Pkg package manager maintains a global environment, but also supports the creation and use of local environments that are used within a project directory. The use of local environments is highly recommended, please read the documentation for more information.

    +

After installing the Julia language in a Conda environment, the language distribution itself should be managed with micromamba and all packages in global or local environments with the Pkg package manager. To update Julia, activate the Conda environment where Julia is installed and call +

    micromamba update julia
    +
+whereas to update packages installed with Pkg, use the update command of Pkg. The packages for local and global environments are stored in the Julia installation directory, typically +
    ${HOME}/micromamba/envs/julia/share
    +
    +if the default location for the Micromamba environment directory is used.

    +
    Advanced management of package data

Julia packages consume both storage space and file number quotas. Pkg uses automatic garbage collection to clean up packages that are no longer in use. In general you don't need to manage the package data yourself: simply remove the package, and its data will be deleted automatically after some time. However, when you exceed your quota you need to delete files immediately.

    +

    The immediate removal of the data of uninstalled packages can be forced with the command: +

    using Pkg
    +using Dates
    +Pkg.gc(;collect_delay=Dates.Day(0))
    +
+Make sure that the packages have been removed from all the environments that use them.

    +

    Sources: Immediate package data clean up

    +
    +

    Useful resources

    + +

    Combining Conda with external environment management tools

    +

Quite often it is required to create isolated environments using external tools. For instance, tools such as virtualenv can install and manage a Python distribution in a given directory and export and import environment descriptions from text files. This functionality allows, for instance, shipping a description of the Python environment as part of a project. Higher level tools such as pipenv automate the process by managing the Python environment as part of a project directory. The description of the environment is stored in version controlled files, and the Python packages are stored in a non-tracked directory within the project directory. Some holistic project management tools, such as poetry, further integrate the management of the Python environment within the project management workflow.

    +

Installing and using tools that create isolated environments inside Conda environments is relatively straightforward. Create an environment where only the required tool is installed, and manage any subenvironments using the installed tool.

    +
    +

    Create a different environment for each tool

    +

While this is not a requirement, it is good practice. For instance, pipenv and poetry used to have (and may still have) conflicting dependencies; Conda detects the conflict and aborts the conflicting installation.

    +
    +

    Pipenv

    +

    To demonstrate the usage of pipenv, create a Conda environment, +

micromamba env create --name pipenv
    +
    +activate it +
    micromamba activate pipenv
    +
    +and install the pipenv package +
    micromamba install --channel conda-forge pipenv
    +
+ as the only package in this environment. Now pipenv is managed with Conda; for instance, to update pipenv, activate the environment +
    micromamba activate pipenv
    +
    +and call +
    micromamba update --all
    +
    +to update the single installed package. Inside the environment use pipenv as usual to create and manage project environments.
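A minimal sketch of the usual pipenv workflow inside a project directory then looks as follows (my-project and script.py are hypothetical names):

cd my-project
pipenv install requests        # creates Pipfile/Pipfile.lock and a dedicated virtual environment
pipenv run python script.py    # run a command inside the project environment
pipenv shell                   # or spawn a shell with the project environment activated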

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/environment/easybuild/index.html b/environment/easybuild/index.html new file mode 100644 index 00000000..48d0bb03 --- /dev/null +++ b/environment/easybuild/index.html @@ -0,0 +1,2993 @@ + + + + + + + + + + + + + + + + + + + + + + + + Easybuild - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Easybuild

    + + +

    +

EasyBuild (EB for short) is a software build and installation framework that allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way. +A large number of scientific software packages are supported (at least 2175 since the 4.3.2 release) - see also What is EasyBuild?.

    +

For several years now, Easybuild has been used to manage the ULHPC User Software Set and to automatically generate the module files available to you on our computational resources in either prod (default) or devel (early development/testing) environment -- see ULHPC Toolchains and Software Set Versioning. +This enables users to easily extend the global Software Set with their own local software +builds, either performed within their global home +directory or (better) in a shared project +directory through Easybuild, which automatically generates module files compliant with the ULHPC module setup.

    + + +
Why use an automatic build tool like Easybuild or Spack in an HPC environment?

Well, that may seem obvious to some of you, but scientific software is often difficult to build. +Not all of it relies on standard build tools like Autotools/Automake (and the famous configure; make; make install) or CMake. +And even in that case, parsing the available options to ensure they match the hardware configuration of the computing resources used for the execution is time consuming and error-prone. +Most of the time, unfortunately, scientific software embeds hardcoded parameters and/or poor/outdated documentation with incomplete build procedures.

    +

In this context, software build and installation frameworks like Easybuild or Spack help to facilitate the build task in a consistent and automatic way, while also generating the LMod modulefiles.

    +

We selected Easybuild as the primary build tool to ensure the best optimized builds. +Some HPC sites use both -- see this talk from William Lucas at EPCC for instance.

    +

This does not prevent you from maintaining your own build instruction notes.

    +
    +

    Easybuild Concepts and terminology

    +

    Official Easybuild Tutorial

    +

    EasyBuild relies on two main concepts: Toolchains and EasyConfig files.

    +

A toolchain corresponds to a compiler and a set of libraries which are commonly used to build software. +The two main toolchains frequently used on the UL HPC platform are foss ("Free and Open Source Software") and intel.

    +
      +
    1. foss, based on the GCC compiler and on open-source libraries (OpenMPI, OpenBLAS, etc.).
    2. +
3. intel, based on the Intel compiler suite and on Intel libraries (Intel MPI, Intel Math Kernel Library, etc.).
    4. +
    +

An EasyConfig file is a simple text file that describes the build process of a software package. For most software that uses standard procedures (like configure, make and make install), this file is very simple. +Many EasyConfig files are already provided with EasyBuild.
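For instance, once EasyBuild is available in your session you can search for existing EasyConfig files and review what a build would do before launching it. A short sketch (the HPL EasyConfig name below is only an example and may differ in your EasyBuild version):

eb -S HPL                                     # search the known EasyConfig files matching 'HPL'
eb HPL-2.3-foss-2020b.eb --dry-run --robot    # show the build plan and missing dependencies without building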

    +

    ULHPC Easybuild Configuration

    +

    To build software with Easybuild compliant with the configuration in place on the ULHPC facility, you need to be aware of the following setup:

    +
      +
    • Modules tool ($EASYBUILD_MODULES_TOOL): Lmod (see docs)
    • +
• Module Naming Scheme (EASYBUILD_MODULE_NAMING_SCHEME): we use a special hierarchical organization where software is classified/categorized under a pre-defined class.
    • +
    +

    These variables are defined at the global profile level, under /etc/profile.d/ulhpc_resif.sh on the compute nodes as follows:

    +
    export EASYBUILD_MODULES_TOOL=Lmod
    +export EASYBUILD_MODULE_NAMING_SCHEME=CategorizedModuleNamingScheme
    +
    + +

All builds and installations are performed at user level, so you don't need admin (i.e. root) rights. +Another very important configuration variable is the Overall Easybuild prefix path $EASYBUILD_PREFIX which affects the default value of several configuration options:

    +
      +
    • built software are placed under ${EASYBUILD_PREFIX}/software/
    • +
    • modules install path: ${EASYBUILD_PREFIX}/modules/all (determined via Overall prefix path (--prefix), --subdir-modules and --suffix-modules-path)
    • +
    +

    You can thus extend the ULHPC Software set with your own local builds by setting appropriately the variable $EASYBUILD_PREFIX:

    +
      +
    • For installation in your home directory: export EASYBUILD_PREFIX=$HOME/.local/easybuild
    • +
    • For installation in a shared project directory <name>: export EASYBUILD_PREFIX=$PROJECTHOME/<name>/easybuild
    • +
    +
    +
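A minimal sketch of a local build in your home directory could then be (the EasyConfig file name is a placeholder to adapt):

export EASYBUILD_PREFIX=$HOME/.local/easybuild
module load tools/EasyBuild                 # EasyBuild provided by the ULHPC software set
eb <software>-<version>.eb -r               # build and install under ${EASYBUILD_PREFIX}/software/
module use ${EASYBUILD_PREFIX}/modules/all  # make the generated module files visible
module avail <software>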

Adapting your custom build to the cluster, the toolchain version and the architecture

    +

Just like the ULHPC software set (installed in +EASYBUILD_PREFIX=/opt/apps/resif/<cluster>/<version>/<arch>), +you may want to isolate your local builds to take into account +the cluster $ULHPC_CLUSTER ("iris" or "aion"), the +toolchain version <version> (Ex: 2019b, 2020b etc.) you build upon and +possibly the architecture <arch>. +In that case, you can use the following helper script: +

    resif-load-home-swset-prod
    +
    +which is roughly equivalent to the following code: +
    # EASYBUILD_PREFIX: [basedir]/<cluster>/<environment>/<arch>
    +# Ex: Default EASYBUILD_PREFIX in your home - Adapt to project directory if needed
    +_EB_PREFIX=$HOME/.local/easybuild
    +# ... eventually complemented with cluster
    +[ -n "${ULHPC_CLUSTER}" ] && _EB_PREFIX="${_EB_PREFIX}/${ULHPC_CLUSTER}"
    +# ... eventually complemented with software set version
    +_EB_PREFIX="${_EB_PREFIX}/${RESIF_VERSION_PROD}"
    +# ... eventually complemented with arch
    +[ -n "${RESIF_ARCH}" ] && _EB_PREFIX="${_EB_PREFIX}/${RESIF_ARCH}"
    +export EASYBUILD_PREFIX="${_EB_PREFIX}"
    +export LOCAL_MODULES=${EASYBUILD_PREFIX}/modules/all
    +

    +

For a shared project directory <name> located under $PROJECTHOME/<name>, you can use the following helper script: +

    resif-load-project-swset-prod $PROJECTHOME/<name>
    +

    +
    +
    +

    ACM PEARC'21: RESIF 3.0

    +

    For more details on the way we setup and deploy the User Software Environment on ULHPC systems through the RESIF 3 framework, see the ACM PEARC'21 conference paper presented on July 22, 2021.

    +
    +

    ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides | Github:
    +Sebastien Varrette, Emmanuel Kieffer, Frederic Pinel, Ezhilmathi Krishnasamy, Sarah Peter, Hyacinthe Cartiaux, and Xavier Besseron. 2021. RESIF 3.0: Toward a Flexible & Automated Management of User Software Environment on HPC facility. In Practice and Experience in Advanced Research Computing (PEARC '21). Association for Computing Machinery (ACM), New York, NY, USA, Article 33, 1–4. https://doi.org/10.1145/3437359.3465600

    +
    +
    +

    Installation / Update local Easybuild

    +

You can of course use the default EasyBuild that comes with the ULHPC software +set with module load tools/EasyBuild. +But as soon as you want to install your local builds, it is in your interest to +install the up-to-date release of EasyBuild in +your local $EASYBUILD_PREFIX.

    +

    For this purpose, you can follow the official instructions.
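One possible approach (assuming $EASYBUILD_PREFIX is already set as described above) is to bootstrap the latest release from the EasyBuild module provided on the clusters:

module load tools/EasyBuild
eb --install-latest-eb-release --prefix $EASYBUILD_PREFIX   # install the latest EasyBuild locally
module use ${EASYBUILD_PREFIX}/modules/all                  # expose your local modules
module avail EasyBuild                                      # the freshly installed version should now appear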

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/environment/images/Miniconda-vs-Anaconda.jpg b/environment/images/Miniconda-vs-Anaconda.jpg new file mode 100644 index 00000000..cf2025c3 Binary files /dev/null and b/environment/images/Miniconda-vs-Anaconda.jpg differ diff --git a/environment/images/ULHPC-software-stack.pdf b/environment/images/ULHPC-software-stack.pdf new file mode 100644 index 00000000..0f4efe8d Binary files /dev/null and b/environment/images/ULHPC-software-stack.pdf differ diff --git a/environment/images/ULHPC-software-stack.png b/environment/images/ULHPC-software-stack.png new file mode 100644 index 00000000..48163407 Binary files /dev/null and b/environment/images/ULHPC-software-stack.png differ diff --git a/environment/images/bash_startup.png b/environment/images/bash_startup.png new file mode 100644 index 00000000..dacd74f6 Binary files /dev/null and b/environment/images/bash_startup.png differ diff --git a/environment/index.html b/environment/index.html new file mode 100644 index 00000000..af1caf3d --- /dev/null +++ b/environment/index.html @@ -0,0 +1,3252 @@ + + + + + + + + + + + + + + + + + + + + + + + + Overview - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + + + +
    +
    + + + + + + + + + + +

    ULHPC User Environment

    + + +

    Your typical journey on the ULHPC facility is illustrated in the below figure.

    +

    + + +
    Typical workflow on UL HPC resources

Your daily interaction with the ULHPC facility includes the following +actions:

    +

    Preliminary setup

    +
      +
    1. Connect to the access/login servers
        +
      • This can be done either by ssh +(recommended) or via the ULHPC OOD portal
      • +
• (advanced users) at this point, you probably want to create (or +reattach to) a screen or tmux session
      • +
      +
    2. +
3. Synchronize your code and/or transfer your input +data, typically using rsync/svn/git +
    4. +
    5. Reserve a few interactive resources with salloc -p interactive [...]
        +
      • recall that the module command (used to load the ULHPC User + software) is only available on the compute + nodes
      • +
      • (eventually) build your program, typically using gcc/icc/mpicc/nvcc..
      • +
      • Test your workflow / HPC analysis on a small size problem (srun/python/sh...)
      • +
      • Prepare a launcher script <launcher>.{sh|py}
      • +
      +
    6. +
    +

    Then you can proceed with your Real Experiments:

    +
      +
    1. Reserve passive resources: sbatch [...] <launcher>
    2. +
    3. Grab the results and (eventually) transfer back your output + results using rsync/svn/git
    4. +
    +
    + + +

    For more information:

    + +
    +

    '-bash: module: command not found' on access/login servers

    +

Recall that by default, the module command is (on purpose) NOT available on the access/login servers. +You HAVE to be on a compute node (within a Slurm job).

    +
    +

    Home and Directories Layout

    +

All ULHPC systems use global home directories. +You also have access to several other pre-defined directories set up over several different File Systems which co-exist on the ULHPC facility and are configured for different purposes. They are listed below:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    DirectoryEnv.file systembackup
    /home/users/<login>$HOMEGPFS/Spectrumscaleno
    /work/projects/<name>-GPFS/Spectrumscaleyes (partial, backup subdirectory)
    /scratch/users/<login>$SCRATCHLustreno
    /mnt/isilon/projects/<name>-OneFSyes (live sync and snapshots)
    + + +

    Shell and Dotfiles

    +

    The default login shell is bash -- see /etc/shells for supported shells.

    +
    +

    ULHPC dotfiles vs. default dotfiles

    +

    The ULHPC team DOES NOT populate shell initialization files (also known as dotfiles) on users' home directories - the default system ones are used in your home -- you can check them in /etc/skel/.* on the access/login servers. +However, you may want to install the ULHPC/dotfiles available as a Github repository. See installation notes. +A working copy of that repository exists in /etc/dotfiles.d on the access/login servers. You can thus use it: +

    $ /etc/dotfiles.d/install.sh -h
    +# Example to install ULHPC GNU screen configuration file
    +$ /etc/dotfiles.d/install.sh -d /etc/dotfiles.d/ --screen -n   # Dry-run
    +$ /etc/dotfiles.d/install.sh -d /etc/dotfiles.d/ --screen      # real install
    +

    +
    +
    Changing Default Login Shell (or NOT)

If you want to change your default login shell, you should set that up using the ULHPC IPA portal (change the Login Shell attribute). +Note however that we STRONGLY discourage you from doing so. You may hit unexpected issues with system profile scripts expecting bash as the running shell.

    +
    +

    System Profile

    +

    /etc/profile contains Linux system wide environment and startup programs. +Specific scripts are set to improve your ULHPC experience, in particular those set in the ULHPC/tools repository, for instance:

    + +

    Customizing Shell Environment

    +

    You can create dotfiles (e.g., .bashrc, .bash_profile, or +.profile, etc) in your $HOME directory to put your personal shell +modifications.

    +
    Custom Bash Initialisation Files

On ULHPC systems, ~/.bash_profile and ~/.profile are sourced by login +shells, while ~/.bashrc is sourced by most of the shell invocations +including the login shells. In general you can put the environment +variables, such as PATH, which are inheritable to subshells in +~/.bash_profile or ~/.profile and functions and aliases in the +~/.bashrc file in order to make them available in subshells. +The ULHPC/dotfiles bash +configuration +even sources the following files for that specific purpose:

    +
      +
    • ~/.bash_private: custom private functions
    • +
    • ~/.bash_aliases: custom private aliases.
    • +
    +
    +
    Understanding Bash Startup Files order

See the reference documentation. +That's somewhat hard to understand. Some have tried to make it explicit in the form +of a "simple" graph -- credits for the one below go to Ian +Miell +(another one)

    +

    +

    This explains why normally all ULHPC launcher +scripts start with +the following sha-bang (#!) +header

    +

    #!/bin/bash -l
    +#
    +#SBATCH [...]
    +[...]
    +
    +That's indeed the only way (i.e. using /bin/bash -l instead of the +classical /bin/bash) to ensure that /etc/profile is sourced natively, +and thus that all ULHPC environments variables and modules are loaded. +If you don't proceed that way (i.e. following the classical approach), you +MUST then use the following template you may see from other HPC centers: +
    #!/bin/bash
    +#
    +#SBATCH [...]
    +[...]
    +# Load ULHPC Profile
    +if [ -f  /etc/profile ]; then
    +   .  /etc/profile
    +fi
    +

    +
    +

Since all ULHPC systems share the Global HOME filesystem, +the same $HOME is available regardless of the platform. +To make system-specific customizations, use the pre-defined environment +variable ULHPC_CLUSTER:

    +
    +

    Example of cluster specific settings

    case $ULHPC_CLUSTER in
    +    "iris")
    +        : # Settings for iris
    +        export MYVARIABLE="value-for-iris"
    +        ;;
    +    "aion")
    +        : # settings for aion
    +        export MYVARIABLE="value-for-aion"
    +        ;;
    +    *)
    +        : # default value for
    +        export MYVARIABLE="default-value"
    +        ;;
    +esac
    +
    + +

    +
    +

    CentOS

    +

    Operating Systems

    +

    RedHat

    +

    The ULHPC facility runs RedHat-based Linux Distributions, in particular:

    + +

    Thus, you are more than encouraged to become familiar - if not yet - with Linux commands. We can recommend the following sites and resources:

    + +
    Impact of CentOS project shifting focus starting 2021 from CentOS Linux to CentOS Stream

    You may have followed the official announcement on Dec 8, 2020 where Red Hat announced that it will discontinue CentOS 8 by the end of 2021 and instead will focus on CentOS Stream going forward. Fortunately CentOS 7 will continue to be updated until 2024 and is therefore not affected by this change.

    +

    While CentOS traditionally has been a rebuild of RHEL, CentOS Stream will be +more or less a testing ground for changes that will eventually go into RHEL. +Unfortunately this means that CentOS Stream will likely become incompatible +with RHEL (e.g. binaries compiled on CentOS Stream will not necessarily run +on RHEL and vice versa). It is also questionable whether CentOS Stream is a +suitable environment for running production systems.

    +

For all these reasons, the migration to CentOS 8 for Iris (initially planned for Q1 2021) has been cancelled. +Alternative approaches are under investigation, including a homogeneous +setup between Iris and Aion over Redhat 8.

    +
    +

    Discovering, visualizing and reserving UL HPC resources

    +

    See ULHPC Tutorial / Getting Started

    +

    ULHPC User Software Environment

    + + +
    +

The UL HPC facility provides a large variety of scientific applications to its user community, including domain-specific codes and general purpose development tools which enable research and innovation excellence across a wide set of computational fields. -- see software list.

    +
    +

    +

We use the Environment Modules / LMod framework which provides the module utility on compute nodes +to manage nearly all software.
    +There are two main advantages of the module approach:

    +
      +
1. ULHPC can provide many different versions and/or installations of a + single software package on a given machine, including a default + version as well as several older and newer versions.
    2. +
    3. Users can easily switch to different versions or installations + without having to explicitly specify different paths. With modules, + the MANPATH and related environment variables are automatically + managed.
    4. +
    +
    +

    ULHPC modules are in practice automatically generated by Easybuild.

    +
    + + + + +

    +

EasyBuild (EB for short) is a software build and installation framework that allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way. +A large number of scientific software packages are supported (at least 2175 since the 4.3.2 release) - see also What is EasyBuild?.

    +

For several years now, Easybuild has been used to manage the ULHPC User Software Set and to automatically generate the module files available to you on our computational resources in either prod (default) or devel (early development/testing) environment -- see ULHPC Toolchains and Software Set Versioning. +This enables users to easily extend the global Software Set with their own local software +builds, either performed within their global home +directory or (better) in a shared project +directory through Easybuild, which automatically generates module files compliant with the ULHPC module setup.

    + + +

    ULHPC Environment modules + Using Easybuild on ULHPC Clusters

    +

    Self management of work environments in UL HPC with Conda

    + + +

Packages provided through the standard channels of modules and containers are optimized for the ULHPC clusters to ensure their performance and stability. However, many packages for which performance is not critical and which are used by only a few users are not provided through the standard channels. These packages can still be installed locally by the users through an environment management system such as Conda.

    +
    +

    Contact the ULHPC before installing any software with Conda

    +

    Prefer binaries provided through modules or containers. Conda installs generic binaries that may be suboptimal for the configuration of the ULHPC clusters. Furthermore, installing packages locally with Conda consumes quotas in your or your project's account in terms of storage space and number of files.

    +

    Contact the ULHPC High Level Support Team in the service portal [Home > Research > HPC > Software environment > Request expertise] to discuss possible options before installing any software.

    +
    +

Conda is an open source environment and package management system. With Conda you can create independent environments, where you can install applications such as python and R, together with any packages which will be used by these applications. The environments are independent, with the Conda package manager managing the binaries, resolving dependencies, and ensuring that packages used in multiple environments are stored only once. In a typical setting, each user has their own installation of Conda and a set of personal environments.

    + + +

    Management of work environments with Conda

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/environment/modules/index.html b/environment/modules/index.html new file mode 100644 index 00000000..4118a2ae --- /dev/null +++ b/environment/modules/index.html @@ -0,0 +1,3410 @@ + + + + + + + + + + + + + + + + + + + + + + + + Modules - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + + + +
    +
    + + + + + + + + + + +

    ULHPC Software/Modules Environment

    + + +
    +

The UL HPC facility provides a large variety of scientific applications to its user community, including domain-specific codes and general purpose development tools which enable research and innovation excellence across a wide set of computational fields. -- see software list.

    +
    +

    +

We use the Environment Modules / LMod framework which provides the module utility on compute nodes +to manage nearly all software.
    +There are two main advantages of the module approach:

    +
      +
1. ULHPC can provide many different versions and/or installations of a + single software package on a given machine, including a default + version as well as several older and newer versions.
    2. +
    3. Users can easily switch to different versions or installations + without having to explicitly specify different paths. With modules, + the MANPATH and related environment variables are automatically + managed.
    4. +
    +
    +

    ULHPC modules are in practice automatically generated by Easybuild.

    +
    + + + + +

    +

EasyBuild (EB for short) is a software build and installation framework that allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way. +A large number of scientific software packages are supported (at least 2175 since the 4.3.2 release) - see also What is EasyBuild?.

    +

For several years now, Easybuild has been used to manage the ULHPC User Software Set and to automatically generate the module files available to you on our computational resources in either prod (default) or devel (early development/testing) environment -- see ULHPC Toolchains and Software Set Versioning. +This enables users to easily extend the global Software Set with their own local software +builds, either performed within their global home +directory or (better) in a shared project +directory through Easybuild, which automatically generates module files compliant with the ULHPC module setup.

    + + +

    Environment modules and LMod

    +

Environment Modules are a standard and well-established technology across HPC sites, to permit developing and using complex software and libraries built with dependencies, allowing multiple versions of software stacks and combinations thereof to co-exist.

    +

    It brings the module command which is used to manage environment variables such as PATH, LD_LIBRARY_PATH and MANPATH, enabling the easy loading and unloading of application/library profiles and their dependencies.

    +
    Why do you need [Environment] Modules?

When users log in to a Linux system, they get a login shell, and the shell uses environment variables to run commands and applications. The most common are:

    +
      +
    • PATH: colon-separated list of directories in which your system looks for executable files;
    • +
    • MANPATH: colon-separated list of directories in which man searches for the man pages;
    • +
• LD_LIBRARY_PATH: colon-separated list of directories in which your system looks for ELF / *.so libraries needed by applications at execution time.
    • +
    +

    There are also application specific environment variables such as CPATH, LIBRARY_PATH, JAVA_HOME, LM_LICENSE_FILE, MKLROOT etc.
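You can inspect any of these variables directly in your shell, e.g.

# print the individual directories searched for executables, one per line
echo "$PATH" | tr ':' '\n'
# check which directories will be searched for shared libraries
echo "$LD_LIBRARY_PATH" | tr ':' '\n'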

    +

A traditional way to set up these environment variables is by customizing the shell initialization files, i.e. /etc/profile, .bash_profile, and .bashrc. +This proves to be very impractical on multi-user systems with various applications and multiple application versions installed, as on an HPC facility.

    +

To overcome the difficulty of setting and changing the environment variables, the TCL/C Environment Modules were introduced over 2 decades ago. +The Environment Modules package is a tool that simplifies shell initialization and lets users easily modify their environment during the session with modulefiles.

    +
      +
    • Each modulefile contains the information needed to configure the shell for an application. Once the Modules package is initialized, the environment can be modified on a per-module basis using the module command which interprets modulefiles. Typically modulefiles instruct the module command to alter or set shell environment variables such as PATH, MANPATH, etc.
    • +
    • Modulefiles may be shared by many users on a system (as done on the ULHPC clusters) and users may have their own collection to supplement or replace the shared modulefiles.
    • +
    +

Modules can be loaded and unloaded dynamically and atomically, in a clean fashion. All popular shells are supported, including bash, ksh, zsh, sh, csh, tcsh, fish, as well as some scripting languages such as perl, ruby, tcl, python, cmake and R. Modules are useful in managing different versions of applications. +Modules can also be bundled into metamodules that will load an entire suite of different applications -- this is precisely the way we manage the ULHPC Software Set

    +
    +
    Tcl/C Environment Modules (Tmod) vs. Tcl Environment Modules vs. Lmod

    There exists several implementation of the module tool:

    +
      +
• Tcl/C Environment Modules (3.2.10 ≤ version < 4), also called Tmod: the seminal (old) implementation
    • +
• Tcl-only variant of Environment modules (version ≥ 4), previously called Modules-Tcl
    • +
    • (recommended) Lmod, a Lua based Environment Module System
        +
      • Lmod ("L" stands for Lua) provides all of the functionality of TCL/C Environment Modules plus more features:
          +
        • support for hierarchical module file structure
        • +
        • MODULEPATH is dynamically updated when modules are loaded.
        • +
• makes loaded modules inactive and active to provide a sane environment.
        • +
        • supports for hidden modules
        • +
        • support for optional usage tracking (implemented on ULHPC facilities)
        • +
        +
      • +
      +
    • +
• In particular, Lmod enforces the following safety features that are not always guaranteed with the other tools:
        +
      1. The One Name Rule: Users can only have one version active
      2. +
      3. Users can only load one compiler or MPI stack at a time (through the family(...) directive)
      4. +
      +
    • +
    +

    The ULHPC Facility relies on Lmod -- the associated Modulefiles being automatically generated by Easybuild.

    +
    +

    The ULHPC Facility relies on Lmod, a Lua-based Environment module system that easily handles the MODULEPATH Hierarchical problem. In this context, the module command supports the following subcommands:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    CommandDescription
    module availLists all the modules which are available to be loaded
    module spider <pattern>Search for among available modules (Lmod only)
    module load <mod1> [mod2...]Load a module
    module unload <module>Unload a module
    module listList loaded modules
    module purgeUnload all modules (purge)
    module display <module>Display what a module does
    module use <path>Prepend the directory to the MODULEPATH environment variable
    module unuse <path>Remove the directory from the MODULEPATH environment variable
    +
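A typical interactive session on a compute node combining these subcommands might look like:

module avail                 # list what is available in the current software set
module spider foss           # search for 'foss' across all module hierarchies
module load toolchain/foss   # load the foss toolchain (and its dependencies)
module list                  # check what is currently loaded
module purge                 # clean the environment before switching setups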
    What is module?

module is a shell function that modifies the user's shell environment upon loading of a modulefile. +It is defined as follows +

    $ type module
    +module is a function
    +module ()
    +{
    +    eval $($LMOD_CMD bash "$@") && eval $(${LMOD_SETTARG_CMD:-:} -s sh)
    +}
    +
    +In particular, module is NOT a program

    +
    +

    At the heart of environment modules interaction resides the following components:

    +
      +
    • the MODULEPATH environment variable, which defines a colon-separated list of directories to search for modulefiles
    • +
• modulefile (see an example) associated with each available software package.
    • +
    +
    Example of ULHPC toolchain/foss (auto-generated) Modulefile
    $ module show toolchain/foss
    +-------------------------------------------------------------------------------
    +   /opt/apps/resif/iris/2019b/broadwell/modules/all/toolchain/foss/2019b.lua:
    +-------------------------------------------------------------------------------
    +help([[
    +Description
    +===========
    +GNU Compiler Collection (GCC) based compiler toolchain, including
    + OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
    +
    +More information
    +================
    + - Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain
    +]])
    +whatis("Description: GNU Compiler Collection (GCC) based compiler toolchain, including
    + OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.")
    +whatis("Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain")
    +whatis("URL: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain")
    +conflict("toolchain/foss")
    +load("compiler/GCC/8.3.0")
    +load("mpi/OpenMPI/3.1.4-GCC-8.3.0")
    +load("numlib/OpenBLAS/0.3.7-GCC-8.3.0")
    +load("numlib/FFTW/3.3.8-gompi-2019b")
    +load("numlib/ScaLAPACK/2.0.2-gompi-2019b")
    +setenv("EBROOTFOSS","/opt/apps/resif/iris/2019b/broadwell/software/foss/2019b")
    +setenv("EBVERSIONFOSS","2019b")
    +setenv("EBDEVELFOSS","/opt/apps/resif/iris/2019b/broadwell/software/foss/2019b/easybuild/toolchain-foss-2019b-easybuild-devel")
    +
    + +
    +
    +

    (reminder): the module command is ONLY available on the compute nodes, NOT on the access front-ends.
    +In particular, you need to be within a job to load ULHPC or private modules.

    +
    +

    ULHPC $MODULEPATH

    +

By default, the MODULEPATH environment variable holds a single search directory holding the optimized builds prepared for you by the ULHPC Team. +The general format of this directory is as follows:

    +
    /opt/apps/resif/<cluster>/<version>/<arch>/modules/all
    +
    + +

    where:

    +
      +
    • <cluster> depicts the name of the cluster (iris or aion). Stored as $ULHPC_CLUSTER.
    • +
• <version> corresponds to the ULHPC Software set release (aligned with Easybuild toolchain releases), i.e. 2019b, 2020a etc. Stored as $RESIF_VERSION_{PROD,DEVEL,LEGACY} depending on the Production / development / legacy ULHPC software set version
    • +
• <arch> is a lower-case string that categorizes the CPU architecture of the build host, and permits easily identifying the optimized target architecture. It is stored as $RESIF_ARCH.
        +
      • On Intel nodes: broadwell (default), skylake
      • +
      • On AMD nodes: epyc
      • +
      • On GPU nodes: gpu
      • +
      +
    • +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ClusterArch. $RESIF_ARCH$MODULEPATH Environment variable
    Irisbroadwell (default)/opt/apps/resif/iris/<version>/broadwell/modules/all
    Irisskylake/opt/apps/resif/iris/<version>/skylake/modules/all
    Irisgpu/opt/apps/resif/iris/<version>/gpu/modules/all
Aionepyc (default)/opt/apps/resif/aion/<version>/epyc/modules/all
    +
      +
    • On skylake nodes, you may want to use the optimized modules for skylake
    • +
• On GPU nodes, you may want to use the CPU-optimized builds for skylake (in addition to the gpu-enabled software)
    • +
    + + +
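On a compute node you can check which cluster, software set version and architecture are in use, and thus where modules are searched, with:

echo $ULHPC_CLUSTER $RESIF_VERSION_PROD $RESIF_ARCH
echo $MODULEPATH | tr ':' '\n'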
    +

    ACM PEARC'21: RESIF 3.0

    +

If you are interested in knowing more about the way we set up and deploy the User Software Environment on ULHPC systems through the RESIF 3 framework, you can refer to the article below, presented during the ACM PEARC'21 conference on July 22, 2021.

    +
    +

    ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides | Github:
    +Sebastien Varrette, Emmanuel Kieffer, Frederic Pinel, Ezhilmathi Krishnasamy, Sarah Peter, Hyacinthe Cartiaux, and Xavier Besseron. 2021. RESIF 3.0: Toward a Flexible & Automated Management of User Software Environment on HPC facility. In Practice and Experience in Advanced Research Computing (PEARC '21). Association for Computing Machinery (ACM), New York, NY, USA, Article 33, 1–4. https://doi.org/10.1145/3437359.3465600

    +
    +
    + + +

    Module Naming Schemes

    +
    What is a Module Naming Scheme?

    The full software and module install paths for a particular software package +are determined by the active module naming scheme along with the general +software and modules install paths specified by the EasyBuild configuration.

    +

    You can list the supported module naming schemes of Easybuild using: +

    $ eb --avail-module-naming-schemes
    +List of supported module naming schemes:
    +    EasyBuildMNS
    +    CategorizedHMNS
    +    MigrateFromEBToHMNS
    +    HierarchicalMNS
    +    CategorizedModuleNamingScheme
    +
+See Flat vs. Hierarchical module naming scheme +for an illustrated explanation of the difference between two extreme cases: flat or 3-level hierarchical. +On ULHPC systems, we selected an intermediate scheme called CategorizedModuleNamingScheme.

    +
    +
    +

    Module Naming Schemes on ULHPC system

    +

    ULHPC modules are organised through the Categorized Naming Scheme
    +Format: <category>/<name>/<version>-<toolchain><versionsuffix>

    +
    +

This means that the typical module hierarchy has as prefix a category level, taken from one of the supported software categories or module classes: +

    $ eb --show-default-moduleclasses
    +Default available module classes:
    +
    +    base:      Default module class
    +    astro:     Astronomy, Astrophysics and Cosmology
    +    bio:       Bioinformatics, biology and biomedical
    +    cae:       Computer Aided Engineering (incl. CFD)
    +    chem:      Chemistry, Computational Chemistry and Quantum Chemistry
    +    compiler:  Compilers
    +    data:      Data management & processing tools
    +    debugger:  Debuggers
    +    devel:     Development tools
    +    geo:       Earth Sciences
    +    ide:       Integrated Development Environments (e.g. editors)
    +    lang:      Languages and programming aids
    +    lib:       General purpose libraries
    +    math:      High-level mathematical software
    +    mpi:       MPI stacks
    +    numlib:    Numerical Libraries
    +    perf:      Performance tools
    +    quantum:   Quantum Computing
    +    phys:      Physics and physical systems simulations
    +    system:    System utilities (e.g. highly depending on system OS and hardware)
    +    toolchain: EasyBuild toolchains
    +    tools:     General purpose tools
    +    vis:       Visualization, plotting, documentation and typesetting
    +

    +

    It follows that the ULHPC software modules are structured according to the organization depicted below (click to enlarge).

    +

    +

    ULHPC Toolchains and Software Set Versioning

    + + +

We offer a YEARLY release of the ULHPC Software Set based on the Easybuild toolchain releases +-- see Component versions (fixed per release) in the foss and intel toolchains. +However, count on at least 6 months of validation/import after an EB release before the corresponding ULHPC release

    +

    An overview of the currently available component versions is depicted below:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    NameType2019b (legacy)2020a2020b (prod)2021a2021b (devel)
    GCCCorecompiler8.3.09.3.010.2.010.3.011.2.0
    fosstoolchain2019b2020a2020b2021a2021b
    inteltoolchain2019b2020a2020b2021a2021b
    binutils2.322.342.352.362.37
    Python3.7.4 (and 2.7.16)3.8.2 (and 2.7.18)3.8.63.9.23.9.6
    LLVMcompiler9.0.110.0.111.0.011.1.012.0.1
    OpenMPIMPI3.1.44.0.34.0.54.1.14.1.2
    + + +

Once on a node, the current version of the ULHPC Software Set in production is stored in $RESIF_VERSION_PROD. +You can use the variables $MODULEPATH_{LEGACY,PROD,DEVEL} to access or set the MODULEPATH variable with the appropriate value. Yet we have defined utility scripts to facilitate a quick reset of your module environment, i.e., resif-load-swset-{legacy,prod,devel} and resif-reset-swset

    +

    For instance, if you want to use the legacy software set, proceed as follows in your launcher scripts:

    +
    resif-load-swset-legacy   # Eq. of export MODULEPATH=$MODULEPATH_LEGACY
    +# [...]
    +# Restore production settings
    +resif-load-swset-prod     # Eq. of export MODULEPATH=$MODULEPATH_PROD
    +
    + +

    If on the contrary you want to test the (new) development software set, i.e., the devel version, stored in $RESIF_VERSION_DEVEL:

    +
    resif-load-swset-devel  # Eq. of export MODULEPATH=$MODULEPATH_DEVEL
    +# [...]
    +# Restore production settings
    +resif-reset-swset         # As resif-load-swset-prod
    +
    + +
    (iris only) Skylake Optimized builds

    Skylake optimized build can be loaded on regular nodes using +

    resif-load-swset-skylake  # Eq. of export MODULEPATH=$MODULEPATH_PROD_SKYLAKE
    +
+You MUST obviously be on a Skylake node (sbatch -C skylake [...]) to benefit from it. +Note that this action is not required on GPU nodes.

    +
    +
    +

    GPU Optimized builds vs. CPU software set on GPU nodes

    +

    On GPU nodes, be aware that the default MODULEPATH holds two directories:

    +
      +
    1. GPU Optimized builds (i.e. typically against the {foss,intel}cuda toolchains) stored under /opt/apps/resif/<cluster>/<version>/gpu/modules/all
    2. +
3. CPU Optimized builds (ex: skylake on Iris) stored under /opt/apps/resif/<cluster>/<version>/skylake/modules/all
    4. +
    +

You may want to exclude CPU builds to ensure you get the most out of the GPU accelerators. In that case, you may want to run:

    +
    # /!\ ADAPT <version> accordingly
    +module unuse /opt/apps/resif/${ULHPC_CLUSTER}/${RESIF_VERSION_PROD}/skylake/modules/all
    +
    + +
    + + +

    Using Easybuild to Create Custom Modules

    +

    Just like we do, you probably want to use Easybuild to complete the existing +software set with your own modules and software builds.

    +

    See Building Custom (or missing) software documentation +for more details.

    +

    Creating a Custom Module Environment

    +

    You can modify your environment so that certain modules are loaded +whenever you log in. +Use module save [<name>] and module restore [<name>] for that purpose -- see Lmod documentation on User collections
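For instance, one way to do this (the collection name is chosen arbitrarily here) is:

# load your usual modules, then save them as a named collection
module load toolchain/foss
module save my_default
# in a later session (or at the top of a job script), restore the whole collection at once
module restore my_default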

    +

    You can also create and install your own modules for your convenience or +for sharing software among collaborators. +See the modulefile documentation for +details of the required format and available commands. +These custom modulefiles can be made visible to the module command by

    +
    module use /path/to/the/custom/modulefiles
    +
    + +
    +

    Warning

    +
      +
    1. Make sure the UNIX file permissions grant access to all users who want to use the software.
    2. Do not give write permissions to your home directory to anyone else.
    +
    +
    +

    Note

    +

    The module use command adds new directories before other module search paths (defined as $MODULEPATH), so modules defined in a custom directory will have precedence if there are other modules with the same name in the module search paths. If you prefer to have the new directory added at the end of $MODULEPATH, use module use -a instead of module use.
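    For example (the path below is only illustrative):

    module use    /path/to/the/custom/modulefiles     # prepend: custom modules win on name clashes
    module use -a /path/to/the/custom/modulefiles     # append: system modules keep precedence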

    +
    +

    Module FAQ

    +
    +

    Is there an environment variable that captures loaded modules?

    +

    Yes, active modules can be retrieved via $LOADEDMODULES. This environment variable is automatically updated to reflect the currently loaded modules, as also reported by module list. If you want to access the modulefile paths of the loaded modules, they are available via $_LMFILES_.
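    For instance, both variables are colon-separated lists that you can inspect directly (a small illustration):

    echo $LOADEDMODULES | tr ':' '\n'   # one loaded module per line
    echo $_LMFILES_     | tr ':' '\n'   # corresponding modulefile paths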


    Workflow

    + +

    ULHPC Workflow

    + + +

    Your typical journey on the ULHPC facility is illustrated in the figure below.

    +

    + + +
    Typical workflow on UL HPC resources

    Your daily interaction with the ULHPC facility includes the following actions:

    +

    Preliminary setup

    +
    1. Connect to the access/login servers
       • This can be done either by ssh (recommended) or via the ULHPC OOD portal
       • (advanced users) at this point, you probably want to create (or reattach to) a screen or tmux session
    2. Synchronize your code and/or transfer your input data, typically using rsync/svn/git
    3. Reserve a few interactive resources with salloc -p interactive [...] (see the sketch after this list)
       • recall that the module command (used to load the ULHPC User software) is only available on the compute nodes
       • (if needed) build your program, typically using gcc/icc/mpicc/nvcc...
       • Test your workflow / HPC analysis on a small-size problem (srun/python/sh...)
       • Prepare a launcher script <launcher>.{sh|py}
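    For instance, a minimal sketch of such an interactive test session (the ssh alias and script name are only illustrative):

    ssh iris-cluster                     # connect to an access/login server (illustrative alias)
    salloc -p interactive -N 1 -n 4      # reserve a few interactive resources
    srun hostname                        # commands launched with srun run on the reserved compute node(s)
    srun python my_analysis.py --small   # test your workflow on a small-size problem (illustrative script)
    exit                                 # release the allocation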

    Then you can proceed with your Real Experiments:

    +
    1. Reserve passive resources: sbatch [...] <launcher> (a minimal launcher sketch is given after this list)
    2. Grab the results and (if needed) transfer back your output results using rsync/svn/git
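    For instance, a minimal sketch of a launcher script <launcher>.sh (all directives, names and the partition below are illustrative and must be adapted to your job):

    #!/bin/bash -l
    #SBATCH -J MyTestJob              # job name (illustrative)
    #SBATCH -N 1                      # 1 node
    #SBATCH --ntasks-per-node=4       # 4 tasks
    #SBATCH --time=0-01:00:00         # 1 hour walltime
    #SBATCH -p batch                  # partition (adapt to your cluster)

    module purge
    module load toolchain/foss        # illustrative module
    srun ./hello                      # run your program on the reserved resources

    You would then submit it with sbatch <launcher>.sh and monitor it with squeue.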

    GPFS/SpectrumScale ($HOME, project)

    +

    +

    Introduction

    +

    IBM Spectrum Scale, formerly known as the General Parallel File System (GPFS), is a global high-performance clustered file system available on all ULHPC computational systems through a DDN GridScaler/GS7K system.

    +

    It allows sharing home directories and project data between users and systems, and, if needed, with the "outside world". In terms of raw storage capacity, it represents more than 4 PB.

    + +

    Global Home directory $HOME

    +

    Home directories provide a convenient means for a user to have access to files such as dotfiles, source files, input files, configuration files regardless of the platform.

    +

    Refer to your home directory using the environment variable $HOME whenever possible. +The absolute path may change, but the value of $HOME will always be correct.

    + + +
    +

    $HOME quotas and backup policies

    +

    See quotas for detailed information about inode and space quotas, and file system purge policies. Your HOME is backed up weekly, according to the policy detailed in the ULHPC backup policies.

    +
    + + +

    Global Project directory $PROJECTHOME=/work/projects/

    +

    Project directories are intended for sharing data within a group of researchers, under /work/projects/<name>

    +

    Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible.

    + + + + +
    +

    Global Project quotas and backup policies

    +

    See quotas for detailed information about inode and space quotas, and file system purge policies. Your project backup directories are backed up weekly, according to the policy detailed in the ULHPC backup policies.

    +
    + + +
    +

    Access rights to project directory: Quota for clusterusers group in project directories is 0 !!!

    +

    When a project <name> is created, a group of the same name (<name>) is also created, and researchers allowed to collaborate on the project are made members of this group, which grants them access to the project directory.

    +

    Be aware that your default group as a user is clusterusers, which has (on purpose) a quota in project directories set to 0. You thus need to ensure you always write data in your project directory using the <name> group (instead of your default one). This can be achieved by ensuring the setgid bit is set on all folders in the project directories: chmod g+s [...]
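    For instance, a minimal sketch to (re)apply the project group and the setgid bit on an existing sub-directory (the sub-directory name is only illustrative; adapt <name> accordingly):

    # /!\ ADAPT <name> accordingly
    chgrp -R <name> /work/projects/<name>/shared-data                      # illustrative sub-directory
    find /work/projects/<name>/shared-data -type d -exec chmod g+s {} +    # setgid on every folder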

    +

    When using rsync to transfer file toward the project directory /work/projects/<name> as destination, be aware that rsync will not use the correct permissions when copying files into your project directory. As indicated in the Data transfer section, you also need to:

    +
      +
    • give new files the destination-default permissions with --no-p (--no-perms), and
    • use the default group <name> of the destination dir with --no-g (--no-group)
    • (if needed) instruct rsync to preserve whatever executable permissions existed on the source file and aren't masked at the destination using --chmod=ug=rwX
    +

    Your full rsync command becomes (adapt accordingly):

    +
      rsync -avz {--update | --delete} --no-p --no-g [--chmod=ug=rwX] <source> /work/projects/<name>/[...]
    +
    + + +
    +

    For the same reason detailed above, in case you are using a build command or, more generally, any command meant to write data in your project directory /work/projects/<name>, you want to use sg as follows:

    +
    # /!\ ADAPT <name> accordingly
    +sg <name> -c "<command> [...]"
    +
    + +

    This is particularly important if you are building dedicated software with +Easybuild for members of the project - you typically want to do it as follows:

    +
    # /!\ ADAPT <name> accordingly
    sg <name> -c "eb [...] -r --rebuild -D"   # Dry-run - enforce using the '<name>' group
    sg <name> -c "eb [...] -r --rebuild"      # Real build - enforce using the '<name>' group
    +
    + + + +

    Storage System Implementation

    +

    The way the ULHPC GPFS file system is implemented is depicted on the below figure.

    +

    +

    It is composed of:

    +
    • Two NAS protocol servers (see below)
    • One DDN GridScaler 7K system acquired as part of RFP 160019, deployed in 2017 and later extended, composed of:
      • 1x DDN GS7K enclosure (~11GB/s IO throughput)
      • 4x SS8460 disk expansion enclosures
      • 350x HGST disks (7.2K RPM HDD, 6TB, Self Encrypted Disks (SED)) configured over 35 RAID6 (8+2) pools
      • 28x Sandisk SSD 400GB disks
    • Another DDN GridScaler 7K system acquired as part of RFP 190027, deployed in 2020 as part of Aion and later extended, composed of:
      • 1x DDN GS7990-EDR embedded storage
      • 4x SS9012 disk expansion enclosures
      • 360x NL-SAS HDDs (6TB, Self Encrypted Disks (SED)) configured over 36 RAID6 (8+2) pools
      • 10x 3.2TB SED SAS-SSD for metadata

    There is no single point of failure within the storage solution and the setup is fully redundant. The data paths from the storage to the NSD servers are redundant, providing one link from each of the servers to each controller in the storage unit. There are redundant power supplies, redundant fans, a redundant storage controller with mirrored cache, and battery backup to secure the cache data when power is lost completely. The data paths to the enclosures are redundant, so that links can fail and the system will still be fully operational.

    +

    Filesystem Performance

    +

    The performance of the GS7990 storage system via native GPFS and RDMA-based data transport for the HPC filesystem is expected to be in the range of at least 20GB/s for large sequential reads and writes, using a filesystem block size of 16MB and scatter or cluster allocation. Performance measurement by IOR, a synthetic benchmark for testing the performance of distributed filesystems, is planned upon finalization of the installation.

    +
    The IOR benchmark

    IOR is a parallel IO benchmark that can be used to test the performance of parallel storage systems using various interfaces and access patterns. It supports a variety of different APIs to simulate IO load and is nowadays considered as a reference Parallel filesystem I/O benchmark. It recently embedded another well-known benchmark suite called MDTest, a synthetic MPI parallel benchmark for testing the metadata performance of filesystems (such as Lustre or Spectrum Scale GPFS) where each thread is operating its own working set (to create directory/files, read files, delete files or directory tree).

    +
    +

    To complement IOR, the IO-500 benchmarking suite (see also the white paper "Establishing the IO-500 Benchmark") will be run. IO-500 aims at capturing user-experienced performance, with measured performance representative of:

    +
      +
    • applications with well optimised I/O patterns;
    • applications with random-like workloads;
    • workloads involving metadata / small objects.
    +

    NAS/NFS Servers

    +

    Two NAS protocol servers are available, each connected via 2 x IB EDR links to the IB fabric and exporting the filesystem via NFS and SMB over 2 x 10GE links into the Ethernet network.


    Home

    + +

    Global Home directory $HOME

    +

    Home directories provide a convenient means for a user to have access to files such as dotfiles, source files, input files, configuration files regardless of the platform.

    +

    Refer to your home directory using the environment variable $HOME whenever possible. +The absolute path may change, but the value of $HOME will always be correct.


    Overview

    + +

    Your journey on the ULHPC facility is illustrated in the below figure.

    +

    +

    In particular, once connected, you have access to several different File Systems (FS) which are configured for different purposes.

    +
    What is a File System (FS) ?

    A File System (FS) is simply the logical way to store, organize & access data. There are different types of file systems available nowadays:

    +
      +
    • (local) Disk FS you find on laptops and servers: FAT32, NTFS, HFS+, ext4, {x,z,btr}fs...
    • Networked FS, such as NFS, CIFS/SMB, AFP, allowing access to a remote storage system as a NAS (Network Attached Storage)
    • Parallel and Distributed FS, such as SpectrumScale/GPFS or Lustre. These are the typical file systems you meet on HPC or HTC (High Throughput Computing) facilities, as they exhibit several unique capabilities:
      • data is spread across multiple storage nodes for redundancy and performance.
      • the global capacity AND the global performance levels increase with every system added to the storage infrastructure.
    +
    +

    Storage Systems Overview

    +

    +

    Current statistics of the available filesystems are depicted in the side figure. The ULHPC facility relies on 2 types of Distributed/Parallel File Systems to deliver high-performance data storage at a BigData scale:

    +
      +
    • IBM Spectrum Scale, formerly known as the General Parallel File System (GPFS), a global high-performance clustered file system hosting your $HOME and project data.
    • Lustre, an open-source, parallel file system dedicated to large, local, parallel scratch storage.
    +

    In addition, the following file-systems complete the ULHPC storage infrastructure:

    +
      +
    • OneFS, a global low-performance Dell/EMC Isilon solution used to host project data and serve for backup and archival purposes
    • The ULHPC team relies on other filesystems within its internal backup infrastructure, such as xfs, a high-performance disk file system deployed on storage/backup servers.
    +

    Summary

    + + +

    Several File Systems co-exist on the ULHPC facility and are configured for different purposes. Each server and computational resource has access to at least three different file systems with different levels of performance, permanence and available space, summarized below.

    | Directory | Env. | file system | backup |
    | /home/users/<login> | $HOME | GPFS/Spectrumscale | no |
    | /work/projects/<name> | - | GPFS/Spectrumscale | yes (partial, backup subdirectory) |
    | /scratch/users/<login> | $SCRATCH | Lustre | no |
    | /mnt/isilon/projects/<name> | - | OneFS | yes (live sync and snapshots) |

    Dell EMC Isilon (Archives and cold project data)

    + + +

    OneFS, a global low-performance Dell/EMC Isilon solution, is used to host project data and serves for backup and archival purposes. You will find these project directories mounted under /mnt/isilon/projects.

    + + +

    In 2014, the IT Department of the University, the UL HPC and the LCSB joined their forces (and their funding) to acquire a scalable and modular NAS solution able to sustain the need for an internal big data storage, i.e. provide space for centralized data and backups of all devices used by the UL staff and all research-related data, including the data processed on the UL HPC platform.

    +

    At the end of a public call for tender released in 2014, the EMC Isilon system was finally selected, with an effective deployment in 2015. It is physically hosted in the new CDC (Centre de Calcul) server room in the Maison du Savoir. Composed of a large number of disk enclosures featuring the OneFS File System, it currently offers an effective capacity of 3.360 PB.

    +

    A secondary Isilon cluster, acquired in 2020 and deployed in 2021, duplicates this setup in a redundant way.

    +


    Scratch Data Management

    + +

    Understanding Lustre I/O

    +

    When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the file's stripes. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.

    +

    If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency, so that all clients see consistent results.

    +

    Discover MDTs and OSTs

    +

    ULHPC's Lustre file systems look and act like a single logical storage, but a large file on Lustre can be divided into multiple chunks (stripes) and stored across several OSTs. This technique is called file striping. The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing. It is thus important to know the number of OSTs on your running system.

    As mentioned in the Lustre implementation section, the ULHPC Lustre infrastructure is composed of 2 MDS servers (2 MDTs), 2 OSS servers and 16 OSTs. You can list the MDTs and OSTs with the command lfs df:

    +
    $ cds      # OR: cd $SCRATCH
    +$ lfs df -h
    +UUID                       bytes        Used   Available Use% Mounted on
    +lscratch-MDT0000_UUID        3.2T       15.4G        3.1T   1% /mnt/lscratch[MDT:0]
    +lscratch-MDT0001_UUID        3.2T        3.8G        3.2T   1% /mnt/lscratch[MDT:1]
    +lscratch-OST0000_UUID       57.4T       16.7T       40.2T  30% /mnt/lscratch[OST:0]
    +lscratch-OST0001_UUID       57.4T       18.8T       38.0T  34% /mnt/lscratch[OST:1]
    +lscratch-OST0002_UUID       57.4T       17.6T       39.3T  31% /mnt/lscratch[OST:2]
    +lscratch-OST0003_UUID       57.4T       16.6T       40.3T  30% /mnt/lscratch[OST:3]
    +lscratch-OST0004_UUID       57.4T       16.5T       40.3T  30% /mnt/lscratch[OST:4]
    +lscratch-OST0005_UUID       57.4T       16.5T       40.3T  30% /mnt/lscratch[OST:5]
    +lscratch-OST0006_UUID       57.4T       16.3T       40.6T  29% /mnt/lscratch[OST:6]
    +lscratch-OST0007_UUID       57.4T       17.0T       39.9T  30% /mnt/lscratch[OST:7]
    +lscratch-OST0008_UUID       57.4T       16.8T       40.0T  30% /mnt/lscratch[OST:8]
    +lscratch-OST0009_UUID       57.4T       13.2T       43.6T  24% /mnt/lscratch[OST:9]
    +lscratch-OST000a_UUID       57.4T       13.2T       43.7T  24% /mnt/lscratch[OST:10]
    +lscratch-OST000b_UUID       57.4T       13.3T       43.6T  24% /mnt/lscratch[OST:11]
    +lscratch-OST000c_UUID       57.4T       14.0T       42.8T  25% /mnt/lscratch[OST:12]
    +lscratch-OST000d_UUID       57.4T       13.9T       43.0T  25% /mnt/lscratch[OST:13]
    +lscratch-OST000e_UUID       57.4T       14.4T       42.5T  26% /mnt/lscratch[OST:14]
    +lscratch-OST000f_UUID       57.4T       12.9T       43.9T  23% /mnt/lscratch[OST:15]
    +
    +filesystem_summary:       919.0T      247.8T      662.0T  28% /mnt/lscratch
    +
    + +

    File striping

    +

    File striping increases the throughput of operations by taking advantage of several OSSs and OSTs, by allowing one or more clients to read/write different parts of the same file in parallel. On the other hand, striping small files can decrease the performance.

    +

    File striping allows file sizes larger than a single OST; large files MUST be striped over several OSTs in order to avoid filling a single OST and harming the performance for all users. There is a default stripe configuration for ULHPC Lustre filesystems (see below). However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance. You can tune file striping using 3 properties:

    | Property | Effect | Default | Accepted values | Advised values |
    | stripe_size | Size of the file stripes in bytes | 1048576 (1m) | > 0 | > 0 |
    | stripe_count | Number of OST to stripe across | 1 | -1 (use all the OSTs), 1-16 | -1 |
    | stripe_offset | Index of the OST where the first stripe of files will be written | -1 (automatic) | -1, 0-15 | -1 |
    +

    Note: with regard to stripe_offset (the index of the OST where the first stripe is to be placed), the default is -1, which results in random selection; using a non-default value is NOT recommended.

    +
    +

    Note

    +

    Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.

    +
    +
      +
    • Use the lfs getstripe command for getting the stripe parameters.
    • Use lfs setstripe for setting the stripe parameters to get optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
      • Newly created files and directories will inherit these parameters from their parent directory. However, the parameters cannot be changed on an existing file.
    +
    $ lfs getstripe dir|filename
    +$ lfs setstripe -s <stripe_size> -c <stripe_count> -o <stripe_offset> dir|filename
    +    usage: lfs setstripe -d <directory>   (to delete default striping from an existing directory)
    +    usage: lfs setstripe [--stripe-count|-c <stripe_count>]
    +                         [--stripe-index|-i <start_ost_idx>]
    +                         [--stripe-size|-S <stripe_size>]  <directory|filename>
    +
    + +

    Example:

    +
    $ lfs getstripe $SCRATCH
    +/scratch/users/<login>/
    +stripe_count:   1 stripe_size:    1048576 stripe_offset:  -1
    +[...]
    +$ lfs setstripe -c -1 $SCRATCH
    +$ lfs getstripe $SCRATCH
    +/scratch/users/<login>/
    +stripe_count:  -1 stripe_size:   1048576 pattern:       raid0 stripe_offset: -1
    +
    + +

    In this example, we view the current stripe setting of the $SCRATCH directory. The stripe count is changed to all OSTs and verified. +All files written to this directory will be striped over the maximum number of OSTs (16). +Use lfs check osts to see the number and status of active OSTs for each filesystem on the cluster. Learn more by reading the man page:

    +
    $ lfs check osts
    +$ man lfs
    +
    + +

    File striping Examples

    +
      +
    • Set the striping parameters for a directory containing only small files (< 20MB)
    • +
    +
    $ cd $SCRATCH
    +$ mkdir test_small_files
    +$ lfs getstripe test_small_files
    +test_small_files
    +stripe_count:   1 stripe_size:    1048576 stripe_offset:  -1 pool:
    +$ lfs setstripe --stripe-size 1M --stripe-count 1 test_small_files
    +$ lfs getstripe test_small_files
    +test_small_files
    +stripe_count:   1 stripe_size:    1048576 stripe_offset:  -1
    +
    + +
      +
    • Set the striping parameters for a directory containing only large files between 100MB and 1GB
    • +
    +
    $ mkdir test_large_files
    +$ lfs setstripe --stripe-size 2M --stripe-count 2 test_large_files
    +$ lfs getstripe test_large_files
    +test_large_files
    +stripe_count:   2 stripe_size:    2097152 stripe_offset:  -1
    +
    + +
      +
    • Set the striping parameters for a directory containing files larger than 1GB
    • +
    +
    $ mkdir test_larger_files
    +$ lfs setstripe --stripe-size 4M --stripe-count 6 test_larger_files
    +$ lfs getstripe test_larger_files
    +test_larger_files
    +stripe_count:   6 stripe_size:    4194304 stripe_offset:  -1
    +
    + +
    +

    Big Data files management on Lustre

    +

    Using a large stripe size can improve performance when accessing very large files

    +
    +

    Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.

    +

    Note that these are simple examples; the optimal settings differ depending on the application (concurrent threads accessing the same file, size of each write operation, etc.).

    +

    Lustre Best practices

    +
    +

    Parallel I/O on the same file

    +

    Increase the stripe_count for parallel I/O to the same file.

    +
    +

    When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs to which the file will be written. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file.
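    For instance, a hypothetical sketch following this rule of thumb for a large shared output file (directory name is illustrative; the ULHPC scratch has 16 OSTs, so 16 is the maximum stripe count):

    cd $SCRATCH
    mkdir shared_output
    lfs setstripe --stripe-count 16 --stripe-size 4M shared_output   # stripe files created here over 16 OSTs
    lfs getstripe shared_output                                      # verify the settings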

    +

    Another good practice is to make the stripe count an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes. For more details, you can read the following external resources:


    Lustre ($SCRATCH)

    +

    +

    Introduction

    +

    The Lustre file system is an open-source, parallel file system that supports many requirements of leadership class HPC simulation environments.

    +

    It is available as a global high-performance file system on all ULHPC computational systems through a DDN ExaScaler system.

    +

    It is meant to host temporary scratch data within your jobs. +In terms of raw storage capacities, it represents more than 1.6PB.

    + + + +

    Global Scratch directory $SCRATCH

    +

    The scratch area is a Lustre-based file system designed for high performance temporary storage of large files.

    +

    It is thus intended to support large I/O for jobs that are being actively computed on the ULHPC systems. +We recommend that you run your jobs, especially data intensive ones, from the ULHPC scratch file system.

    +

    Refer to your scratch directory using the environment variable $SCRATCH whenever possible (which expands to /scratch/users/$(whoami)). +The scratch file system is shared via the Infiniband network of the ULHPC facility and is available from all nodes while being tuned for high performance.

    + + +
    +

    ULHPC $SCRATCH quotas and backup

    +

    Extended ACLs are provided for sharing data with other users using fine-grained control. See quotas for detailed information about inode and space quotas, and file system policies. In particular, your SCRATCH directory is NOT backed up, according to the policy detailed in the ULHPC backup policies.

    +
    +
    A short history of Lustre

    Lustre was initiated & funded by the U.S. Department of Energy Office of Science & National Nuclear Security Administration laboratories in mid 2000s. Developments continue through the Cluster File Systems (ClusterFS) company founded in 2001. +Sun Microsystems acquired ClusterFS in 2007 with the intent to bring Lustre technologies to Sun's ZFS file system and the Solaris operating system. In 2010, Oracle bought Sun and began to manage and release Lustre, however the company was not known for HPC. +In December 2010, Oracle announced that they would cease Lustre 2.x development and place Lustre 1.8 into maintenance-only support, creating uncertainty around the future development of the file system. Following this announcement, several new organizations sprang up to provide support and development in an open community development model, including Whamcloud, Open Scalable File Systems (OpenSFS, a nonprofit organization promoting the Lustre file system to ensure Lustre remains vendor-neutral, open, and free), Xyratex or DDN. +By the end of 2010, most Lustre developers had left Oracle.

    +

    WhamCloud was bought by Intel in 2011 and Xyratex took over the Lustre trade mark, logo, related assets (support) from Oracle. +In June 2018, the Lustre team and assets were acquired from Intel by DDN. +DDN organized the new acquisition as an independent division, reviving the Whamcloud name for the new division.

    +
    +

    General Architecture

    +

    A Lustre file system has three major functional units:

    +
    • One or more MetaData Server (MDS) nodes (here two) that have one or more MetaData Target (MDT) devices per Lustre filesystem storing namespace metadata, such as filenames, directories, access permissions, and file layout. The MDT data is stored in a local disk filesystem. However, unlike block-based distributed filesystems, such as GPFS/SpectrumScale and PanFS, where the metadata server controls all of the block allocation, the Lustre metadata server is only involved in pathname and permission checks, and is not involved in any file I/O operations, avoiding I/O scalability bottlenecks on the metadata server.
    • One or more Object Storage Server (OSS) nodes that store file data on one or more Object Storage Target (OST) devices.
      • The capacity of a Lustre file system is the sum of the capacities provided by the OSTs.
      • OSSs do most of the work and thus require as much RAM as possible
        • Rule of thumb: ~2 GB base memory + 1 GB / OST
        • Failover configurations: ~2 GB / OST
      • OSSs should have as many CPUs as possible, but this is not as critical as on the MDS
    • Client(s) that access and use the data. Lustre presents all clients with a unified namespace for all of the files and data in the filesystem, using standard POSIX semantics, and allows concurrent and coherent read and write access to the files in the filesystem.
    +
    Lustre general features and numbers

    Lustre brings a modern architecture within an Object based file system with the following features:

    +
      +
    • Adaptable: supports a wide range of networks and storage hardware
    • Scalable: distributed file object handling for 100,000 clients and more
    • Stability: production-quality stability and failover
    • Modular: interfaces for easy adaptation
    • Highly Available: no single point of failure when configured with HA software
    • BIG and expandable: allows for multiple PB in one namespace
    • Open-source and community driven.
    +

    Lustre provides a POSIX-compliant layer supported on most Linux flavours. In terms of raw capabilities, Lustre supports:

    +
      +
    • Max system size: about 64PB
    • Max number of OSTs: 8150
    • Max number of MDTs: multiple per filesystem supported since Lustre 2.4
    • Files per directory: 25 millions (don't run ls -al!)
    • Max stripes: 2000 since Lustre 2.2
    • Stripe size: min 64kB -- max 2TB
    • Max object size: 16TB (ldiskfs), 256PB (ZFS)
    • Max file size: 31.35PB (ldiskfs), 8EB (ZFS)
    +
    +
    When to use Lustre?
      +
    • Lustre is optimized for:
      • Large files
      • Sequential throughput
      • Parallel applications writing to different parts of a file
    • Lustre will not perform well for:
      • Lots of small files
      • High number of metadata requests (improved in newer versions)
      • Waste of space on the OSTs
    +
    + +

    Storage System Implementation

    +

    The way the ULHPC Lustre file system is implemented is depicted on the below figure.

    +

    +

    Acquired as part of RFP 170035, the ULHPC configuration is based upon:

    +
      +
    • a set of 2x EXAScaler Lustre building blocks that each consist of:
      • 1x DDN SS7700 base enclosure and its controller pair with 4x FDR ports
      • 1x DDN SS8460 disk expansion enclosure (84-slot drive enclosures)
    • OSTs: 160x SEAGATE disks (7.2K RPM HDD, 8TB, Self Encrypted Disks (SED))
      • configured over 16 RAID6 (8+2) pools and extra disks in spare pools
    • MDTs: 18x HGST disks (10K RPM HDD, 1.8TB, Self Encrypted Disks (SED))
      • configured over 8 RAID1 pools and extra disks in spare pools
    • Two redundant MDS servers
      • Dell R630, 2x Intel Xeon E5-2667v4 @ 3.20GHz [8c], 128GB RAM
    • Two redundant OSS servers
      • Dell R630XL, 2x Intel Xeon E5-2640v4 @ 2.40GHz [10c], 128GB RAM
    | Criteria | Value |
    | Power (nominal) | 6.8 KW |
    | Power (idle) | 5.5 KW |
    | Weight | 432 kg |
    | Rack Height | 22U |
    +

    LNet is configured to perform OST-based balancing.

    +

    Filesystem Performance

    +

    The performance of the ULHPC Lustre filesystem is expected to be in the range of at least 15GB/s for large sequential read and writes.

    +

    IOR

    +

    Upon release of the system, performance measurement by IOR, a synthetic benchmark for testing the performance of distributed filesystems, was run for an increasing number of clients as well as with 1kiB, 4kiB, 1MiB and 4MiB transfer sizes.

    +

    +

    As can be seen, aggregated writes and reads exceed 15 GB/s (depending on the test) which meets the minimum requirement.

    +

    FIO

    +

    A random IOPS benchmark was performed using FIO with 20 and 40 GB file sizes over 8 jobs, leading to total sizes of 160 GB and 320 GB:

    +
      +
    • 320 GB is > 2x the RAM size of the OSS node (128 GB RAM)
    • 160 GB is > 1x the RAM size of the OSS node (128 GB RAM)
    +

    +

    MDTEST

    +

    Mdtest (based on commit 7c0ec41 of September 11, 2017, i.e. based on v1.9.3) was used to benchmark the metadata capabilities of the delivered system. HT was turned on to be able to run 32 threads.

    +

    +

    Mind the logarithmic Y-Axis. +Tests on 4 clients with up to 20 threads have been included as well to show the scalability of the system.

    +

    Lustre Usage

    + + +

    Understanding Lustre I/O

    +

    When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the file's stripes. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.

    +

    If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency, so that all clients see consistent results.

    +

    Discover MDTs and OSTs

    +

    ULHPC's Lustre file systems look and act like a single logical storage, but a large file on Lustre can be divided into multiple chunks (stripes) and stored across several OSTs. This technique is called file striping. The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing. It is thus important to know the number of OSTs on your running system.

    As mentioned in the Lustre implementation section, the ULHPC Lustre infrastructure is composed of 2 MDS servers (2 MDTs), 2 OSS servers and 16 OSTs. You can list the MDTs and OSTs with the command lfs df:

    +
    $ cds      # OR: cd $SCRATCH
    +$ lfs df -h
    +UUID                       bytes        Used   Available Use% Mounted on
    +lscratch-MDT0000_UUID        3.2T       15.4G        3.1T   1% /mnt/lscratch[MDT:0]
    +lscratch-MDT0001_UUID        3.2T        3.8G        3.2T   1% /mnt/lscratch[MDT:1]
    +lscratch-OST0000_UUID       57.4T       16.7T       40.2T  30% /mnt/lscratch[OST:0]
    +lscratch-OST0001_UUID       57.4T       18.8T       38.0T  34% /mnt/lscratch[OST:1]
    +lscratch-OST0002_UUID       57.4T       17.6T       39.3T  31% /mnt/lscratch[OST:2]
    +lscratch-OST0003_UUID       57.4T       16.6T       40.3T  30% /mnt/lscratch[OST:3]
    +lscratch-OST0004_UUID       57.4T       16.5T       40.3T  30% /mnt/lscratch[OST:4]
    +lscratch-OST0005_UUID       57.4T       16.5T       40.3T  30% /mnt/lscratch[OST:5]
    +lscratch-OST0006_UUID       57.4T       16.3T       40.6T  29% /mnt/lscratch[OST:6]
    +lscratch-OST0007_UUID       57.4T       17.0T       39.9T  30% /mnt/lscratch[OST:7]
    +lscratch-OST0008_UUID       57.4T       16.8T       40.0T  30% /mnt/lscratch[OST:8]
    +lscratch-OST0009_UUID       57.4T       13.2T       43.6T  24% /mnt/lscratch[OST:9]
    +lscratch-OST000a_UUID       57.4T       13.2T       43.7T  24% /mnt/lscratch[OST:10]
    +lscratch-OST000b_UUID       57.4T       13.3T       43.6T  24% /mnt/lscratch[OST:11]
    +lscratch-OST000c_UUID       57.4T       14.0T       42.8T  25% /mnt/lscratch[OST:12]
    +lscratch-OST000d_UUID       57.4T       13.9T       43.0T  25% /mnt/lscratch[OST:13]
    +lscratch-OST000e_UUID       57.4T       14.4T       42.5T  26% /mnt/lscratch[OST:14]
    +lscratch-OST000f_UUID       57.4T       12.9T       43.9T  23% /mnt/lscratch[OST:15]
    +
    +filesystem_summary:       919.0T      247.8T      662.0T  28% /mnt/lscratch
    +
    + +

    File striping

    +

    File striping increases the throughput of operations by taking advantage of several OSSs and OSTs, by allowing one or more clients to read/write different parts of the same file in parallel. On the other hand, striping small files can decrease the performance.

    +

    File striping allows file sizes larger than a single OST; large files MUST be striped over several OSTs in order to avoid filling a single OST and harming the performance for all users. There is a default stripe configuration for ULHPC Lustre filesystems (see below). However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance. You can tune file striping using 3 properties:

    | Property | Effect | Default | Accepted values | Advised values |
    | stripe_size | Size of the file stripes in bytes | 1048576 (1m) | > 0 | > 0 |
    | stripe_count | Number of OST to stripe across | 1 | -1 (use all the OSTs), 1-16 | -1 |
    | stripe_offset | Index of the OST where the first stripe of files will be written | -1 (automatic) | -1, 0-15 | -1 |
    +

    Note: with regard to stripe_offset (the index of the OST where the first stripe is to be placed), the default is -1, which results in random selection; using a non-default value is NOT recommended.

    +
    +

    Note

    +

    Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.

    +
    +
      +
    • Use the lfs getstripe command for getting the stripe parameters.
    • Use lfs setstripe for setting the stripe parameters to get optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
      • Newly created files and directories will inherit these parameters from their parent directory. However, the parameters cannot be changed on an existing file.
    +
    $ lfs getstripe dir|filename
    +$ lfs setstripe -s <stripe_size> -c <stripe_count> -o <stripe_offset> dir|filename
    +    usage: lfs setstripe -d <directory>   (to delete default striping from an existing directory)
    +    usage: lfs setstripe [--stripe-count|-c <stripe_count>]
    +                         [--stripe-index|-i <start_ost_idx>]
    +                         [--stripe-size|-S <stripe_size>]  <directory|filename>
    +
    + +

    Example:

    +
    $ lfs getstripe $SCRATCH
    +/scratch/users/<login>/
    +stripe_count:   1 stripe_size:    1048576 stripe_offset:  -1
    +[...]
    +$ lfs setstripe -c -1 $SCRATCH
    +$ lfs getstripe $SCRATCH
    +/scratch/users/<login>/
    +stripe_count:  -1 stripe_size:   1048576 pattern:       raid0 stripe_offset: -1
    +
    + +

    In this example, we view the current stripe setting of the $SCRATCH directory. The stripe count is changed to all OSTs and verified. +All files written to this directory will be striped over the maximum number of OSTs (16). +Use lfs check osts to see the number and status of active OSTs for each filesystem on the cluster. Learn more by reading the man page:

    +
    $ lfs check osts
    +$ man lfs
    +
    + +

    File striping Examples

    +
      +
    • Set the striping parameters for a directory containing only small files (< 20MB)
    • +
    +
    $ cd $SCRATCH
    +$ mkdir test_small_files
    +$ lfs getstripe test_small_files
    +test_small_files
    +stripe_count:   1 stripe_size:    1048576 stripe_offset:  -1 pool:
    +$ lfs setstripe --stripe-size 1M --stripe-count 1 test_small_files
    +$ lfs getstripe test_small_files
    +test_small_files
    +stripe_count:   1 stripe_size:    1048576 stripe_offset:  -1
    +
    + +
      +
    • Set the striping parameters for a directory containing only large files between 100MB and 1GB
    • +
    +
    $ mkdir test_large_files
    +$ lfs setstripe --stripe-size 2M --stripe-count 2 test_large_files
    +$ lfs getstripe test_large_files
    +test_large_files
    +stripe_count:   2 stripe_size:    2097152 stripe_offset:  -1
    +
    + +
      +
    • Set the striping parameters for a directory containing files larger than 1GB
    • +
    +
    $ mkdir test_larger_files
    +$ lfs setstripe --stripe-size 4M --stripe-count 6 test_larger_files
    +$ lfs getstripe test_larger_files
    +test_larger_files
    +stripe_count:   6 stripe_size:    4194304 stripe_offset:  -1
    +
    + +
    +

    Big Data files management on Lustre

    +

    Using a large stripe size can improve performance when accessing very large files

    +
    +

    Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.

    +

    Note that these are simple examples; the optimal settings differ depending on the application (concurrent threads accessing the same file, size of each write operation, etc.).

    +

    Lustre Best practices

    +
    +

    Parallel I/O on the same file

    +

    Increase the stripe_count for parallel I/O to the same file.

    +
    +

    When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs to which the file will be written. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file.

    +

    Another good practice is to make the stripe count an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes. For more details, you can read the following external resources:


    Overview

    + +

    ULHPC File Systems Overview

    + + +

    Several File Systems co-exist on the ULHPC facility and are configured for different purposes. Each server and computational resource has access to at least three different file systems with different levels of performance, permanence and available space, summarized below.

    | Directory | Env. | file system | backup |
    | /home/users/<login> | $HOME | GPFS/Spectrumscale | no |
    | /work/projects/<name> | - | GPFS/Spectrumscale | yes (partial, backup subdirectory) |
    | /scratch/users/<login> | $SCRATCH | Lustre | no |
    | /mnt/isilon/projects/<name> | - | OneFS | yes (live sync and snapshots) |

    Projecthome

    + +

    Global Project directory $PROJECTHOME=/work/projects/

    +

    Project directories are intended for sharing data within a group of researchers, under /work/projects/<name>

    +

    Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible.


    Quotas

    +

    Overview

    | Directory | Default space quota | Default inode quota |
    | $HOME | 500 GB | 1 M |
    | $SCRATCH | 10 TB | 1 M |
    | /work/projects/... | 1 TB | 1 M |
    | /mnt/isilon/projects/... | 1.14 PB globally | - |
    +

    Quotas

    +
    +

    Warning

    +

    When a quota is reached, writes to that directory will fail.

    +
    +
    +

    Note

    +

    On Isilon everyone shares one global quota and the HPC Platform team sets up project quotas. Unfortunately it is not possible to see the quota status on the cluster.

    +
    +

    Current usage

    +

    We provide the df-ulhpc command on the cluster login nodes, which displays current usage, soft quota, hard quota and grace period. Any directories that have exceeded the quota will be highlighted in red.

    +

    Once you reach the soft quota you can still write data until the grace period expires (7 days) or you reach the hard quota. After you reach the end of the grace period or the hard quota, you have to reduce your usage to below the soft quota to be able to write data again.

    +

    Check current space quota status:

    +
    df-ulhpc
    +
    + +

    Check current inode quota status:

    +
    df-ulhpc -i
    +
    + +

    Check free space on all file systems:

    +
    df -h
    +
    + +

    Check free space on current file system:

    +
    df -h .
    +
    + +

    To detect the exact source of inode usage, you can use the command

    du --max-depth=<depth> --human-readable --inodes <directory>

    where

    +
      +
    • depth: the inode usage for any file from depth and below is summed in the report for the directory at level depth to which the file belongs, and
    • directory: the directory for which the analysis is carried out; leaving it empty performs the analysis in the current working directory.
    +

    For a more graphical approach, use ncdu, with the c option to display the aggregate inode number for the directories in the current working directory.
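    For instance, a small illustration of both approaches (the project path is only illustrative):

    du --max-depth=1 --human-readable --inodes /work/projects/<name> | sort -h   # inode count per first-level sub-directory
    ncdu /work/projects/<name>    # then press 'c' inside ncdu to display item (inode) counts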

    +

    Increases

    +

    If your project needs additional space or inodes for a specific project directory you may request it via ServiceNow (HPC → Storage & projects → Extend quota).

    +

    Quotas on the home directory and scratch cannot be increased.

    +

    Troubleshooting

    +

    The quotas on project directories are based on the group. Be aware that the quota for the default user group clusterusers is 0. If you get a quota error, but df-ulhpc and df-ulhpc -i confirm that the quota is not exceeded, you are most likely trying to write a file with the group clusterusers instead of the project group.

    +

    To avoid this issue, check out the newgrp command or set the s mode bit ("set group ID") on the directory with chmod g+s <directory>. The s bit means that any file or folder created below will inherit the group.
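    For instance, a minimal sketch of both workarounds (adapt <name> and the sub-directory, which are only illustrative):

    newgrp <name>                              # switch your active group for the current shell session
    # ...or, once and for all, make new files inherit the project group:
    chmod g+s /work/projects/<name>/results    # illustrative sub-directory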

    +

    To transfer data with rsync into a project directory, please check the data transfer documentation.


    Scratch

    + +

    Global Scratch directory $SCRATCH

    +

    The scratch area is a Lustre-based file system designed for high performance temporary storage of large files.

    +

    It is thus intended to support large I/O for jobs that are being actively computed on the ULHPC systems. +We recommend that you run your jobs, especially data intensive ones, from the ULHPC scratch file system.

    +

    Refer to your scratch directory using the environment variable $SCRATCH whenever possible (which expands to /scratch/users/$(whoami)). +The scratch file system is shared via the Infiniband network of the ULHPC facility and is available from all nodes while being tuned for high performance.


    Unix File Permissions

Brief Overview

Every file (and directory) has an owner, an associated Unix group, and a set of permission flags that specify separate read, write, and execute permissions for the "user" (owner), "group", and "other". Group permissions apply to all users who belong to the group associated with the file. "Other" is also sometimes known as "world" permissions, and applies to all users who can log in to the system. The command ls -l displays the permissions and associated group for any file. Here is an example of the output of this command:

drwx------ 2 elvis elvis  2048 Jun 12 2012  private
-rw------- 2 elvis elvis  1327 Apr  9 2012  try.f90
-rwx------ 2 elvis elvis 12040 Apr  9 2012  a.out
drwxr-x--- 2 elvis bigsci  2048 Oct 17 2011  share
drwxr-xr-x 3 elvis bigsci  2048 Nov 13 2011  public

    From left to right, the fields above represent:

1. set of ten permission flags
2. link count (irrelevant to this topic)
3. owner
4. associated group
5. size
6. date of last modification
7. name of file
    The permission flags from left to right are:

| Position | Meaning                                                   |
|----------|-----------------------------------------------------------|
| 1        | "d" if a directory, "-" if a normal file                  |
| 2, 3, 4  | read, write, execute permission for user (owner) of file  |
| 5, 6, 7  | read, write, execute permission for group                 |
| 8, 9, 10 | read, write, execute permission for other (world)         |

    and have the following meanings:

| Value | Meaning                                                               |
|-------|-----------------------------------------------------------------------|
| -     | Flag is not set.                                                      |
| r     | File is readable. For directories, the contents may be listed.        |
| w     | File is writable. For directories, files may be created or removed.   |
| x     | File is executable. For directories, the directory may be entered (traversed). |
| s     | Set group ID (sgid). For directories, files created therein will be associated with the same group as the directory, rather than the default group of the user. Subdirectories created therein will not only have the same group, but will also inherit the sgid setting. |

These definitions can be used to interpret the example output of ls -l presented above:

drwx------ 2 elvis elvis  2048 Jun 12 2012  private

This is a directory named "private", owned by user elvis and associated with Unix group elvis. The directory has read, write, and execute permissions for the owner, and no permissions for any other user.

-rw------- 2 elvis elvis  1327 Apr  9 2012  try.f90

This is a normal file named "try.f90", owned by user elvis and associated with group elvis. It is readable and writable by the owner, but is not accessible to any other user.

-rwx------ 2 elvis elvis 12040 Apr  9 2012  a.out

This is a normal file named "a.out", owned by user elvis and associated with group elvis. It is executable, as well as readable and writable, for the owner only.

drwxr-x--- 2 elvis bigsci 2048 Oct 17 2011  share

This is a directory named "share", owned by user elvis and associated with group bigsci. The owner can read and write the directory; all members of the file group bigsci can list the contents of the directory. Presumably, this directory would contain files that also have "group read" permissions.

drwxr-xr-x 3 elvis bigsci 2048 Nov 13 2011  public

This is a directory named "public", owned by user elvis and associated with group bigsci. The owner can read and write the directory; all other users can only read the contents of the directory. A directory such as this would most likely contain files that have "world read" permissions.

Useful File Permission Commands

umask

When a file is created, the permission flags are set according to the file mode creation mask, which can be set using the umask command. The file mode creation mask (sometimes referred to as "the umask") is a three-digit octal value whose nine bits correspond to fields 2-10 of the permission flags. The resulting permissions are calculated as the bitwise AND of the bitwise complement (NOT) of the umask value with the default permissions specified by the shell (typically 666 for files and 777 for directories). Common useful values are:

| umask value | File Permissions | Directory Permissions |
|-------------|------------------|-----------------------|
| 002         | -rw-rw-r--       | drwxrwxr-x            |
| 007         | -rw-rw----       | drwxrwx---            |
| 022         | -rw-r--r--       | drwxr-xr-x            |
| 027         | -rw-r-----       | drwxr-x---            |
| 077         | -rw-------       | drwx------            |

Note that at ULHPC, the default umask is left unchanged (022), yet it can be redefined in your ~/.bash_profile configuration file if needed.
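As a quick illustrative session (file names and timestamps are arbitrary):

$ umask 027          # new files: rw-r-----, new directories: rwxr-x---
$ touch report.txt
$ mkdir results
$ ls -ld report.txt results
-rw-r----- 1 elvis elvis    0 Nov 19 14:49 report.txt
drwxr-x--- 2 elvis elvis 4096 Nov 19 14:49 results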

chmod

The chmod ("change mode") command is used to change the permission flags on existing files. It can be applied recursively using the "-R" option. It can be invoked with either octal values representing the permission flags, or with symbolic representations of the flags. The octal values have the following meaning:

| Octal Digit | Binary Representation (rwx) | Permission                                  |
|-------------|-----------------------------|---------------------------------------------|
| 0           | 000                         | none                                        |
| 1           | 001                         | execute only                                |
| 2           | 010                         | write only                                  |
| 3           | 011                         | write and execute                           |
| 4           | 100                         | read only                                   |
| 5           | 101                         | read and execute                            |
| 6           | 110                         | read and write                              |
| 7           | 111                         | read, write, and execute (full permissions) |

    Here is an example of chmod using octal values:

$ umask
0022
$ touch foo
$ ls -l foo
-rw-r--r--. 1 elvis elvis 0 Nov 19 14:49 foo
$ chmod 755 foo
$ ls -l foo
-rwxr-xr-x. 1 elvis elvis 0 Nov 19 14:49 foo

In the above example, the umask for user elvis results in a file that is read-write for the user, and read for group and other. The chmod command specifies read-write-execute permissions for the user, and read-execute permissions for group and other.

    Here is the format of the chmod command when using symbolic values:

chmod [-R] [classes][operator][modes] file ...

The classes determine to which combination of user/group/other the operation will apply, the operator specifies whether permissions are being added or removed, and the modes specify the permissions to be added or removed. Classes are formed by combining one or more of the following letters:

| Letter | Class | Description                                                             |
|--------|-------|-------------------------------------------------------------------------|
| u      | user  | Owner of the file                                                       |
| g      | group | Users who are members of the file's group                               |
| o      | other | Users who are not the owner of the file or members of the file's group  |
| a      | all   | All of the above (equivalent to ugo)                                    |

    The following operators are supported:

| Operator | Description                                                              |
|----------|--------------------------------------------------------------------------|
| +        | Add the specified modes to the specified classes.                        |
| -        | Remove the specified modes from the specified classes.                   |
| =        | The specified modes are made the exact modes for the specified classes.  |

The modes specify which permissions are to be added to or removed from the specified classes. There are three primary values which correspond to the basic permissions, and two less frequently-used values that are useful in specific circumstances:

| Mode | Name              | Description |
|------|-------------------|-------------|
| r    | read              | Read a file or list a directory's contents. |
| w    | write             | Write to a file or directory. |
| x    | execute           | Execute a file or traverse a directory. |
| X    | "special" execute | This is a slightly more restrictive version of "x". It applies execute permissions to directories in all cases, and to files only if at least one execute permission bit is already set. It is typically used with the "+" operator and the "-R" option, to give group and/or other access to a large directory tree, without setting execute permissions on normal (non-executable) files (e.g., text files). For example, chmod -R go+rx bigdir would set read and execute permissions on every file (including text files) and directory in the bigdir directory, recursively, for group and other. The command chmod -R go+rX bigdir would set read and execute permissions on every directory, and would set group and other read and execute permissions on files that were already executable by the owner. |
| s    | setgid or sgid    | This setting is typically applied to directories. If set, any file created in that directory will be associated with the directory's group, rather than with the default file group of the owner. This is useful in setting up directories where many users share access. This setting is sometimes referred to as the "sticky bit", although that phrase has a historical meaning unrelated to this context. |

Sets of class/operator/mode may be separated by commas. Using the above definitions, the previous (octal notation) example can be done symbolically:

$ umask
0022
$ touch foo
$ ls -l foo
-rw-r--r--. 1 elvis elvis 0 Nov 19 14:49 foo
$ chmod u+x,go+rx foo
$ ls -l foo
-rwxr-xr-x. 1 elvis elvis 0 Nov 19 14:49 foo

    Unix File Groups

Unix file groups provide a means to control access to shared data on disk and tape.

Overview of Unix Groups

Every user on a Unix system is a member of one or more Unix groups, including their primary or default group. Every file (or directory) on the system has an owner and an associated group. When a user creates a file, the file's associated group will be the user's default group. The user (owner) has the ability to change the associated group to any of the groups to which the user belongs. Unix groups can be defined that allow users to share data with other users who belong to the same group.

Unix Groups at ULHPC

Every user's default group is clusterusers. Users usually belong to several other groups, including groups associated with specific research projects.

Groups are used to share files between project members, and can be created on request. See the page about Project Data Management for more information.

Useful Unix Group Commands

| Command         | Description                                     |
|-----------------|-------------------------------------------------|
| groups username | List group membership                           |
| id username     | List group membership with group ids            |
| ls -l           | List group associated with file or directory    |
| chgrp           | Change group associated with file or directory  |
| newgrp          | Create new shell with different default group   |
| sg              | Execute command with different default group    |
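For instance (the group name myproject and the paths below are placeholders):

groups $USER                             # list the groups you belong to
chgrp -R myproject ~/my_shared_data      # re-associate existing files with the project group
sg myproject -c "touch ~/my_shared_data/newfile"   # run a single command with that group
newgrp myproject                         # or open a new shell using that group by default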

    Getting Started on ULHPC Facilities


    Welcome to the High Performance Computing (HPC) Facility of the University of Luxembourg (ULHPC)!


This page will guide you through the basics of using ULHPC's supercomputers, storage systems, and services.

    What is ULHPC ?

HPC is crucial in academic environments to achieve high-quality results in all application areas. All world-class universities require this type of facility to accelerate their research and ensure cutting-edge results in time to face the global competition.

What is High Performance Computing?

If you're new to all of this, this is probably the first question you have in mind. Here is a possible definition:

"High Performance Computing (HPC) most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business."

Indeed, with the advent of the technological revolution and the digital transformation that has made all scientific disciplines computational, High Performance Computing (HPC) is increasingly identified as a strategic asset and enabler to accelerate the research performed in all areas requiring intensive computing and large-scale Big Data analytic capabilities. Tasks which would typically require several years or centuries to be computed on a typical desktop computer may only require a couple of hours, days or weeks over an HPC system.

For more details, you may want to refer to this Inside HPC article.

Since 2007, the University of Luxembourg (UL) has invested tens of millions of euros into its own HPC facilities to respond to the growing needs for increased computing and storage. ULHPC (sometimes referred to as Uni.lu HPC) is the entity providing High Performance Computing and Big Data Storage services and support for UL researchers and its external partners.

The University manages several research computing facilities located on the Belval campus, offering a cutting-edge research infrastructure to Luxembourg public research while serving as edge access to bigger systems from PRACE or EuroHPC, such as the Euro-HPC Luxembourg supercomputer "MeluXina".

Warning

In particular, the ULHPC is NOT the national HPC center of Luxembourg, but simply one of its strategic partners, operating the second largest HPC facility of the country.

The HPC facility is one element of the extensive digital research infrastructure and expertise developed by the University over the last years. It also supports the University’s ambitious digital strategy and in particular the creation of a Facility for Data and HPC Sciences. This facility aims to provide a world-class user-driven digital infrastructure and services for fostering the development of collaborative activities related to frontier research and teaching in the fields of Computational and Data Sciences, including High Performance Computing, Data Analytics, Big Data Applications, Artificial Intelligence and Machine Learning.

Reference ULHPC Article to cite

If you want to get a good overview of the way our facility is set up, managed and evaluated, you can refer to the reference article, which you are in all cases expected to cite when crediting the ULHPC facility as per the AUP (see also the publication page instructions).

ACM Reference Format | ORBilu entry | ULHPC blog post | slides:
Sebastien Varrette, Hyacinthe Cartiaux, Sarah Peter, Emmanuel Kieffer, Teddy Valette, and Abatcha Olloh. 2022. Management of an Academic HPC & Research Computing Facility: The ULHPC Experience 2.0. In 6th High Performance Computing and Cluster Technologies Conference (HPCCT 2022), July 08-10, 2022, Fuzhou, China. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3560442.3560445

    Supercomputing and Storage Resources at a glance

ULHPC is a strategic asset of the university and an important factor for the scientific and therefore also economic competitiveness of the Grand Duchy of Luxembourg. We provide a key research infrastructure featuring state-of-the-art computing and storage resources serving the UL HPC community, primarily composed of UL researchers.

The UL HPC platform has kept growing over time thanks to the continuous efforts of the core HPC / Digital Platform team - contact: hpc-team@uni.lu, recently completed with the EuroHPC Competence Center Task force (A. Vandeventer (Project Manager), L. Koutsantonis).

ULHPC Computing and Storage Capacity (2022)

Installed in the premises of the University’s Centre de Calcul (CDC), the UL HPC facilities provide a total computing capacity of 2.76 PetaFlops and a shared storage capacity of around 10 PetaBytes.

How big is 1 PetaFlops? 1 PetaByte?

• 1 PetaFlops = 10^15 floating-point operations per second (PFlops or PF for short), corresponding to the cumulative performance of more than 3510 MacBook Pro 13" laptops [1], or 7420 iPhone XS [2]
• 1 PetaByte = 10^15 bytes = 8*10^15 bits, corresponding to the cumulative raw capacity of more than 1950 512GB SSDs.


    This places the HPC center of the University of Luxembourg as one of the major actors in HPC and Big Data for the Greater Region Saar-Lor-Lux.

In practice, the UL HPC Facility features 3 types of computing resources:

• "regular" nodes: Dual CPU, no accelerators, 128 to 256 GB of RAM
• "gpu" nodes: Dual CPU, 4 Nvidia accelerators, 768 GB RAM
• "bigmem" nodes: Quad-CPU, no accelerators, 3072 GB RAM

These resources can be reserved and allocated for the execution of jobs scheduled on the platform thanks to a Resource and Job Management System (RJMS) - Slurm in practice. This tool allows for a fine-grain analysis and accounting of the used resources, facilitating the generation of activity reports for a given time period.

Iris

iris, in production since June 2017, is a Dell/Intel supercomputer with a theoretical peak performance of 1082 TFlop/s, featuring 196 computing nodes (totalling 5824 computing cores) and 96 GPU accelerators (NVidia V100).

Iris Detailed system specifications

Aion

aion, in production since October 2020, is a Bull Sequana XH2000/AMD supercomputer offering a peak performance of 1692 TFlop/s, featuring 318 compute nodes (totalling 40704 computing cores).

Aion Detailed system specifications

GPFS/SpectrumScale File System ($HOME, project)

IBM Spectrum Scale, formerly known as the General Parallel File System (GPFS), is a global high-performance clustered file system available on all ULHPC computational systems through a DDN GridScaler/GS7K system.

It allows sharing homedirs and project data between users, systems, and eventually (i.e. if needed) with the "outside world".

GPFS/SpectrumScale Detailed specifications

Lustre File System ($SCRATCH)

The Lustre file system is an open-source, parallel file system that supports many requirements of leadership class HPC simulation environments. It is available as a global high-performance file system on all ULHPC computational systems through a DDN ExaScaler and is meant to host temporary scratch data.

Lustre Detailed specifications

OneFS File System (project, backup, archival)

In 2014, the SIU, the UL HPC and the LCSB joined their forces (and their funding) to acquire a scalable and modular NAS solution able to sustain the need for an internal big data storage, i.e. providing space for centralized data and backups of all devices used by the UL staff and all research-related data, including those processed on the UL HPC platform. A global low-performance Dell/EMC Isilon system is available on all ULHPC computational systems. It is intended for long term storage of data that is not frequently accessed. For more details, see Isilon specifications.

    Fast Infiniband Network

High Performance Computing (HPC) encompasses advanced computation over parallel processing, enabling faster execution of highly compute intensive tasks. The execution time of a given simulation depends upon many factors, such as the number of CPU/GPU cores, their utilisation factor and the interconnect performance, efficiency, and scalability. InfiniBand is the fast interconnect technology implemented within all ULHPC supercomputers, more specifically:

• Iris relies on an EDR Infiniband (IB) Fabric in a Fat-Tree Topology
• Aion relies on an HDR100 Infiniband (IB) Fabric in a Fat-Tree Topology

For more details, see ULHPC IB Network Detailed specifications.

    Acceptable Use Policy (AUP)

There are a number of policies which apply to ULHPC users.

UL HPC Acceptable Use Policy (AUP) [pdf]

Important

All users of UL HPC resources and PIs must abide by the UL HPC Acceptable Use Policy (AUP). You should read and keep a signed copy of this document before using the facility.

Access and/or usage of any ULHPC system assumes tacit acknowledgement of this policy.

    ULHPC Accounts


    In order to use the ULHPC facilities, you need to have a user account with an associated user login name (also called username) placed under an account hierarchy.


    Connecting to ULHPC supercomputers


    MFA is strongly encouraged for all ULHPC users


It will soon become mandatory - detailed instructions will be provided in due time.

    Data Management


    User Environment


    Info

$HOME, Project and $SCRATCH directories are shared across all ULHPC systems, meaning that

• every file/directory pushed or created on the front-end is available on the computing nodes
• every file/directory pushed or created on the computing nodes is available on the front-end

    ULHPC User Environment


    Computing Software Environment


The ULHPC Team supplies a large variety of HPC utilities, scientific applications and programming libraries to its user community. The user software environment is generated using Easybuild (EB) and is made available as environment modules through LMod.
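A typical LMod session looks like the following sketch; the module names shown are examples only and the actual names and versions differ per cluster, so check module avail first:

module avail                 # list the software modules available on the current cluster
module spider GCC            # search for a given software across all module trees
module load toolchain/foss   # load a toolchain (example name, verify it with module avail)
module list                  # show what is currently loaded
module purge                 # unload everything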


    Software building support

If you need help to build / develop software, we encourage you to first try using Easybuild as a recipe probably exists for the software you are considering. You can then open a ticket on the HPC Help Desk Portal and we will evaluate the cost and effort required. You may also ask for the help of other ULHPC users via the HPC User community mailing list (moderated): hpc-users@uni.lu.

    Running Jobs


Typical usage of the ULHPC supercomputers involves the reservation and allocation of computing resources for the execution of jobs (submitted via launcher scripts), scheduled on the platform thanks to a Resource and Job Management System (RJMS) - Slurm in our case.
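As an illustrative sketch only (job name, partition, module and executable are placeholders; refer to the Slurm pages linked right after this example for the options supported on each cluster), a minimal batch launcher could look like:

#!/bin/bash -l
#SBATCH --job-name=my_job        # placeholder job name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=0-01:00:00        # 1 hour walltime
#SBATCH --partition=batch        # example partition name, check the cluster documentation

module purge
module load toolchain/foss       # example module, adapt to your software

srun ./my_application            # placeholder executable

It would then be submitted with sbatch launcher.sh and monitored with squeue -u $USER.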


    Slurm on ULHPC clusters + Convenient Slurm Commands


    Interactive Computing


    ULHPC also supports interactive computing.
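For instance, using standard Slurm commands (a hedged example; see the interactive jobs documentation for the recommended wrappers and partitions):

# Request an interactive shell on a compute node for 30 minutes (1 task)
srun --nodes=1 --ntasks=1 --time=0:30:00 --pty bash -i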


    Getting Help

ULHPC places a very strong emphasis on enabling science and providing user-oriented systems and services.

Documentation

We have always maintained extensive documentation and HPC tutorials available online, which aim at being the most up-to-date and comprehensive possible while covering many (many) topics.

    ULHPC Technical Documentation + ULHPC Tutorials


    The ULHPC Team welcomes your contributions


These pages are hosted from a git repository and contributions are welcome! Fork this repo


    Support


    ULHPC Support Overview + Service Now HPC Support Portal


    Availability and Response Time


    HPC support is provided on a volunteer basis by UL HPC staff and associated UL experts working at normal business hours. We offer no guarantee on response time except with paid support contracts.

[1] The best MacBook Pro 13" in 2020 is equipped with Ice Lake 2 GHz Intel Quad-Core i5 processors with an estimated computing performance of 284.3 Gflops, as measured by the Geekbench 4 multi-core benchmarks platform, with SGEMM.

[2] Apple A12 Bionic, the 64-bit ARM-based system on a chip (SoC) proposed on the iPhone XS, has an estimated performance of 134.7 GFlops, as measured by the Geekbench 4 multi-core benchmarks platform, with SGEMM.

    Support

ULHPC strives to support your [super]computing needs in a user friendly way. Note however that we are not here to do your PhD for you ;)

Service Now HPC Support Portal

FAQ/Troubleshooting

Read the Friendly Manual

We have always maintained extensive documentation and tutorials available online, which aim at being the most up-to-date and comprehensive.

So please, read the documentation first if you have a question or problem -- we probably provide detailed instructions here.

Help Desk

The online help desk Service is the preferred method for contacting ULHPC.

Tips

Before reporting a problem or an issue, kindly remember that:

1. Your issue is probably documented here on the ULHPC Technical documentation
2. An event may be on-going: check the ULHPC Live status page
   • Planned maintenance is announced at least 2 weeks in advance -- see Maintenance and Downtime Policy
   • The proper SSH banner is displayed during planned downtime
3. Check the state of your nodes and jobs

    Service Now HPC Support Portal

You can make code snippets, shell outputs, etc. in your ticket much more readable by inserting a line with:

[code]<pre>

before the snippet, and another line with:

</pre>[/code]

after it. For a full list of formatting options, see this ServiceNow article.


    Be as precise and complete as possible

The ULHPC team handles thousands of support requests per year. In order to ensure an efficient and timely resolution of issues, ensure that:

1. you select the appropriate category (left menu)
2. you include as much of the following as possible when making a request:
   • Who? - Name and user id (login), possibly the project name
   • When? - When did the problem occur?
   • Where? - Which cluster? Which node? Which job?
     • Really include Job IDs
     • Location of relevant files
       • input/output, job launcher scripts, source code, executables etc.
   • What? - What happened? What exactly were you doing or trying to do?
     • include Error messages - kindly report system or software messages literally and exactly
     • output of module list
     • any steps you have tried
     • Steps to reproduce
   • Any part of this technical documentation you checked before opening the ticket

Access to the online help system requires logging in with your Uni.lu username, password, and possibly a one-time password. If you are an existing user unable to log in, you can send us an email.

    Availability and Response Time


    HPC support is provided on a volunteer basis by UL HPC staff and associated UL experts working at normal business hours. We offer no guarantee on response time except with paid support contracts.


    Email support

You can contact us by email at the ULHPC Team address (ONLY if you cannot log in to / access the HPC Support helpdesk portal): hpc-team@uni.lu

You may also ask for the help of other ULHPC users via the HPC User community mailing list (moderated): hpc-users@uni.lu


    On-site HPC trainings and tutorials

We propose periodic on-site events for our users. They are free of charge and can be attended by anyone from the University of Luxembourg faculties and interdisciplinary centers. Additionally, we also accept users from LIST, LISER and LIH. If you are part of another public research center, please contact us.

Forthcoming events

• HPC School for beginners - July 2024, 1st-2nd, 1.040, MNO - Belval Campus
• Python HPC School - March 2024, 27-28th, 1.030 MNO - Belval Campus

    HPC School for beginners

This event aims to equip you with essential skills and knowledge to embark on your High-Performance Computing journey. The event is organized monthly and is composed of two half days (usually 9am-12pm).

Feel free to only attend the second day session if:

• You can connect to the ULHPC
• You are comfortable with the command line interface

Limited spots available per session (usually 30 max).

Upcoming sessions:

• Date: July 2024, 1st-2nd
• Time: 9am to 12pm (both days)
• Location: 1.040, MNO - Belval Campus

    Morning 1 - Accessing the Cluster and Command Line Introduction

Learn how to access the HPC cluster, set up your machine, and navigate the command line interface effectively. Gain confidence in interacting with the cluster environment.

Morning 2 - Understanding HPC Workflow: Job Submission and Monitoring

Explore the inner workings of HPC systems. Discover the process of submitting and managing computational tasks. Learn how to monitor and optimize job performance.

    Python HPC School

In this workshop, we will explore the process of improving Python code for efficient execution. Chances are, you're already familiar with Python and Numpy. However, we will start by mastering profiling and efficient NumPy usage, as these are crucial steps before venturing into parallelization. Once your code is fine-tuned with Numpy, we will explore the utilization of Python's parallel libraries to unlock the potential of using multiple CPU cores. By the end, you will be well equipped to harness Python's potential for high-performance tasks on the HPC infrastructure.

Target Audience Description

The workshop is designed for individuals who are interested in advancing their skills and knowledge in Python-based scientific and data computing. The ideal participants would typically possess basic to intermediate Python and Numpy skills, along with some familiarity with parallel programming. This workshop will give a good starting point to leverage the HPC computing power to speed up your Python programs.

Upcoming sessions

Limited spots available per session (usually 30 max).

• Date: March 2024, 27th and 28th
• Time: 10h to 12h and 14h to 16h (both days)
• Location: MNO 1.030 - Belval campus

    First day – Jupyter notebook on ULHPC / profiling efficient usage of Numpy

Program

• Setting up a Jupyter notebook on an HPC node - 10am to 11am
• Taking time and profiling python code - 11am to 12pm
• Lunch break - 12pm to 2pm
• Numpy basics for replacing python loops for efficient computations - 2pm to 4pm

Requirements

• Having an HPC account to access the cluster.
• Basic knowledge on SLURM (beginners HPC school).
• A basic understanding of Python programming.
• Familiarity with Jupyter Notebook (installed and configured).
• A basic understanding of Numpy and linear algebra.

    Second day – Improving performance with python parallel packages

Program

• Use case understanding and Python implementation - 10am to 10:30am
• Numpy implementation - 10:30am to 11am
• Python’s Multiprocessing - 11am to 12pm
• Lunch break - 12pm to 2pm
• PyMP - 2pm to 2:30pm
• Cython - 2:30pm to 3pm
• Numba and final remarks - 3pm to 4pm

Requirements

• Having an HPC account to access the cluster.
• Basic knowledge on SLURM (beginners HPC school).
• A basic understanding of Python programming.
• Familiarity with Jupyter Notebook (installed and configured).
• A basic understanding of Numpy and linear algebra.
• Familiarity with parallel programming.

    Conda environment management for Python and R

The creation of Conda environments is supported on the University of Luxembourg HPC systems. But when are Conda environments needed, and what tools are available to create them? Attend this tutorial if your projects involve R or Python and you need support with installing packages.

The topics that will be covered include:

• how to install packages using the facilities available in R and Python,
• how to document and exchange environment setups,
• when a Conda environment is required for a project, and
• what tools are available for the creation of Conda environments.

Upcoming sessions

• Mar. 2024 (please await further announcements regarding specific dates)

    Introduction to numerical methods with BLAS

This seminar covers the basic principles of numerical library usage with BLAS as an example. The library mechanisms for organizing software are studied in detail, covering topics such as the differences between static and dynamic libraries. The practical sessions will demonstrate the generation of library files from source code, and how programs can use library functions.

After an overview of software libraries, the BLAS library is presented, including the available operations and the organization of the code. The attendees will have the opportunity to use functions of BLAS in a few practical examples. The effects of caches on numerical library performance are then studied in detail. In the practical sessions the attendees will have the opportunity to try cache aware programming techniques that better exploit the performance of the available hardware.

Overall in this seminar you will learn how to:

• compile libraries from source code,
• compile and link code that uses numerical libraries,
• understand the effects of caches on numerical library performance, and
• exploit caches to leverage better performance.
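As a taste of the hands-on part, linking a C program against a BLAS implementation typically looks as follows; the source file name is arbitrary and the library name depends on the implementation actually installed (reference BLAS, OpenBLAS, ...):

# Compile and link against the reference BLAS shared library (placeholder source file)
gcc -O2 -o dgemm_example dgemm_example.c -lblas
# Or, with OpenBLAS explicitly:
gcc -O2 -o dgemm_example dgemm_example.c -lopenblas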

    Upcoming sessions

• No sessions are planned at the moment. Future sessions will be announced here; please wait for announcements or contact the HPC team via email to express your interest.

    By ULHPC

Uni.lu HPC Technical Documentation

hpc-docs.uni.lu is a resource with the technical details for users to make effective use of the Uni.lu High Performance Computing (ULHPC) Facility's resources.

    ULHPC Supercomputers + Getting Started


    New: monthly HPC trainings for beginners, see our dedicated page.


    ULHPC Web Portals


    ULHPC Platform status

    About this site

The ULHPC Technical Documentation is based on MkDocs and the mkdocs-material theme, and inspired by the (excellent) NERSC documentation site. These pages are hosted from a git repository and contributions are welcome!


    Ethernet Network

Having a single high-bandwidth and low-latency network such as the local fast IB interconnect to support efficient HPC and Big Data workloads would not provide the necessary flexibility brought by the Ethernet protocol. In particular, applications that are not able to employ the native protocol foreseen for that network, and are thus forced to use an IP emulation layer, will benefit from the flexibility of Ethernet-based networks.

In such cases, an additional, Ethernet-based network offers the robustness and resiliency needed for management tasks inside the system.

Outside the Fast IB interconnect network used inside the clusters, we maintain an Ethernet network organized as a 2-layer topology:

1. one upper level (Gateway Layer) with routing and switching features, network isolation and filtering (ACL) rules, meant to interconnect only switches.
   • This layer is handled by a redundant set of site routers (ULHPC gateway routers).
   • It allows interfacing with the University network for both internal (LAN) and external (WAN) communications.
2. one bottom level (Switching Layer) composed of the [stacked] core switches as well as the TOR (Top-of-Rack) switches, meant to interface the HPC servers and compute nodes.

    An overview of this topology is provided in the below figure.

(Figure: iris-aion Ethernet network overview)

    ACM PEARC'22 article

If you are interested in more details on the implemented Ethernet network, you can refer to the following article published at the ACM PEARC'22 conference (Practice and Experience in Advanced Research Computing) in Boston, USA on July 13, 2022.

ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides
Sebastien Varrette, Hyacinthe Cartiaux, Teddy Valette, and Abatcha Olloh. 2022. Aggregating and Consolidating two High Performant Network Topologies: The ULHPC Experience. In Practice and Experience in Advanced Research Computing (PEARC '22). Association for Computing Machinery, New York, NY, USA, Article 61, 1–6. https://doi.org/10.1145/3491418.3535159

    Fast Local Interconnect Network

High Performance Computing (HPC) encompasses advanced computation over parallel processing, enabling faster execution of highly compute intensive tasks. The execution time of a given simulation depends upon many factors, such as the number of CPU/GPU cores, their utilisation factor and the interconnect performance, efficiency, and scalability. HPC interconnect technologies can nowadays be divided into three categories: Ethernet, InfiniBand, and vendor specific interconnects. While Ethernet is established as the dominant interconnect standard for mainstream commercial computing requirements, the underlying protocol has inherent limitations preventing the low-latency deployments expected in a real HPC environment. When in need of the high bandwidth and low latency required in efficient high performance computing systems, better options have emerged and are considered.

Within the ULHPC facility, the InfiniBand solution was preferred as the predominant interconnect technology in the HPC market, tested against the largest set of HPC workloads. In practice:

• Iris relies on an EDR Infiniband (IB) Fabric in a Fat-Tree Topology
• Aion relies on an HDR100 Infiniband (IB) Fabric in a Fat-Tree Topology

    ACM PEARC'22 article

If you are interested in understanding the architecture and the solutions designed upon the Aion acquisition to expand and consolidate the previously existing IB networks beyond their seminal capacity limits (while keeping at best their bisection bandwidth), you can refer to the following article published at the ACM PEARC'22 conference (Practice and Experience in Advanced Research Computing) in Boston, USA on July 13, 2022.

ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides
Sebastien Varrette, Hyacinthe Cartiaux, Teddy Valette, and Abatcha Olloh. 2022. Aggregating and Consolidating two High Performant Network Topologies: The ULHPC Experience. In Practice and Experience in Advanced Research Computing (PEARC '22). Association for Computing Machinery, New York, NY, USA, Article 61, 1–6. https://doi.org/10.1145/3491418.3535159

    ULHPC IB Topology

One of the most significant differentiators between HPC systems and lesser performing systems is, apart from the interconnect technology deployed, the supporting topology. There are several topologies commonly used in large-scale HPC deployments (Fat-Tree, 3D-Torus, Dragonfly+, etc.). Fat-Tree remains the most widely used topology in HPC clusters due to its versatility, high bisection bandwidth and well understood routing. For this reason, each production cluster of the ULHPC facility relies on a Fat-Tree topology.

To minimize the number of switches per node while keeping a good bisection bandwidth and allowing the Iris and Aion IB networks to be interconnected, the following configuration has been implemented:

(Figure: iris-aion IB network overview)

    For more details: + Iris IB Interconnect + Aion IB Interconnect

The tight integration of I/O and compute in the ULHPC supercomputer architecture gives very robust, time-critical production systems. The selected routing algorithms also provide a dedicated and fast path to the I/O targets, avoiding congestion on the high-speed network and mitigating the risk of runtime "jitter" for time critical jobs.

    IB Fabric Diagnostic Utilities

An InfiniBand fabric is composed of switches and channel adapter (HCA/Connect-X cards) devices. To identify devices in a fabric (or even in one switch system), each device is given a GUID (a MAC address equivalent). Since a GUID is a non-user-friendly string of characters, we alias it to a meaningful, user-given name. There are a few IB diagnostic tools (typically installed by the infiniband-diags package) using these names. The ULHPC team is using them to diagnose Infiniband Fabric Information [2] -- see also the InfiniBand Guide by Atos/Bull (PDF).

| Tool          | Description                                                                                                      |
|---------------|------------------------------------------------------------------------------------------------------------------|
| ibnodes       | Show InfiniBand nodes in topology                                                                                 |
| ibhosts       | Show InfiniBand host nodes in topology                                                                            |
| ibswitches    | Show InfiniBand switch nodes in topology                                                                          |
| ibnetdiscover | Discover InfiniBand topology                                                                                      |
| ibdiag        | Scans the fabric using directed route packets and extracts all the available information (connectivity, devices)  |
| perfquery     | Find errors on a particular (or a number of) HCA and switch ports                                                 |
| sminfo        | Get InfiniBand Subnet Manager info                                                                                |
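For illustration, on a node where the infiniband-diags package is installed (as noted in the footnote below, most of these tools require privileged rights and are therefore not available to end users):

ibhosts        # list the host channel adapters (HCAs) seen in the fabric
ibswitches     # list the switches seen in the fabric
ibnetdiscover  # dump the full fabric topology
perfquery      # query the port counters (errors, traffic) of the local port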

    Mellanox Equipment FW Update

An InfiniBand fabric is composed of switches and channel adapter (HCA/Connect-X cards) devices. Both should be kept up-to-date to mitigate potential security issues.

Mellanox ConnectX HCA cards

The Mellanox HCA firmware updater tool mlxup can be downloaded from mellanox.com.
A typical workflow applied within the ULHPC team to update the firmware of the ConnectX cards:

1. Query a specific device or all devices (if no device is supplied):

   mlxup --query

2. Go to https://support.mellanox.com/s/downloads-center then click on Adapter > ConnectX-<N> > All downloads (select any OS, it will redirect you to the same page)

3. Unzip the downloaded file: unzip [...]

4. Burn the device with the latest firmware:

   mlxup -d <PCI-device-name> -i <unzipped-image-file>.bin

5. Reboot
    Mellanox IB Switches

• Reference documentation
  • You need to download from the Mellanox Download Center
    • BEWARE of the processor architecture (X86 vs. PPC) when selecting the images
  • Select the switch model and download the proposed images -- pay attention to the download path
[1] Originated in 1999 to specifically address workload requirements that were not adequately addressed by Ethernet, and designed for scalability, using a switched fabric network topology together with Remote Direct Memory Access (RDMA) to reduce CPU overhead. Although InfiniBand is backed by a standards organisation (the InfiniBand Trade Association) with formal and open multi-vendor processes, the InfiniBand market is currently dominated by a single significant vendor, Mellanox (recently acquired by NVidia), which also dominates the non-Ethernet market segment across HPC deployments.

[2] Most require privileged (root) rights and thus are not available for ULHPC end users.
i,o,s,r=this,a=this._element.querySelector(".active.carousel-item"),l=this._getItemIndex(a),h=n||a&&this._getItemByDirection(t,a),u=this._getItemIndex(h),d=Boolean(this._interval);if("next"===t?(i="carousel-item-left",o="carousel-item-next",s="left"):(i="carousel-item-right",o="carousel-item-prev",s="right"),h&&e(h).hasClass("active"))this._isSliding=!1;else if(!this._triggerSlideEvent(h,s).isDefaultPrevented()&&a&&h){this._isSliding=!0,d&&this.pause(),this._setActiveIndicatorElement(h);var f=e.Event("slid.bs.carousel",{relatedTarget:h,direction:s,from:l,to:u});if(e(this._element).hasClass("slide")){e(h).addClass(o),c.reflow(h),e(a).addClass(i),e(h).addClass(i);var g=parseInt(h.getAttribute("data-interval"),10);g?(this._config.defaultInterval=this._config.defaultInterval||this._config.interval,this._config.interval=g):this._config.interval=this._config.defaultInterval||this._config.interval;var m=c.getTransitionDurationFromElement(a);e(a).one(c.TRANSITION_END,(function(){e(h).removeClass(i+" "+o).addClass("active"),e(a).removeClass("active "+o+" "+i),r._isSliding=!1,setTimeout((function(){return e(r._element).trigger(f)}),0)})).emulateTransitionEnd(m)}else e(a).removeClass("active"),e(h).addClass("active"),this._isSliding=!1,e(this._element).trigger(f);d&&this.cycle()}},t._jQueryInterface=function(n){return this.each((function(){var i=e(this).data("bs.carousel"),o=a(a({},v),e(this).data());"object"==typeof n&&(o=a(a({},o),n));var s="string"==typeof n?n:o.slide;if(i||(i=new t(this,o),e(this).data("bs.carousel",i)),"number"==typeof n)i.to(n);else if("string"==typeof s){if("undefined"==typeof i[s])throw new TypeError('No method named "'+s+'"');i[s]()}else o.interval&&o.ride&&(i.pause(),i.cycle())}))},t._dataApiClickHandler=function(n){var i=c.getSelectorFromElement(this);if(i){var o=e(i)[0];if(o&&e(o).hasClass("carousel")){var s=a(a({},e(o).data()),e(this).data()),r=this.getAttribute("data-slide-to");r&&(s.interval=!1),t._jQueryInterface.call(e(o),s),r&&e(o).data("bs.carousel").to(r),n.preventDefault()}}},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"Default",get:function(){return v}}]),t}();e(document).on("click.bs.carousel.data-api","[data-slide], [data-slide-to]",E._dataApiClickHandler),e(window).on("load.bs.carousel.data-api",(function(){for(var t=[].slice.call(document.querySelectorAll('[data-ride="carousel"]')),n=0,i=t.length;n0&&(this._selector=r,this._triggerArray.push(s))}this._parent=this._config.parent?this._getParent():null,this._config.parent||this._addAriaAndCollapsedClass(this._element,this._triggerArray),this._config.toggle&&this.toggle()}var n=t.prototype;return n.toggle=function(){e(this._element).hasClass("show")?this.hide():this.show()},n.show=function(){var n,i,o=this;if(!this._isTransitioning&&!e(this._element).hasClass("show")&&(this._parent&&0===(n=[].slice.call(this._parent.querySelectorAll(".show, .collapsing")).filter((function(t){return"string"==typeof o._config.parent?t.getAttribute("data-parent")===o._config.parent:t.classList.contains("collapse")}))).length&&(n=null),!(n&&(i=e(n).not(this._selector).data("bs.collapse"))&&i._isTransitioning))){var s=e.Event("show.bs.collapse");if(e(this._element).trigger(s),!s.isDefaultPrevented()){n&&(t._jQueryInterface.call(e(n).not(this._selector),"hide"),i||e(n).data("bs.collapse",null));var 
r=this._getDimension();e(this._element).removeClass("collapse").addClass("collapsing"),this._element.style[r]=0,this._triggerArray.length&&e(this._triggerArray).removeClass("collapsed").attr("aria-expanded",!0),this.setTransitioning(!0);var a="scroll"+(r[0].toUpperCase()+r.slice(1)),l=c.getTransitionDurationFromElement(this._element);e(this._element).one(c.TRANSITION_END,(function(){e(o._element).removeClass("collapsing").addClass("collapse show"),o._element.style[r]="",o.setTransitioning(!1),e(o._element).trigger("shown.bs.collapse")})).emulateTransitionEnd(l),this._element.style[r]=this._element[a]+"px"}}},n.hide=function(){var t=this;if(!this._isTransitioning&&e(this._element).hasClass("show")){var n=e.Event("hide.bs.collapse");if(e(this._element).trigger(n),!n.isDefaultPrevented()){var i=this._getDimension();this._element.style[i]=this._element.getBoundingClientRect()[i]+"px",c.reflow(this._element),e(this._element).addClass("collapsing").removeClass("collapse show");var o=this._triggerArray.length;if(o>0)for(var s=0;s0},i._getOffset=function(){var t=this,e={};return"function"==typeof this._config.offset?e.fn=function(e){return e.offsets=a(a({},e.offsets),t._config.offset(e.offsets,t._element)||{}),e}:e.offset=this._config.offset,e},i._getPopperConfig=function(){var t={placement:this._getPlacement(),modifiers:{offset:this._getOffset(),flip:{enabled:this._config.flip},preventOverflow:{boundariesElement:this._config.boundary}}};return"static"===this._config.display&&(t.modifiers.applyStyle={enabled:!1}),a(a({},t),this._config.popperConfig)},t._jQueryInterface=function(n){return this.each((function(){var i=e(this).data("bs.dropdown");if(i||(i=new t(this,"object"==typeof n?n:null),e(this).data("bs.dropdown",i)),"string"==typeof n){if("undefined"==typeof i[n])throw new TypeError('No method named "'+n+'"');i[n]()}}))},t._clearMenus=function(n){if(!n||3!==n.which&&("keyup"!==n.type||9===n.which))for(var i=[].slice.call(document.querySelectorAll('[data-toggle="dropdown"]')),o=0,s=i.length;o0&&r--,40===n.which&&rdocument.documentElement.clientHeight;!this._isBodyOverflowing&&t&&(this._element.style.paddingLeft=this._scrollbarWidth+"px"),this._isBodyOverflowing&&!t&&(this._element.style.paddingRight=this._scrollbarWidth+"px")},n._resetAdjustments=function(){this._element.style.paddingLeft="",this._element.style.paddingRight=""},n._checkScrollbar=function(){var t=document.body.getBoundingClientRect();this._isBodyOverflowing=Math.round(t.left+t.right)
    ',trigger:"hover focus",title:"",delay:0,html:!1,selector:!1,placement:"top",offset:0,container:!1,fallbackPlacement:"flip",boundary:"scrollParent",sanitize:!0,sanitizeFn:null,whiteList:F,popperConfig:null},Y={HIDE:"hide.bs.tooltip",HIDDEN:"hidden.bs.tooltip",SHOW:"show.bs.tooltip",SHOWN:"shown.bs.tooltip",INSERTED:"inserted.bs.tooltip",CLICK:"click.bs.tooltip",FOCUSIN:"focusin.bs.tooltip",FOCUSOUT:"focusout.bs.tooltip",MOUSEENTER:"mouseenter.bs.tooltip",MOUSELEAVE:"mouseleave.bs.tooltip"},$=function(){function t(t,e){if("undefined"==typeof n)throw new TypeError("Bootstrap's tooltips require Popper.js (https://popper.js.org/)");this._isEnabled=!0,this._timeout=0,this._hoverState="",this._activeTrigger={},this._popper=null,this.element=t,this.config=this._getConfig(e),this.tip=null,this._setListeners()}var i=t.prototype;return i.enable=function(){this._isEnabled=!0},i.disable=function(){this._isEnabled=!1},i.toggleEnabled=function(){this._isEnabled=!this._isEnabled},i.toggle=function(t){if(this._isEnabled)if(t){var n=this.constructor.DATA_KEY,i=e(t.currentTarget).data(n);i||(i=new this.constructor(t.currentTarget,this._getDelegateConfig()),e(t.currentTarget).data(n,i)),i._activeTrigger.click=!i._activeTrigger.click,i._isWithActiveTrigger()?i._enter(null,i):i._leave(null,i)}else{if(e(this.getTipElement()).hasClass("show"))return void this._leave(null,this);this._enter(null,this)}},i.dispose=function(){clearTimeout(this._timeout),e.removeData(this.element,this.constructor.DATA_KEY),e(this.element).off(this.constructor.EVENT_KEY),e(this.element).closest(".modal").off("hide.bs.modal",this._hideModalHandler),this.tip&&e(this.tip).remove(),this._isEnabled=null,this._timeout=null,this._hoverState=null,this._activeTrigger=null,this._popper&&this._popper.destroy(),this._popper=null,this.element=null,this.config=null,this.tip=null},i.show=function(){var t=this;if("none"===e(this.element).css("display"))throw new Error("Please use show on visible elements");var i=e.Event(this.constructor.Event.SHOW);if(this.isWithContent()&&this._isEnabled){e(this.element).trigger(i);var o=c.findShadowRoot(this.element),s=e.contains(null!==o?o:this.element.ownerDocument.documentElement,this.element);if(i.isDefaultPrevented()||!s)return;var r=this.getTipElement(),a=c.getUID(this.constructor.NAME);r.setAttribute("id",a),this.element.setAttribute("aria-describedby",a),this.setContent(),this.config.animation&&e(r).addClass("fade");var l="function"==typeof this.config.placement?this.config.placement.call(this,r,this.element):this.config.placement,h=this._getAttachment(l);this.addAttachmentClass(h);var u=this._getContainer();e(r).data(this.constructor.DATA_KEY,this),e.contains(this.element.ownerDocument.documentElement,this.tip)||e(r).appendTo(u),e(this.element).trigger(this.constructor.Event.INSERTED),this._popper=new n(this.element,r,this._getPopperConfig(h)),e(r).addClass("show"),"ontouchstart"in document.documentElement&&e(document.body).children().on("mouseover",null,e.noop);var d=function(){t.config.animation&&t._fixTransition();var n=t._hoverState;t._hoverState=null,e(t.element).trigger(t.constructor.Event.SHOWN),"out"===n&&t._leave(null,t)};if(e(this.tip).hasClass("fade")){var f=c.getTransitionDurationFromElement(this.tip);e(this.tip).one(c.TRANSITION_END,d).emulateTransitionEnd(f)}else d()}},i.hide=function(t){var 
n=this,i=this.getTipElement(),o=e.Event(this.constructor.Event.HIDE),s=function(){"show"!==n._hoverState&&i.parentNode&&i.parentNode.removeChild(i),n._cleanTipClass(),n.element.removeAttribute("aria-describedby"),e(n.element).trigger(n.constructor.Event.HIDDEN),null!==n._popper&&n._popper.destroy(),t&&t()};if(e(this.element).trigger(o),!o.isDefaultPrevented()){if(e(i).removeClass("show"),"ontouchstart"in document.documentElement&&e(document.body).children().off("mouseover",null,e.noop),this._activeTrigger.click=!1,this._activeTrigger.focus=!1,this._activeTrigger.hover=!1,e(this.tip).hasClass("fade")){var r=c.getTransitionDurationFromElement(i);e(i).one(c.TRANSITION_END,s).emulateTransitionEnd(r)}else s();this._hoverState=""}},i.update=function(){null!==this._popper&&this._popper.scheduleUpdate()},i.isWithContent=function(){return Boolean(this.getTitle())},i.addAttachmentClass=function(t){e(this.getTipElement()).addClass("bs-tooltip-"+t)},i.getTipElement=function(){return this.tip=this.tip||e(this.config.template)[0],this.tip},i.setContent=function(){var t=this.getTipElement();this.setElementContent(e(t.querySelectorAll(".tooltip-inner")),this.getTitle()),e(t).removeClass("fade show")},i.setElementContent=function(t,n){"object"!=typeof n||!n.nodeType&&!n.jquery?this.config.html?(this.config.sanitize&&(n=H(n,this.config.whiteList,this.config.sanitizeFn)),t.html(n)):t.text(n):this.config.html?e(n).parent().is(t)||t.empty().append(n):t.text(e(n).text())},i.getTitle=function(){var t=this.element.getAttribute("data-original-title");return t||(t="function"==typeof this.config.title?this.config.title.call(this.element):this.config.title),t},i._getPopperConfig=function(t){var e=this;return a(a({},{placement:t,modifiers:{offset:this._getOffset(),flip:{behavior:this.config.fallbackPlacement},arrow:{element:".arrow"},preventOverflow:{boundariesElement:this.config.boundary}},onCreate:function(t){t.originalPlacement!==t.placement&&e._handlePopperPlacementChange(t)},onUpdate:function(t){return e._handlePopperPlacementChange(t)}}),this.config.popperConfig)},i._getOffset=function(){var t=this,e={};return"function"==typeof this.config.offset?e.fn=function(e){return e.offsets=a(a({},e.offsets),t.config.offset(e.offsets,t.element)||{}),e}:e.offset=this.config.offset,e},i._getContainer=function(){return!1===this.config.container?document.body:c.isElement(this.config.container)?e(this.config.container):e(document).find(this.config.container)},i._getAttachment=function(t){return K[t.toUpperCase()]},i._setListeners=function(){var t=this;this.config.trigger.split(" ").forEach((function(n){if("click"===n)e(t.element).on(t.constructor.Event.CLICK,t.config.selector,(function(e){return t.toggle(e)}));else if("manual"!==n){var i="hover"===n?t.constructor.Event.MOUSEENTER:t.constructor.Event.FOCUSIN,o="hover"===n?t.constructor.Event.MOUSELEAVE:t.constructor.Event.FOCUSOUT;e(t.element).on(i,t.config.selector,(function(e){return t._enter(e)})).on(o,t.config.selector,(function(e){return t._leave(e)}))}})),this._hideModalHandler=function(){t.element&&t.hide()},e(this.element).closest(".modal").on("hide.bs.modal",this._hideModalHandler),this.config.selector?this.config=a(a({},this.config),{},{trigger:"manual",selector:""}):this._fixTitle()},i._fixTitle=function(){var t=typeof 
this.element.getAttribute("data-original-title");(this.element.getAttribute("title")||"string"!==t)&&(this.element.setAttribute("data-original-title",this.element.getAttribute("title")||""),this.element.setAttribute("title",""))},i._enter=function(t,n){var i=this.constructor.DATA_KEY;(n=n||e(t.currentTarget).data(i))||(n=new this.constructor(t.currentTarget,this._getDelegateConfig()),e(t.currentTarget).data(i,n)),t&&(n._activeTrigger["focusin"===t.type?"focus":"hover"]=!0),e(n.getTipElement()).hasClass("show")||"show"===n._hoverState?n._hoverState="show":(clearTimeout(n._timeout),n._hoverState="show",n.config.delay&&n.config.delay.show?n._timeout=setTimeout((function(){"show"===n._hoverState&&n.show()}),n.config.delay.show):n.show())},i._leave=function(t,n){var i=this.constructor.DATA_KEY;(n=n||e(t.currentTarget).data(i))||(n=new this.constructor(t.currentTarget,this._getDelegateConfig()),e(t.currentTarget).data(i,n)),t&&(n._activeTrigger["focusout"===t.type?"focus":"hover"]=!1),n._isWithActiveTrigger()||(clearTimeout(n._timeout),n._hoverState="out",n.config.delay&&n.config.delay.hide?n._timeout=setTimeout((function(){"out"===n._hoverState&&n.hide()}),n.config.delay.hide):n.hide())},i._isWithActiveTrigger=function(){for(var t in this._activeTrigger)if(this._activeTrigger[t])return!0;return!1},i._getConfig=function(t){var n=e(this.element).data();return Object.keys(n).forEach((function(t){-1!==V.indexOf(t)&&delete n[t]})),"number"==typeof(t=a(a(a({},this.constructor.Default),n),"object"==typeof t&&t?t:{})).delay&&(t.delay={show:t.delay,hide:t.delay}),"number"==typeof t.title&&(t.title=t.title.toString()),"number"==typeof t.content&&(t.content=t.content.toString()),c.typeCheckConfig(U,t,this.constructor.DefaultType),t.sanitize&&(t.template=H(t.template,t.whiteList,t.sanitizeFn)),t},i._getDelegateConfig=function(){var t={};if(this.config)for(var e in this.config)this.constructor.Default[e]!==this.config[e]&&(t[e]=this.config[e]);return t},i._cleanTipClass=function(){var t=e(this.getTipElement()),n=t.attr("class").match(W);null!==n&&n.length&&t.removeClass(n.join(""))},i._handlePopperPlacementChange=function(t){this.tip=t.instance.popper,this._cleanTipClass(),this.addAttachmentClass(this._getAttachment(t.placement))},i._fixTransition=function(){var t=this.getTipElement(),n=this.config.animation;null===t.getAttribute("x-placement")&&(e(t).removeClass("fade"),this.config.animation=!1,this.hide(),this.show(),this.config.animation=n)},t._jQueryInterface=function(n){return this.each((function(){var i=e(this).data("bs.tooltip"),o="object"==typeof n&&n;if((i||!/dispose|hide/.test(n))&&(i||(i=new t(this,o),e(this).data("bs.tooltip",i)),"string"==typeof n)){if("undefined"==typeof i[n])throw new TypeError('No method named "'+n+'"');i[n]()}}))},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"Default",get:function(){return X}},{key:"NAME",get:function(){return U}},{key:"DATA_KEY",get:function(){return"bs.tooltip"}},{key:"Event",get:function(){return Y}},{key:"EVENT_KEY",get:function(){return".bs.tooltip"}},{key:"DefaultType",get:function(){return z}}]),t}();e.fn[U]=$._jQueryInterface,e.fn[U].Constructor=$,e.fn[U].noConflict=function(){return e.fn[U]=M,$._jQueryInterface};var J="popover",G=e.fn[J],Z=new 
RegExp("(^|\\s)bs-popover\\S+","g"),tt=a(a({},$.Default),{},{placement:"right",trigger:"click",content:"",template:''}),et=a(a({},$.DefaultType),{},{content:"(string|element|function)"}),nt={HIDE:"hide.bs.popover",HIDDEN:"hidden.bs.popover",SHOW:"show.bs.popover",SHOWN:"shown.bs.popover",INSERTED:"inserted.bs.popover",CLICK:"click.bs.popover",FOCUSIN:"focusin.bs.popover",FOCUSOUT:"focusout.bs.popover",MOUSEENTER:"mouseenter.bs.popover",MOUSELEAVE:"mouseleave.bs.popover"},it=function(t){var n,i;function s(){return t.apply(this,arguments)||this}i=t,(n=s).prototype=Object.create(i.prototype),n.prototype.constructor=n,n.__proto__=i;var r=s.prototype;return r.isWithContent=function(){return this.getTitle()||this._getContent()},r.addAttachmentClass=function(t){e(this.getTipElement()).addClass("bs-popover-"+t)},r.getTipElement=function(){return this.tip=this.tip||e(this.config.template)[0],this.tip},r.setContent=function(){var t=e(this.getTipElement());this.setElementContent(t.find(".popover-header"),this.getTitle());var n=this._getContent();"function"==typeof n&&(n=n.call(this.element)),this.setElementContent(t.find(".popover-body"),n),t.removeClass("fade show")},r._getContent=function(){return this.element.getAttribute("data-content")||this.config.content},r._cleanTipClass=function(){var t=e(this.getTipElement()),n=t.attr("class").match(Z);null!==n&&n.length>0&&t.removeClass(n.join(""))},s._jQueryInterface=function(t){return this.each((function(){var n=e(this).data("bs.popover"),i="object"==typeof t?t:null;if((n||!/dispose|hide/.test(t))&&(n||(n=new s(this,i),e(this).data("bs.popover",n)),"string"==typeof t)){if("undefined"==typeof n[t])throw new TypeError('No method named "'+t+'"');n[t]()}}))},o(s,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"Default",get:function(){return tt}},{key:"NAME",get:function(){return J}},{key:"DATA_KEY",get:function(){return"bs.popover"}},{key:"Event",get:function(){return nt}},{key:"EVENT_KEY",get:function(){return".bs.popover"}},{key:"DefaultType",get:function(){return et}}]),s}($);e.fn[J]=it._jQueryInterface,e.fn[J].Constructor=it,e.fn[J].noConflict=function(){return e.fn[J]=G,it._jQueryInterface};var ot="scrollspy",st=e.fn[ot],rt={offset:10,method:"auto",target:""},at={offset:"number",method:"string",target:"(string|element)"},lt=function(){function t(t,n){var i=this;this._element=t,this._scrollElement="BODY"===t.tagName?window:t,this._config=this._getConfig(n),this._selector=this._config.target+" .nav-link,"+this._config.target+" .list-group-item,"+this._config.target+" .dropdown-item",this._offsets=[],this._targets=[],this._activeTarget=null,this._scrollHeight=0,e(this._scrollElement).on("scroll.bs.scrollspy",(function(t){return i._process(t)})),this.refresh(),this._process()}var n=t.prototype;return n.refresh=function(){var t=this,n=this._scrollElement===this._scrollElement.window?"offset":"position",i="auto"===this._config.method?n:this._config.method,o="position"===i?this._getScrollTop():0;this._offsets=[],this._targets=[],this._scrollHeight=this._getScrollHeight(),[].slice.call(document.querySelectorAll(this._selector)).map((function(t){var n,s=c.getSelectorFromElement(t);if(s&&(n=document.querySelector(s)),n){var r=n.getBoundingClientRect();if(r.width||r.height)return[e(n)[i]().top+o,s]}return null})).filter((function(t){return t})).sort((function(t,e){return 
t[0]-e[0]})).forEach((function(e){t._offsets.push(e[0]),t._targets.push(e[1])}))},n.dispose=function(){e.removeData(this._element,"bs.scrollspy"),e(this._scrollElement).off(".bs.scrollspy"),this._element=null,this._scrollElement=null,this._config=null,this._selector=null,this._offsets=null,this._targets=null,this._activeTarget=null,this._scrollHeight=null},n._getConfig=function(t){if("string"!=typeof(t=a(a({},rt),"object"==typeof t&&t?t:{})).target&&c.isElement(t.target)){var n=e(t.target).attr("id");n||(n=c.getUID(ot),e(t.target).attr("id",n)),t.target="#"+n}return c.typeCheckConfig(ot,t,at),t},n._getScrollTop=function(){return this._scrollElement===window?this._scrollElement.pageYOffset:this._scrollElement.scrollTop},n._getScrollHeight=function(){return this._scrollElement.scrollHeight||Math.max(document.body.scrollHeight,document.documentElement.scrollHeight)},n._getOffsetHeight=function(){return this._scrollElement===window?window.innerHeight:this._scrollElement.getBoundingClientRect().height},n._process=function(){var t=this._getScrollTop()+this._config.offset,e=this._getScrollHeight(),n=this._config.offset+e-this._getOffsetHeight();if(this._scrollHeight!==e&&this.refresh(),t>=n){var i=this._targets[this._targets.length-1];this._activeTarget!==i&&this._activate(i)}else{if(this._activeTarget&&t0)return this._activeTarget=null,void this._clear();for(var o=this._offsets.length;o--;){this._activeTarget!==this._targets[o]&&t>=this._offsets[o]&&("undefined"==typeof this._offsets[o+1]||t li > .active":".active";i=(i=e.makeArray(e(o).find(r)))[i.length-1]}var a=e.Event("hide.bs.tab",{relatedTarget:this._element}),l=e.Event("show.bs.tab",{relatedTarget:i});if(i&&e(i).trigger(a),e(this._element).trigger(l),!l.isDefaultPrevented()&&!a.isDefaultPrevented()){s&&(n=document.querySelector(s)),this._activate(this._element,o);var h=function(){var n=e.Event("hidden.bs.tab",{relatedTarget:t._element}),o=e.Event("shown.bs.tab",{relatedTarget:i});e(i).trigger(n),e(t._element).trigger(o)};n?this._activate(n,n.parentNode,h):h()}}},n.dispose=function(){e.removeData(this._element,"bs.tab"),this._element=null},n._activate=function(t,n,i){var o=this,s=(!n||"UL"!==n.nodeName&&"OL"!==n.nodeName?e(n).children(".active"):e(n).find("> li > .active"))[0],r=i&&s&&e(s).hasClass("fade"),a=function(){return o._transitionComplete(t,s,i)};if(s&&r){var l=c.getTransitionDurationFromElement(s);e(s).removeClass("show").one(c.TRANSITION_END,a).emulateTransitionEnd(l)}else a()},n._transitionComplete=function(t,n,i){if(n){e(n).removeClass("active");var o=e(n.parentNode).find("> .dropdown-menu .active")[0];o&&e(o).removeClass("active"),"tab"===n.getAttribute("role")&&n.setAttribute("aria-selected",!1)}if(e(t).addClass("active"),"tab"===t.getAttribute("role")&&t.setAttribute("aria-selected",!0),c.reflow(t),t.classList.contains("fade")&&t.classList.add("show"),t.parentNode&&e(t.parentNode).hasClass("dropdown-menu")){var s=e(t).closest(".dropdown")[0];if(s){var r=[].slice.call(s.querySelectorAll(".dropdown-toggle"));e(r).addClass("active")}t.setAttribute("aria-expanded",!0)}i&&i()},t._jQueryInterface=function(n){return this.each((function(){var i=e(this),o=i.data("bs.tab");if(o||(o=new t(this),i.data("bs.tab",o)),"string"==typeof n){if("undefined"==typeof o[n])throw new TypeError('No method named "'+n+'"');o[n]()}}))},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}}]),t}();e(document).on("click.bs.tab.data-api",'[data-toggle="tab"], [data-toggle="pill"], 
[data-toggle="list"]',(function(t){t.preventDefault(),ht._jQueryInterface.call(e(this),"show")})),e.fn.tab=ht._jQueryInterface,e.fn.tab.Constructor=ht,e.fn.tab.noConflict=function(){return e.fn.tab=ct,ht._jQueryInterface};var ut=e.fn.toast,dt={animation:"boolean",autohide:"boolean",delay:"number"},ft={animation:!0,autohide:!0,delay:500},gt=function(){function t(t,e){this._element=t,this._config=this._getConfig(e),this._timeout=null,this._setListeners()}var n=t.prototype;return n.show=function(){var t=this,n=e.Event("show.bs.toast");if(e(this._element).trigger(n),!n.isDefaultPrevented()){this._config.animation&&this._element.classList.add("fade");var i=function(){t._element.classList.remove("showing"),t._element.classList.add("show"),e(t._element).trigger("shown.bs.toast"),t._config.autohide&&(t._timeout=setTimeout((function(){t.hide()}),t._config.delay))};if(this._element.classList.remove("hide"),c.reflow(this._element),this._element.classList.add("showing"),this._config.animation){var o=c.getTransitionDurationFromElement(this._element);e(this._element).one(c.TRANSITION_END,i).emulateTransitionEnd(o)}else i()}},n.hide=function(){if(this._element.classList.contains("show")){var t=e.Event("hide.bs.toast");e(this._element).trigger(t),t.isDefaultPrevented()||this._close()}},n.dispose=function(){clearTimeout(this._timeout),this._timeout=null,this._element.classList.contains("show")&&this._element.classList.remove("show"),e(this._element).off("click.dismiss.bs.toast"),e.removeData(this._element,"bs.toast"),this._element=null,this._config=null},n._getConfig=function(t){return t=a(a(a({},ft),e(this._element).data()),"object"==typeof t&&t?t:{}),c.typeCheckConfig("toast",t,this.constructor.DefaultType),t},n._setListeners=function(){var t=this;e(this._element).on("click.dismiss.bs.toast",'[data-dismiss="toast"]',(function(){return t.hide()}))},n._close=function(){var t=this,n=function(){t._element.classList.add("hide"),e(t._element).trigger("hidden.bs.toast")};if(this._element.classList.remove("show"),this._config.animation){var i=c.getTransitionDurationFromElement(this._element);e(this._element).one(c.TRANSITION_END,n).emulateTransitionEnd(i)}else n()},t._jQueryInterface=function(n){return this.each((function(){var i=e(this),o=i.data("bs.toast");if(o||(o=new t(this,"object"==typeof n&&n),i.data("bs.toast",o)),"string"==typeof n){if("undefined"==typeof o[n])throw new TypeError('No method named "'+n+'"');o[n](this)}}))},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"DefaultType",get:function(){return dt}},{key:"Default",get:function(){return ft}}]),t}();e.fn.toast=gt._jQueryInterface,e.fn.toast.Constructor=gt,e.fn.toast.noConflict=function(){return e.fn.toast=ut,gt._jQueryInterface},t.Alert=d,t.Button=g,t.Carousel=E,t.Collapse=D,t.Dropdown=j,t.Modal=R,t.Popover=it,t.Scrollspy=lt,t.Tab=ht,t.Toast=gt,t.Tooltip=$,t.Util=c,Object.defineProperty(t,"__esModule",{value:!0})})); +//# sourceMappingURL=bootstrap.min.js.map \ No newline at end of file diff --git a/javascripts/bootstrap.min.js b/javascripts/bootstrap.min.js new file mode 100644 index 00000000..3ecf55f2 --- /dev/null +++ b/javascripts/bootstrap.min.js @@ -0,0 +1,7 @@ +/*! 
+ * Bootstrap v4.5.0 (https://getbootstrap.com/) + * Copyright 2011-2020 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors) + * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) + */ +!function(t,e){"object"==typeof exports&&"undefined"!=typeof module?e(exports,require("jquery"),require("popper.js")):"function"==typeof define&&define.amd?define(["exports","jquery","popper.js"],e):e((t=t||self).bootstrap={},t.jQuery,t.Popper)}(this,(function(t,e,n){"use strict";function i(t,e){for(var n=0;n=4)throw new Error("Bootstrap's JavaScript requires at least jQuery v1.9.1 but less than v4.0.0")}};c.jQueryDetection(),e.fn.emulateTransitionEnd=l,e.event.special[c.TRANSITION_END]={bindType:"transitionend",delegateType:"transitionend",handle:function(t){if(e(t.target).is(this))return t.handleObj.handler.apply(this,arguments)}};var h="alert",u=e.fn[h],d=function(){function t(t){this._element=t}var n=t.prototype;return n.close=function(t){var e=this._element;t&&(e=this._getRootElement(t)),this._triggerCloseEvent(e).isDefaultPrevented()||this._removeElement(e)},n.dispose=function(){e.removeData(this._element,"bs.alert"),this._element=null},n._getRootElement=function(t){var n=c.getSelectorFromElement(t),i=!1;return n&&(i=document.querySelector(n)),i||(i=e(t).closest(".alert")[0]),i},n._triggerCloseEvent=function(t){var n=e.Event("close.bs.alert");return e(t).trigger(n),n},n._removeElement=function(t){var n=this;if(e(t).removeClass("show"),e(t).hasClass("fade")){var i=c.getTransitionDurationFromElement(t);e(t).one(c.TRANSITION_END,(function(e){return n._destroyElement(t,e)})).emulateTransitionEnd(i)}else this._destroyElement(t)},n._destroyElement=function(t){e(t).detach().trigger("closed.bs.alert").remove()},t._jQueryInterface=function(n){return this.each((function(){var i=e(this),o=i.data("bs.alert");o||(o=new t(this),i.data("bs.alert",o)),"close"===n&&o[n](this)}))},t._handleDismiss=function(t){return function(e){e&&e.preventDefault(),t.close(this)}},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}}]),t}();e(document).on("click.bs.alert.data-api",'[data-dismiss="alert"]',d._handleDismiss(new d)),e.fn[h]=d._jQueryInterface,e.fn[h].Constructor=d,e.fn[h].noConflict=function(){return e.fn[h]=u,d._jQueryInterface};var f=e.fn.button,g=function(){function t(t){this._element=t}var n=t.prototype;return n.toggle=function(){var t=!0,n=!0,i=e(this._element).closest('[data-toggle="buttons"]')[0];if(i){var o=this._element.querySelector('input:not([type="hidden"])');if(o){if("radio"===o.type)if(o.checked&&this._element.classList.contains("active"))t=!1;else{var s=i.querySelector(".active");s&&e(s).removeClass("active")}t&&("checkbox"!==o.type&&"radio"!==o.type||(o.checked=!this._element.classList.contains("active")),e(o).trigger("change")),o.focus(),n=!1}}this._element.hasAttribute("disabled")||this._element.classList.contains("disabled")||(n&&this._element.setAttribute("aria-pressed",!this._element.classList.contains("active")),t&&e(this._element).toggleClass("active"))},n.dispose=function(){e.removeData(this._element,"bs.button"),this._element=null},t._jQueryInterface=function(n){return this.each((function(){var i=e(this).data("bs.button");i||(i=new t(this),e(this).data("bs.button",i)),"toggle"===n&&i[n]()}))},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}}]),t}();e(document).on("click.bs.button.data-api",'[data-toggle^="button"]',(function(t){var 
n=t.target,i=n;if(e(n).hasClass("btn")||(n=e(n).closest(".btn")[0]),!n||n.hasAttribute("disabled")||n.classList.contains("disabled"))t.preventDefault();else{var o=n.querySelector('input:not([type="hidden"])');if(o&&(o.hasAttribute("disabled")||o.classList.contains("disabled")))return void t.preventDefault();"LABEL"===i.tagName&&o&&"checkbox"===o.type&&t.preventDefault(),g._jQueryInterface.call(e(n),"toggle")}})).on("focus.bs.button.data-api blur.bs.button.data-api",'[data-toggle^="button"]',(function(t){var n=e(t.target).closest(".btn")[0];e(n).toggleClass("focus",/^focus(in)?$/.test(t.type))})),e(window).on("load.bs.button.data-api",(function(){for(var t=[].slice.call(document.querySelectorAll('[data-toggle="buttons"] .btn')),e=0,n=t.length;e0,this._pointerEvent=Boolean(window.PointerEvent||window.MSPointerEvent),this._addEventListeners()}var n=t.prototype;return n.next=function(){this._isSliding||this._slide("next")},n.nextWhenVisible=function(){!document.hidden&&e(this._element).is(":visible")&&"hidden"!==e(this._element).css("visibility")&&this.next()},n.prev=function(){this._isSliding||this._slide("prev")},n.pause=function(t){t||(this._isPaused=!0),this._element.querySelector(".carousel-item-next, .carousel-item-prev")&&(c.triggerTransitionEnd(this._element),this.cycle(!0)),clearInterval(this._interval),this._interval=null},n.cycle=function(t){t||(this._isPaused=!1),this._interval&&(clearInterval(this._interval),this._interval=null),this._config.interval&&!this._isPaused&&(this._interval=setInterval((document.visibilityState?this.nextWhenVisible:this.next).bind(this),this._config.interval))},n.to=function(t){var n=this;this._activeElement=this._element.querySelector(".active.carousel-item");var i=this._getItemIndex(this._activeElement);if(!(t>this._items.length-1||t<0))if(this._isSliding)e(this._element).one("slid.bs.carousel",(function(){return n.to(t)}));else{if(i===t)return this.pause(),void this.cycle();var o=t>i?"next":"prev";this._slide(o,this._items[t])}},n.dispose=function(){e(this._element).off(p),e.removeData(this._element,"bs.carousel"),this._items=null,this._config=null,this._element=null,this._interval=null,this._isPaused=null,this._isSliding=null,this._activeElement=null,this._indicatorsElement=null},n._getConfig=function(t){return t=a(a({},v),t),c.typeCheckConfig(m,t,b),t},n._handleSwipe=function(){var t=Math.abs(this.touchDeltaX);if(!(t<=40)){var e=t/this.touchDeltaX;this.touchDeltaX=0,e>0&&this.prev(),e<0&&this.next()}},n._addEventListeners=function(){var t=this;this._config.keyboard&&e(this._element).on("keydown.bs.carousel",(function(e){return t._keydown(e)})),"hover"===this._config.pause&&e(this._element).on("mouseenter.bs.carousel",(function(e){return t.pause(e)})).on("mouseleave.bs.carousel",(function(e){return t.cycle(e)})),this._config.touch&&this._addTouchEventListeners()},n._addTouchEventListeners=function(){var t=this;if(this._touchSupported){var n=function(e){t._pointerEvent&&y[e.originalEvent.pointerType.toUpperCase()]?t.touchStartX=e.originalEvent.clientX:t._pointerEvent||(t.touchStartX=e.originalEvent.touches[0].clientX)},i=function(e){t._pointerEvent&&y[e.originalEvent.pointerType.toUpperCase()]&&(t.touchDeltaX=e.originalEvent.clientX-t.touchStartX),t._handleSwipe(),"hover"===t._config.pause&&(t.pause(),t.touchTimeout&&clearTimeout(t.touchTimeout),t.touchTimeout=setTimeout((function(e){return t.cycle(e)}),500+t._config.interval))};e(this._element.querySelectorAll(".carousel-item img")).on("dragstart.bs.carousel",(function(t){return 
t.preventDefault()})),this._pointerEvent?(e(this._element).on("pointerdown.bs.carousel",(function(t){return n(t)})),e(this._element).on("pointerup.bs.carousel",(function(t){return i(t)})),this._element.classList.add("pointer-event")):(e(this._element).on("touchstart.bs.carousel",(function(t){return n(t)})),e(this._element).on("touchmove.bs.carousel",(function(e){return function(e){e.originalEvent.touches&&e.originalEvent.touches.length>1?t.touchDeltaX=0:t.touchDeltaX=e.originalEvent.touches[0].clientX-t.touchStartX}(e)})),e(this._element).on("touchend.bs.carousel",(function(t){return i(t)})))}},n._keydown=function(t){if(!/input|textarea/i.test(t.target.tagName))switch(t.which){case 37:t.preventDefault(),this.prev();break;case 39:t.preventDefault(),this.next()}},n._getItemIndex=function(t){return this._items=t&&t.parentNode?[].slice.call(t.parentNode.querySelectorAll(".carousel-item")):[],this._items.indexOf(t)},n._getItemByDirection=function(t,e){var n="next"===t,i="prev"===t,o=this._getItemIndex(e),s=this._items.length-1;if((i&&0===o||n&&o===s)&&!this._config.wrap)return e;var r=(o+("prev"===t?-1:1))%this._items.length;return-1===r?this._items[this._items.length-1]:this._items[r]},n._triggerSlideEvent=function(t,n){var i=this._getItemIndex(t),o=this._getItemIndex(this._element.querySelector(".active.carousel-item")),s=e.Event("slide.bs.carousel",{relatedTarget:t,direction:n,from:o,to:i});return e(this._element).trigger(s),s},n._setActiveIndicatorElement=function(t){if(this._indicatorsElement){var n=[].slice.call(this._indicatorsElement.querySelectorAll(".active"));e(n).removeClass("active");var i=this._indicatorsElement.children[this._getItemIndex(t)];i&&e(i).addClass("active")}},n._slide=function(t,n){var i,o,s,r=this,a=this._element.querySelector(".active.carousel-item"),l=this._getItemIndex(a),h=n||a&&this._getItemByDirection(t,a),u=this._getItemIndex(h),d=Boolean(this._interval);if("next"===t?(i="carousel-item-left",o="carousel-item-next",s="left"):(i="carousel-item-right",o="carousel-item-prev",s="right"),h&&e(h).hasClass("active"))this._isSliding=!1;else if(!this._triggerSlideEvent(h,s).isDefaultPrevented()&&a&&h){this._isSliding=!0,d&&this.pause(),this._setActiveIndicatorElement(h);var f=e.Event("slid.bs.carousel",{relatedTarget:h,direction:s,from:l,to:u});if(e(this._element).hasClass("slide")){e(h).addClass(o),c.reflow(h),e(a).addClass(i),e(h).addClass(i);var g=parseInt(h.getAttribute("data-interval"),10);g?(this._config.defaultInterval=this._config.defaultInterval||this._config.interval,this._config.interval=g):this._config.interval=this._config.defaultInterval||this._config.interval;var m=c.getTransitionDurationFromElement(a);e(a).one(c.TRANSITION_END,(function(){e(h).removeClass(i+" "+o).addClass("active"),e(a).removeClass("active "+o+" "+i),r._isSliding=!1,setTimeout((function(){return e(r._element).trigger(f)}),0)})).emulateTransitionEnd(m)}else e(a).removeClass("active"),e(h).addClass("active"),this._isSliding=!1,e(this._element).trigger(f);d&&this.cycle()}},t._jQueryInterface=function(n){return this.each((function(){var i=e(this).data("bs.carousel"),o=a(a({},v),e(this).data());"object"==typeof n&&(o=a(a({},o),n));var s="string"==typeof n?n:o.slide;if(i||(i=new t(this,o),e(this).data("bs.carousel",i)),"number"==typeof n)i.to(n);else if("string"==typeof s){if("undefined"==typeof i[s])throw new TypeError('No method named "'+s+'"');i[s]()}else o.interval&&o.ride&&(i.pause(),i.cycle())}))},t._dataApiClickHandler=function(n){var i=c.getSelectorFromElement(this);if(i){var 
o=e(i)[0];if(o&&e(o).hasClass("carousel")){var s=a(a({},e(o).data()),e(this).data()),r=this.getAttribute("data-slide-to");r&&(s.interval=!1),t._jQueryInterface.call(e(o),s),r&&e(o).data("bs.carousel").to(r),n.preventDefault()}}},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"Default",get:function(){return v}}]),t}();e(document).on("click.bs.carousel.data-api","[data-slide], [data-slide-to]",E._dataApiClickHandler),e(window).on("load.bs.carousel.data-api",(function(){for(var t=[].slice.call(document.querySelectorAll('[data-ride="carousel"]')),n=0,i=t.length;n0&&(this._selector=r,this._triggerArray.push(s))}this._parent=this._config.parent?this._getParent():null,this._config.parent||this._addAriaAndCollapsedClass(this._element,this._triggerArray),this._config.toggle&&this.toggle()}var n=t.prototype;return n.toggle=function(){e(this._element).hasClass("show")?this.hide():this.show()},n.show=function(){var n,i,o=this;if(!this._isTransitioning&&!e(this._element).hasClass("show")&&(this._parent&&0===(n=[].slice.call(this._parent.querySelectorAll(".show, .collapsing")).filter((function(t){return"string"==typeof o._config.parent?t.getAttribute("data-parent")===o._config.parent:t.classList.contains("collapse")}))).length&&(n=null),!(n&&(i=e(n).not(this._selector).data("bs.collapse"))&&i._isTransitioning))){var s=e.Event("show.bs.collapse");if(e(this._element).trigger(s),!s.isDefaultPrevented()){n&&(t._jQueryInterface.call(e(n).not(this._selector),"hide"),i||e(n).data("bs.collapse",null));var r=this._getDimension();e(this._element).removeClass("collapse").addClass("collapsing"),this._element.style[r]=0,this._triggerArray.length&&e(this._triggerArray).removeClass("collapsed").attr("aria-expanded",!0),this.setTransitioning(!0);var a="scroll"+(r[0].toUpperCase()+r.slice(1)),l=c.getTransitionDurationFromElement(this._element);e(this._element).one(c.TRANSITION_END,(function(){e(o._element).removeClass("collapsing").addClass("collapse show"),o._element.style[r]="",o.setTransitioning(!1),e(o._element).trigger("shown.bs.collapse")})).emulateTransitionEnd(l),this._element.style[r]=this._element[a]+"px"}}},n.hide=function(){var t=this;if(!this._isTransitioning&&e(this._element).hasClass("show")){var n=e.Event("hide.bs.collapse");if(e(this._element).trigger(n),!n.isDefaultPrevented()){var i=this._getDimension();this._element.style[i]=this._element.getBoundingClientRect()[i]+"px",c.reflow(this._element),e(this._element).addClass("collapsing").removeClass("collapse show");var o=this._triggerArray.length;if(o>0)for(var s=0;s0},i._getOffset=function(){var t=this,e={};return"function"==typeof this._config.offset?e.fn=function(e){return e.offsets=a(a({},e.offsets),t._config.offset(e.offsets,t._element)||{}),e}:e.offset=this._config.offset,e},i._getPopperConfig=function(){var t={placement:this._getPlacement(),modifiers:{offset:this._getOffset(),flip:{enabled:this._config.flip},preventOverflow:{boundariesElement:this._config.boundary}}};return"static"===this._config.display&&(t.modifiers.applyStyle={enabled:!1}),a(a({},t),this._config.popperConfig)},t._jQueryInterface=function(n){return this.each((function(){var i=e(this).data("bs.dropdown");if(i||(i=new t(this,"object"==typeof n?n:null),e(this).data("bs.dropdown",i)),"string"==typeof n){if("undefined"==typeof i[n])throw new TypeError('No method named "'+n+'"');i[n]()}}))},t._clearMenus=function(n){if(!n||3!==n.which&&("keyup"!==n.type||9===n.which))for(var 
i=[].slice.call(document.querySelectorAll('[data-toggle="dropdown"]')),o=0,s=i.length;o0&&r--,40===n.which&&rdocument.documentElement.clientHeight;!this._isBodyOverflowing&&t&&(this._element.style.paddingLeft=this._scrollbarWidth+"px"),this._isBodyOverflowing&&!t&&(this._element.style.paddingRight=this._scrollbarWidth+"px")},n._resetAdjustments=function(){this._element.style.paddingLeft="",this._element.style.paddingRight=""},n._checkScrollbar=function(){var t=document.body.getBoundingClientRect();this._isBodyOverflowing=Math.round(t.left+t.right)
    ',trigger:"hover focus",title:"",delay:0,html:!1,selector:!1,placement:"top",offset:0,container:!1,fallbackPlacement:"flip",boundary:"scrollParent",sanitize:!0,sanitizeFn:null,whiteList:F,popperConfig:null},Y={HIDE:"hide.bs.tooltip",HIDDEN:"hidden.bs.tooltip",SHOW:"show.bs.tooltip",SHOWN:"shown.bs.tooltip",INSERTED:"inserted.bs.tooltip",CLICK:"click.bs.tooltip",FOCUSIN:"focusin.bs.tooltip",FOCUSOUT:"focusout.bs.tooltip",MOUSEENTER:"mouseenter.bs.tooltip",MOUSELEAVE:"mouseleave.bs.tooltip"},$=function(){function t(t,e){if("undefined"==typeof n)throw new TypeError("Bootstrap's tooltips require Popper.js (https://popper.js.org/)");this._isEnabled=!0,this._timeout=0,this._hoverState="",this._activeTrigger={},this._popper=null,this.element=t,this.config=this._getConfig(e),this.tip=null,this._setListeners()}var i=t.prototype;return i.enable=function(){this._isEnabled=!0},i.disable=function(){this._isEnabled=!1},i.toggleEnabled=function(){this._isEnabled=!this._isEnabled},i.toggle=function(t){if(this._isEnabled)if(t){var n=this.constructor.DATA_KEY,i=e(t.currentTarget).data(n);i||(i=new this.constructor(t.currentTarget,this._getDelegateConfig()),e(t.currentTarget).data(n,i)),i._activeTrigger.click=!i._activeTrigger.click,i._isWithActiveTrigger()?i._enter(null,i):i._leave(null,i)}else{if(e(this.getTipElement()).hasClass("show"))return void this._leave(null,this);this._enter(null,this)}},i.dispose=function(){clearTimeout(this._timeout),e.removeData(this.element,this.constructor.DATA_KEY),e(this.element).off(this.constructor.EVENT_KEY),e(this.element).closest(".modal").off("hide.bs.modal",this._hideModalHandler),this.tip&&e(this.tip).remove(),this._isEnabled=null,this._timeout=null,this._hoverState=null,this._activeTrigger=null,this._popper&&this._popper.destroy(),this._popper=null,this.element=null,this.config=null,this.tip=null},i.show=function(){var t=this;if("none"===e(this.element).css("display"))throw new Error("Please use show on visible elements");var i=e.Event(this.constructor.Event.SHOW);if(this.isWithContent()&&this._isEnabled){e(this.element).trigger(i);var o=c.findShadowRoot(this.element),s=e.contains(null!==o?o:this.element.ownerDocument.documentElement,this.element);if(i.isDefaultPrevented()||!s)return;var r=this.getTipElement(),a=c.getUID(this.constructor.NAME);r.setAttribute("id",a),this.element.setAttribute("aria-describedby",a),this.setContent(),this.config.animation&&e(r).addClass("fade");var l="function"==typeof this.config.placement?this.config.placement.call(this,r,this.element):this.config.placement,h=this._getAttachment(l);this.addAttachmentClass(h);var u=this._getContainer();e(r).data(this.constructor.DATA_KEY,this),e.contains(this.element.ownerDocument.documentElement,this.tip)||e(r).appendTo(u),e(this.element).trigger(this.constructor.Event.INSERTED),this._popper=new n(this.element,r,this._getPopperConfig(h)),e(r).addClass("show"),"ontouchstart"in document.documentElement&&e(document.body).children().on("mouseover",null,e.noop);var d=function(){t.config.animation&&t._fixTransition();var n=t._hoverState;t._hoverState=null,e(t.element).trigger(t.constructor.Event.SHOWN),"out"===n&&t._leave(null,t)};if(e(this.tip).hasClass("fade")){var f=c.getTransitionDurationFromElement(this.tip);e(this.tip).one(c.TRANSITION_END,d).emulateTransitionEnd(f)}else d()}},i.hide=function(t){var 
n=this,i=this.getTipElement(),o=e.Event(this.constructor.Event.HIDE),s=function(){"show"!==n._hoverState&&i.parentNode&&i.parentNode.removeChild(i),n._cleanTipClass(),n.element.removeAttribute("aria-describedby"),e(n.element).trigger(n.constructor.Event.HIDDEN),null!==n._popper&&n._popper.destroy(),t&&t()};if(e(this.element).trigger(o),!o.isDefaultPrevented()){if(e(i).removeClass("show"),"ontouchstart"in document.documentElement&&e(document.body).children().off("mouseover",null,e.noop),this._activeTrigger.click=!1,this._activeTrigger.focus=!1,this._activeTrigger.hover=!1,e(this.tip).hasClass("fade")){var r=c.getTransitionDurationFromElement(i);e(i).one(c.TRANSITION_END,s).emulateTransitionEnd(r)}else s();this._hoverState=""}},i.update=function(){null!==this._popper&&this._popper.scheduleUpdate()},i.isWithContent=function(){return Boolean(this.getTitle())},i.addAttachmentClass=function(t){e(this.getTipElement()).addClass("bs-tooltip-"+t)},i.getTipElement=function(){return this.tip=this.tip||e(this.config.template)[0],this.tip},i.setContent=function(){var t=this.getTipElement();this.setElementContent(e(t.querySelectorAll(".tooltip-inner")),this.getTitle()),e(t).removeClass("fade show")},i.setElementContent=function(t,n){"object"!=typeof n||!n.nodeType&&!n.jquery?this.config.html?(this.config.sanitize&&(n=H(n,this.config.whiteList,this.config.sanitizeFn)),t.html(n)):t.text(n):this.config.html?e(n).parent().is(t)||t.empty().append(n):t.text(e(n).text())},i.getTitle=function(){var t=this.element.getAttribute("data-original-title");return t||(t="function"==typeof this.config.title?this.config.title.call(this.element):this.config.title),t},i._getPopperConfig=function(t){var e=this;return a(a({},{placement:t,modifiers:{offset:this._getOffset(),flip:{behavior:this.config.fallbackPlacement},arrow:{element:".arrow"},preventOverflow:{boundariesElement:this.config.boundary}},onCreate:function(t){t.originalPlacement!==t.placement&&e._handlePopperPlacementChange(t)},onUpdate:function(t){return e._handlePopperPlacementChange(t)}}),this.config.popperConfig)},i._getOffset=function(){var t=this,e={};return"function"==typeof this.config.offset?e.fn=function(e){return e.offsets=a(a({},e.offsets),t.config.offset(e.offsets,t.element)||{}),e}:e.offset=this.config.offset,e},i._getContainer=function(){return!1===this.config.container?document.body:c.isElement(this.config.container)?e(this.config.container):e(document).find(this.config.container)},i._getAttachment=function(t){return K[t.toUpperCase()]},i._setListeners=function(){var t=this;this.config.trigger.split(" ").forEach((function(n){if("click"===n)e(t.element).on(t.constructor.Event.CLICK,t.config.selector,(function(e){return t.toggle(e)}));else if("manual"!==n){var i="hover"===n?t.constructor.Event.MOUSEENTER:t.constructor.Event.FOCUSIN,o="hover"===n?t.constructor.Event.MOUSELEAVE:t.constructor.Event.FOCUSOUT;e(t.element).on(i,t.config.selector,(function(e){return t._enter(e)})).on(o,t.config.selector,(function(e){return t._leave(e)}))}})),this._hideModalHandler=function(){t.element&&t.hide()},e(this.element).closest(".modal").on("hide.bs.modal",this._hideModalHandler),this.config.selector?this.config=a(a({},this.config),{},{trigger:"manual",selector:""}):this._fixTitle()},i._fixTitle=function(){var t=typeof 
this.element.getAttribute("data-original-title");(this.element.getAttribute("title")||"string"!==t)&&(this.element.setAttribute("data-original-title",this.element.getAttribute("title")||""),this.element.setAttribute("title",""))},i._enter=function(t,n){var i=this.constructor.DATA_KEY;(n=n||e(t.currentTarget).data(i))||(n=new this.constructor(t.currentTarget,this._getDelegateConfig()),e(t.currentTarget).data(i,n)),t&&(n._activeTrigger["focusin"===t.type?"focus":"hover"]=!0),e(n.getTipElement()).hasClass("show")||"show"===n._hoverState?n._hoverState="show":(clearTimeout(n._timeout),n._hoverState="show",n.config.delay&&n.config.delay.show?n._timeout=setTimeout((function(){"show"===n._hoverState&&n.show()}),n.config.delay.show):n.show())},i._leave=function(t,n){var i=this.constructor.DATA_KEY;(n=n||e(t.currentTarget).data(i))||(n=new this.constructor(t.currentTarget,this._getDelegateConfig()),e(t.currentTarget).data(i,n)),t&&(n._activeTrigger["focusout"===t.type?"focus":"hover"]=!1),n._isWithActiveTrigger()||(clearTimeout(n._timeout),n._hoverState="out",n.config.delay&&n.config.delay.hide?n._timeout=setTimeout((function(){"out"===n._hoverState&&n.hide()}),n.config.delay.hide):n.hide())},i._isWithActiveTrigger=function(){for(var t in this._activeTrigger)if(this._activeTrigger[t])return!0;return!1},i._getConfig=function(t){var n=e(this.element).data();return Object.keys(n).forEach((function(t){-1!==V.indexOf(t)&&delete n[t]})),"number"==typeof(t=a(a(a({},this.constructor.Default),n),"object"==typeof t&&t?t:{})).delay&&(t.delay={show:t.delay,hide:t.delay}),"number"==typeof t.title&&(t.title=t.title.toString()),"number"==typeof t.content&&(t.content=t.content.toString()),c.typeCheckConfig(U,t,this.constructor.DefaultType),t.sanitize&&(t.template=H(t.template,t.whiteList,t.sanitizeFn)),t},i._getDelegateConfig=function(){var t={};if(this.config)for(var e in this.config)this.constructor.Default[e]!==this.config[e]&&(t[e]=this.config[e]);return t},i._cleanTipClass=function(){var t=e(this.getTipElement()),n=t.attr("class").match(W);null!==n&&n.length&&t.removeClass(n.join(""))},i._handlePopperPlacementChange=function(t){this.tip=t.instance.popper,this._cleanTipClass(),this.addAttachmentClass(this._getAttachment(t.placement))},i._fixTransition=function(){var t=this.getTipElement(),n=this.config.animation;null===t.getAttribute("x-placement")&&(e(t).removeClass("fade"),this.config.animation=!1,this.hide(),this.show(),this.config.animation=n)},t._jQueryInterface=function(n){return this.each((function(){var i=e(this).data("bs.tooltip"),o="object"==typeof n&&n;if((i||!/dispose|hide/.test(n))&&(i||(i=new t(this,o),e(this).data("bs.tooltip",i)),"string"==typeof n)){if("undefined"==typeof i[n])throw new TypeError('No method named "'+n+'"');i[n]()}}))},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"Default",get:function(){return X}},{key:"NAME",get:function(){return U}},{key:"DATA_KEY",get:function(){return"bs.tooltip"}},{key:"Event",get:function(){return Y}},{key:"EVENT_KEY",get:function(){return".bs.tooltip"}},{key:"DefaultType",get:function(){return z}}]),t}();e.fn[U]=$._jQueryInterface,e.fn[U].Constructor=$,e.fn[U].noConflict=function(){return e.fn[U]=M,$._jQueryInterface};var J="popover",G=e.fn[J],Z=new 
RegExp("(^|\\s)bs-popover\\S+","g"),tt=a(a({},$.Default),{},{placement:"right",trigger:"click",content:"",template:''}),et=a(a({},$.DefaultType),{},{content:"(string|element|function)"}),nt={HIDE:"hide.bs.popover",HIDDEN:"hidden.bs.popover",SHOW:"show.bs.popover",SHOWN:"shown.bs.popover",INSERTED:"inserted.bs.popover",CLICK:"click.bs.popover",FOCUSIN:"focusin.bs.popover",FOCUSOUT:"focusout.bs.popover",MOUSEENTER:"mouseenter.bs.popover",MOUSELEAVE:"mouseleave.bs.popover"},it=function(t){var n,i;function s(){return t.apply(this,arguments)||this}i=t,(n=s).prototype=Object.create(i.prototype),n.prototype.constructor=n,n.__proto__=i;var r=s.prototype;return r.isWithContent=function(){return this.getTitle()||this._getContent()},r.addAttachmentClass=function(t){e(this.getTipElement()).addClass("bs-popover-"+t)},r.getTipElement=function(){return this.tip=this.tip||e(this.config.template)[0],this.tip},r.setContent=function(){var t=e(this.getTipElement());this.setElementContent(t.find(".popover-header"),this.getTitle());var n=this._getContent();"function"==typeof n&&(n=n.call(this.element)),this.setElementContent(t.find(".popover-body"),n),t.removeClass("fade show")},r._getContent=function(){return this.element.getAttribute("data-content")||this.config.content},r._cleanTipClass=function(){var t=e(this.getTipElement()),n=t.attr("class").match(Z);null!==n&&n.length>0&&t.removeClass(n.join(""))},s._jQueryInterface=function(t){return this.each((function(){var n=e(this).data("bs.popover"),i="object"==typeof t?t:null;if((n||!/dispose|hide/.test(t))&&(n||(n=new s(this,i),e(this).data("bs.popover",n)),"string"==typeof t)){if("undefined"==typeof n[t])throw new TypeError('No method named "'+t+'"');n[t]()}}))},o(s,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"Default",get:function(){return tt}},{key:"NAME",get:function(){return J}},{key:"DATA_KEY",get:function(){return"bs.popover"}},{key:"Event",get:function(){return nt}},{key:"EVENT_KEY",get:function(){return".bs.popover"}},{key:"DefaultType",get:function(){return et}}]),s}($);e.fn[J]=it._jQueryInterface,e.fn[J].Constructor=it,e.fn[J].noConflict=function(){return e.fn[J]=G,it._jQueryInterface};var ot="scrollspy",st=e.fn[ot],rt={offset:10,method:"auto",target:""},at={offset:"number",method:"string",target:"(string|element)"},lt=function(){function t(t,n){var i=this;this._element=t,this._scrollElement="BODY"===t.tagName?window:t,this._config=this._getConfig(n),this._selector=this._config.target+" .nav-link,"+this._config.target+" .list-group-item,"+this._config.target+" .dropdown-item",this._offsets=[],this._targets=[],this._activeTarget=null,this._scrollHeight=0,e(this._scrollElement).on("scroll.bs.scrollspy",(function(t){return i._process(t)})),this.refresh(),this._process()}var n=t.prototype;return n.refresh=function(){var t=this,n=this._scrollElement===this._scrollElement.window?"offset":"position",i="auto"===this._config.method?n:this._config.method,o="position"===i?this._getScrollTop():0;this._offsets=[],this._targets=[],this._scrollHeight=this._getScrollHeight(),[].slice.call(document.querySelectorAll(this._selector)).map((function(t){var n,s=c.getSelectorFromElement(t);if(s&&(n=document.querySelector(s)),n){var r=n.getBoundingClientRect();if(r.width||r.height)return[e(n)[i]().top+o,s]}return null})).filter((function(t){return t})).sort((function(t,e){return 
t[0]-e[0]})).forEach((function(e){t._offsets.push(e[0]),t._targets.push(e[1])}))},n.dispose=function(){e.removeData(this._element,"bs.scrollspy"),e(this._scrollElement).off(".bs.scrollspy"),this._element=null,this._scrollElement=null,this._config=null,this._selector=null,this._offsets=null,this._targets=null,this._activeTarget=null,this._scrollHeight=null},n._getConfig=function(t){if("string"!=typeof(t=a(a({},rt),"object"==typeof t&&t?t:{})).target&&c.isElement(t.target)){var n=e(t.target).attr("id");n||(n=c.getUID(ot),e(t.target).attr("id",n)),t.target="#"+n}return c.typeCheckConfig(ot,t,at),t},n._getScrollTop=function(){return this._scrollElement===window?this._scrollElement.pageYOffset:this._scrollElement.scrollTop},n._getScrollHeight=function(){return this._scrollElement.scrollHeight||Math.max(document.body.scrollHeight,document.documentElement.scrollHeight)},n._getOffsetHeight=function(){return this._scrollElement===window?window.innerHeight:this._scrollElement.getBoundingClientRect().height},n._process=function(){var t=this._getScrollTop()+this._config.offset,e=this._getScrollHeight(),n=this._config.offset+e-this._getOffsetHeight();if(this._scrollHeight!==e&&this.refresh(),t>=n){var i=this._targets[this._targets.length-1];this._activeTarget!==i&&this._activate(i)}else{if(this._activeTarget&&t0)return this._activeTarget=null,void this._clear();for(var o=this._offsets.length;o--;){this._activeTarget!==this._targets[o]&&t>=this._offsets[o]&&("undefined"==typeof this._offsets[o+1]||t li > .active":".active";i=(i=e.makeArray(e(o).find(r)))[i.length-1]}var a=e.Event("hide.bs.tab",{relatedTarget:this._element}),l=e.Event("show.bs.tab",{relatedTarget:i});if(i&&e(i).trigger(a),e(this._element).trigger(l),!l.isDefaultPrevented()&&!a.isDefaultPrevented()){s&&(n=document.querySelector(s)),this._activate(this._element,o);var h=function(){var n=e.Event("hidden.bs.tab",{relatedTarget:t._element}),o=e.Event("shown.bs.tab",{relatedTarget:i});e(i).trigger(n),e(t._element).trigger(o)};n?this._activate(n,n.parentNode,h):h()}}},n.dispose=function(){e.removeData(this._element,"bs.tab"),this._element=null},n._activate=function(t,n,i){var o=this,s=(!n||"UL"!==n.nodeName&&"OL"!==n.nodeName?e(n).children(".active"):e(n).find("> li > .active"))[0],r=i&&s&&e(s).hasClass("fade"),a=function(){return o._transitionComplete(t,s,i)};if(s&&r){var l=c.getTransitionDurationFromElement(s);e(s).removeClass("show").one(c.TRANSITION_END,a).emulateTransitionEnd(l)}else a()},n._transitionComplete=function(t,n,i){if(n){e(n).removeClass("active");var o=e(n.parentNode).find("> .dropdown-menu .active")[0];o&&e(o).removeClass("active"),"tab"===n.getAttribute("role")&&n.setAttribute("aria-selected",!1)}if(e(t).addClass("active"),"tab"===t.getAttribute("role")&&t.setAttribute("aria-selected",!0),c.reflow(t),t.classList.contains("fade")&&t.classList.add("show"),t.parentNode&&e(t.parentNode).hasClass("dropdown-menu")){var s=e(t).closest(".dropdown")[0];if(s){var r=[].slice.call(s.querySelectorAll(".dropdown-toggle"));e(r).addClass("active")}t.setAttribute("aria-expanded",!0)}i&&i()},t._jQueryInterface=function(n){return this.each((function(){var i=e(this),o=i.data("bs.tab");if(o||(o=new t(this),i.data("bs.tab",o)),"string"==typeof n){if("undefined"==typeof o[n])throw new TypeError('No method named "'+n+'"');o[n]()}}))},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}}]),t}();e(document).on("click.bs.tab.data-api",'[data-toggle="tab"], [data-toggle="pill"], 
[data-toggle="list"]',(function(t){t.preventDefault(),ht._jQueryInterface.call(e(this),"show")})),e.fn.tab=ht._jQueryInterface,e.fn.tab.Constructor=ht,e.fn.tab.noConflict=function(){return e.fn.tab=ct,ht._jQueryInterface};var ut=e.fn.toast,dt={animation:"boolean",autohide:"boolean",delay:"number"},ft={animation:!0,autohide:!0,delay:500},gt=function(){function t(t,e){this._element=t,this._config=this._getConfig(e),this._timeout=null,this._setListeners()}var n=t.prototype;return n.show=function(){var t=this,n=e.Event("show.bs.toast");if(e(this._element).trigger(n),!n.isDefaultPrevented()){this._config.animation&&this._element.classList.add("fade");var i=function(){t._element.classList.remove("showing"),t._element.classList.add("show"),e(t._element).trigger("shown.bs.toast"),t._config.autohide&&(t._timeout=setTimeout((function(){t.hide()}),t._config.delay))};if(this._element.classList.remove("hide"),c.reflow(this._element),this._element.classList.add("showing"),this._config.animation){var o=c.getTransitionDurationFromElement(this._element);e(this._element).one(c.TRANSITION_END,i).emulateTransitionEnd(o)}else i()}},n.hide=function(){if(this._element.classList.contains("show")){var t=e.Event("hide.bs.toast");e(this._element).trigger(t),t.isDefaultPrevented()||this._close()}},n.dispose=function(){clearTimeout(this._timeout),this._timeout=null,this._element.classList.contains("show")&&this._element.classList.remove("show"),e(this._element).off("click.dismiss.bs.toast"),e.removeData(this._element,"bs.toast"),this._element=null,this._config=null},n._getConfig=function(t){return t=a(a(a({},ft),e(this._element).data()),"object"==typeof t&&t?t:{}),c.typeCheckConfig("toast",t,this.constructor.DefaultType),t},n._setListeners=function(){var t=this;e(this._element).on("click.dismiss.bs.toast",'[data-dismiss="toast"]',(function(){return t.hide()}))},n._close=function(){var t=this,n=function(){t._element.classList.add("hide"),e(t._element).trigger("hidden.bs.toast")};if(this._element.classList.remove("show"),this._config.animation){var i=c.getTransitionDurationFromElement(this._element);e(this._element).one(c.TRANSITION_END,n).emulateTransitionEnd(i)}else n()},t._jQueryInterface=function(n){return this.each((function(){var i=e(this),o=i.data("bs.toast");if(o||(o=new t(this,"object"==typeof n&&n),i.data("bs.toast",o)),"string"==typeof n){if("undefined"==typeof o[n])throw new TypeError('No method named "'+n+'"');o[n](this)}}))},o(t,null,[{key:"VERSION",get:function(){return"4.5.0"}},{key:"DefaultType",get:function(){return dt}},{key:"Default",get:function(){return ft}}]),t}();e.fn.toast=gt._jQueryInterface,e.fn.toast.Constructor=gt,e.fn.toast.noConflict=function(){return e.fn.toast=ut,gt._jQueryInterface},t.Alert=d,t.Button=g,t.Carousel=E,t.Collapse=D,t.Dropdown=j,t.Modal=R,t.Popover=it,t.Scrollspy=lt,t.Tab=ht,t.Toast=gt,t.Tooltip=$,t.Util=c,Object.defineProperty(t,"__esModule",{value:!0})})); +//# sourceMappingURL=bootstrap.min.js.map \ No newline at end of file diff --git a/javascripts/extra.js b/javascripts/extra.js new file mode 100644 index 00000000..660453d3 --- /dev/null +++ b/javascripts/extra.js @@ -0,0 +1,30 @@ +/* + Time-stamp: + Extra JS configuration of the ULHPC Technical Documentation website + */ + +// Arithmatex / MathJax +// See https://squidfunk.github.io/mkdocs-material/extensions/pymdown/#arithmatex-mathjax +window.MathJax = { + options: { + ignoreHtmlClass: 'tex2jax_ignore', + processHtmlClass: 'tex2jax_process', + renderActions: { + find: [10, function (doc) { + for (const node of 
document.querySelectorAll('script[type^="math/tex"]')) { + const display = !!node.type.match(/; *mode=display/); + const math = new doc.options.MathItem(node.textContent, doc.inputJax[0], display); + const text = document.createTextNode(''); + const sibling = node.previousElementSibling; + node.parentNode.replaceChild(text, node); + math.start = {node: text, delim: '', n: 0}; + math.end = {node: text, delim: '', n: 0}; + doc.math.push(math); + if (sibling && sibling.matches('.MathJax_Preview')) { + sibling.parentNode.removeChild(sibling); + } + } + }, ''] + } + } +}; diff --git a/javascripts/tables.js b/javascripts/tables.js new file mode 100644 index 00000000..717b3ba0 --- /dev/null +++ b/javascripts/tables.js @@ -0,0 +1,8 @@ +// See https://squidfunk.github.io/mkdocs-material/reference/data-tables/#sortable-tables + +app.document$.subscribe(function() { + var tables = document.querySelectorAll("article table") + tables.forEach(function(table) { + new Tablesort(table) + }) +}) diff --git a/jobs/best-effort/index.html b/jobs/best-effort/index.html new file mode 100644 index 00000000..71b8daf2 --- /dev/null +++ b/jobs/best-effort/index.html @@ -0,0 +1,2851 @@ + + + + + + + + + + + + + + + + + + + + + + + + Best-effort Jobs - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Best-effort Jobs

    + + + + + + + + + + + + + + + + + + + + + +
    Node TypeSlurm command
    regularsbatch [-A <project>] -p batch --qos besteffort [-C {broadwell,skylake}] [...]
    gpusbatch [-A <project>] -p gpu --qos besteffort [-C volta[32]] -G 1 [...]
    bigmemsbatch [-A <project>] -p bigmem --qos besteffort [...]
    +

    Best-effort (preemptible) jobs allow an efficient usage of the platform by filling available computing nodes until regular jobs are submitted.

    +
    sbatch -p {batch | gpu | bigmem} --qos besteffort [...]
    +
    + +
What does job preemption mean?

Job preemption is the act of "stopping" one or more "low-priority" jobs to let a "high-priority" job run. Job preemption is implemented as a variation of Slurm's Gang Scheduling logic.

    +

When a non-best-effort job is allocated resources that are already allocated to one or more best-effort jobs, the preemptable job(s) (thus on QOS besteffort) are preempted. +On ULHPC facilities, the preempted job(s) are either requeued (if possible) or cancelled. +For jobs to be requeued, they MUST have the "--requeue" sbatch option set.

    +
    +

The besteffort QOS has fewer constraints than the other QOSs (for instance, you can submit more jobs).

    +

As a general rule, users should ensure that they track the successful completion of best-effort jobs (which may be interrupted by other jobs at any time) and use them in combination with mechanisms such as Checkpoint-Restart that allow applications to stop and resume safely.
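For illustration, a minimal best-effort launcher could look as follows. This is only a sketch: the job name, application and checkpoint handling are placeholders to adapt to your own workflow.
#!/bin/bash -l
+#SBATCH -J BestEffortApp        # placeholder job name
+#SBATCH -p batch
+#SBATCH --qos besteffort
+#SBATCH --requeue               # MANDATORY for the job to be requeued upon preemption
+#SBATCH -N 1
+#SBATCH --ntasks-per-node=28
+#SBATCH -c 1
+#SBATCH -t 1-00:00:00
+
+print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
+module purge || print_error_and_exit "No 'module' command"
+
+# './my_app --resume-from checkpoint.dat' is a placeholder: your application is
+# expected to implement its own Checkpoint-Restart logic and reload the latest
+# checkpoint (if any) when the requeued job restarts.
+srun ./my_app --resume-from checkpoint.dat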

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/jobs/billing/index.html b/jobs/billing/index.html new file mode 100644 index 00000000..5d23ede2 --- /dev/null +++ b/jobs/billing/index.html @@ -0,0 +1,2926 @@ + + + + + + + + + + + + + + + + + + + + + + + + Job Accounting and Billing - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Job Accounting and Billing

    +

    Usage Charging Policy + ULHPC Resource Allocation Policy (PDF)

    +

    Billing rates

    + + + + +

    Trackable RESources (TRES) Billing Weights

    +

    The above policy is in practice implemented through the Slurm Trackable RESources +(TRES) and remains an important factor for the Fairsharing score calculation.

    + + +

    As explained in the ULHPC Usage Charging +Policy, we set TRES for CPU, GPU, and Memory +usage according to weights defined as follows:

    + + + + + + + + + + + + + + + + + + + + + +
    WeightDescription
    \alpha_{cpu}Normalized relative performance of CPU processor core (ref.: skylake 73.6 GFlops/core)
    \alpha_{mem}Inverse of the average available memory size per core
    \alpha_{GPU}Weight per GPU accelerator
    +
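As a sketch of how these weights combine (assuming the standard Slurm TRESBillingWeights aggregation and charging over the elapsed walltime -- refer to the Usage Charging Policy above for the authoritative formula), the billed amount of a job is roughly:
billing \simeq walltime_{hours} \times ( \alpha_{cpu}\times \#cores + \alpha_{mem}\times Memory_{GB} + \alpha_{GPU}\times \#GPUs )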

Each partition has its own weights +(combined into TRESBillingWeight), which you can check with:

    +
    # /!\ ADAPT <partition> accordingly
    +scontrol show partition <partition>
    +
    + + + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/jobs/gpu/index.html b/jobs/gpu/index.html new file mode 100644 index 00000000..b74d9ce2 --- /dev/null +++ b/jobs/gpu/index.html @@ -0,0 +1,2845 @@ + + + + + + + + + + + + + + + + + + + + + + + + GPU Jobs - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    ULHPC GPU Nodes

    +

Each GPU node provided as part of the gpu partition features 4x Nvidia V100 SXM2 GPUs (with either 16G or 32G memory) interconnected by the NVLink 2.0 architecture.

    +

NVLink was designed as an alternative to PCI Express, with higher bandwidth and additional features (e.g., shared memory) specifically designed to be compatible with Nvidia's own GPU ISA for multi-GPU systems -- see the wikichip article.

    +

    +

    Because of the hardware organization, you MUST follow the below recommendations:

    +
      +
1. Do not run jobs on GPU nodes if you have no use for GPU accelerators, i.e. if you are not using any of the software compiled against the {foss,intel}cuda toolchains.
    2. +
    3. Avoid using more than 4 GPUs, ideally within the same node
    4. +
5. Dedicate ¼ of the available CPU cores to the management of each GPU card reserved.
    6. +
    +

    Thus your typical GPU launcher would match the AI/DL launcher example:

    +
    #!/bin/bash -l
+### Request one GPU task for 4 hours - dedicate 1/4 of the available cores for its management
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 7
    +#SBATCH -G 1
    +#SBATCH --time=04:00:00
    +#SBATCH -p gpu
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load numlib/cuDNN   # Example with cuDNN
    +
    +[...]
    +
    + +
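Similarly, following recommendations 3 and 5 above, a full-node job reserving the 4 GPUs and dedicating ¼ of the cores to each of them could be sketched as follows (the module and application names are placeholders):
#!/bin/bash -l
+### Request a full GPU node (4 GPUs) for 4 hours - 1/4 of the cores per GPU
+#SBATCH -N 1
+#SBATCH --ntasks-per-node=4
+#SBATCH -c 7
+#SBATCH -G 4
+#SBATCH --time=04:00:00
+#SBATCH -p gpu
+
+print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
+module purge || print_error_and_exit "No 'module' command"
+module load numlib/cuDNN   # Example with cuDNN
+
+srun ./my_gpu_app          # placeholder CUDA-enabled application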

    You can quickly access a GPU node for interactive jobs using si-gpu.

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/jobs/images/nvlink.png b/jobs/images/nvlink.png new file mode 100644 index 00000000..cdf282bc Binary files /dev/null and b/jobs/images/nvlink.png differ diff --git a/jobs/images/understanding-cpu-load-1-core.png b/jobs/images/understanding-cpu-load-1-core.png new file mode 100644 index 00000000..a7e413df Binary files /dev/null and b/jobs/images/understanding-cpu-load-1-core.png differ diff --git a/jobs/interactive/index.html b/jobs/interactive/index.html new file mode 100644 index 00000000..4a1945b6 --- /dev/null +++ b/jobs/interactive/index.html @@ -0,0 +1,2914 @@ + + + + + + + + + + + + + + + + + + + + + + + + Interactive Job - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Interactive Jobs

    +

The interactive (floating) partition (exclusively associated with the debug QOS) is to be used for code development, testing, and debugging.

    +
    +

    Important

    +

    Production runs are not permitted in interactive jobs. +User accounts are subject to suspension if they are determined to be using the interactive partition and the debug QOS for production computing. In particular, interactive job "chaining" is not allowed. +Chaining is defined as using a batch script to submit another batch script.

    +
    +

    You can access the different node classes available using the -C <class> flag (see also List of Slurm features on ULHPC nodes), or (better) through the custom helper functions defined for each category of nodes, i.e. si, si-gpu or si-bigmem:

    +
    +
    +
### Quick interactive job for the default time
    +$ si
    +# salloc -p interactive --qos debug -C batch
    +
    +### Explicitly ask for a skylake node
    +$ si -C skylake
    +# salloc -p interactive --qos debug -C batch -C skylake
    +
    +### Use 1 full node for 28 tasks
    +$ si --ntasks-per-node 28
    +# salloc -p interactive --qos debug -C batch --ntasks-per-node 28
    +
    +### interactive job for 2 hours
    +$ si -t 02:00:00
    +# salloc -p interactive --qos debug -C batch -t 02:00:00
    +
+### interactive job on 2 nodes, 1 multithreaded task per node
+$ si -N 2 --ntasks-per-node 1 -c 4
    +# salloc -p interactive --qos debug -C batch -N 2 --ntasks-per-node 1 -c 4
    +
    + +
    +
    +
    +
    +
### Quick interactive job for the default time
    +$ si-gpu
    +# /!\ WARNING: append -G 1 to really reserve a GPU
    +# salloc -p interactive --qos debug -C gpu -G 1
    +
    +### (Better) Allocate 1/4 of available CPU cores per GPU to manage
    +$ si-gpu -G 1 -c 7
    +$ si-gpu -G 2 -c 14
    +$ si-gpu -G 4 -c 28
    +
    + +
    +
    +
    +
    +
### Quick interactive job for the default time
    +$ si-bigmem
    +# salloc -p interactive --qos debug -C bigmem
    +
    +### interactive job with 1 multithreaded task per socket available (4 in total)
    +$ si-bigmem --ntasks-per-node 4 --ntasks-per-socket 1 -c 28
+# salloc -p interactive --qos debug -C bigmem --ntasks-per-node 4 --ntasks-per-socket 1 -c 28
    +
    +### interactive job for 1 task but 512G of memory
    +$ si-bigmem --mem 512G
    +# salloc -p interactive --qos debug -C bigmem --mem 512G
    +
    + +
    +
    +
    +

If you prefer to rely on the regular salloc/srun commands, the table below lists the equivalent commands run by the helper scripts si*:

    + + + + + + + + + + + + + + + + + + + + + +
    Node TypeSlurm command
    regular
    si [...]
    salloc -p interactive --qos debug -C batch [...]
    salloc -p interactive --qos debug -C batch,broadwell [...]
    salloc -p interactive --qos debug -C batch,skylake [...]
    gpu
    si-gpu [...]
    salloc -p interactive --qos debug -C gpu [-C volta[32]] -G 1 [...]
    bigmem
    si-bigmem [...]
    salloc -p interactive --qos debug -C bigmem [...]
    +
    +

    Impact of Interactive jobs implementation over a floating partition

    +

We have recently changed the way interactive jobs are served. +Since the interactive partition is no longer dedicated but floating above the other partitions, there is NO guarantee that an interactive job will run if the surrounding partition (batch, gpu or bigmem) is full.

    +

However, the backfill scheduling in place, together with the partition priority settings, ensures that interactive jobs are served first as soon as resources are released.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/jobs/long/index.html b/jobs/long/index.html new file mode 100644 index 00000000..c3fdf7d4 --- /dev/null +++ b/jobs/long/index.html @@ -0,0 +1,2852 @@ + + + + + + + + + + + + + + + + + + + + + + + + Long Jobs - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Long Jobs

    +

If you are confident that your jobs will last more than 2 days while efficiently using the allocated resources, you can use the long QOS (--qos long). +

    sbatch -p {batch | gpu | bigmem} --qos long [...]
    +

    +

Following EuroHPC/PRACE Recommendations, the long QOS allows for an extended Max walltime (MaxWall) set to 14 days.

    + + + + + + + + + + + + + + + + + + + + + +
    Node TypeSlurm command
    regularsbatch [-A <project>] -p batch --qos long [-C {broadwell,skylake}] [...]
    gpusbatch [-A <project>] -p gpu --qos long [-C volta[32]] -G 1 [...]
    bigmemsbatch [-A <project>] -p bigmem --qos long [...]
    +
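For instance, a 7-day, single-node job on the batch partition could be submitted as follows (the time limit is just an example within the 14-day MaxWall, and the launcher path is a placeholder):
sbatch -p batch --qos long -t 7-00:00:00 -N 1 --ntasks-per-node 28 /path/to/launcher.sh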
    +

    Important

    +

Be aware however that special restrictions apply to this kind of job.

    +
      +
    • There is a limit to the maximum number of concurrent nodes involved in long jobs (see sqos for details).
    • +
• No more than 4 long jobs per User (MaxJobsPU) are allowed, using no more than 2 nodes per job.
    • +
    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/jobs/priority/index.html b/jobs/priority/index.html new file mode 100644 index 00000000..537c3755 --- /dev/null +++ b/jobs/priority/index.html @@ -0,0 +1,2945 @@ + + + + + + + + + + + + + + + + + + + + + + + + Job Priority and Backfilling - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    ULHPC Job Prioritization Factors

    +

The ULHPC Slurm configuration relies on the Multifactor Priority Plugin and the Fair Tree algorithm to perform Fairsharing among the users1

    +

    Priority Factors

    +

    There are several factors enabled on ULHPC supercomputers that influence job priority:

    +
      +
    • Age: length of time a job has been waiting (PD state) in the queue
    • +
    • Fairshare: difference between the portion of the computing resource +that has been promised and the amount of resources that has been +consumed - see Fairsharing.
    • +
    • Partition: factor associated with each node partition, for instance to privilege interactive over batch partitions
    • +
• QOS: a factor associated with each Quality Of Service (low \longrightarrow urgent)
    • +
    +

    The job's priority at any given time will be a weighted sum of all the factors that have been enabled. +Job priority can be expressed as:

    +
    Job_priority =
    +    PriorityWeightAge       * age_factor +
+    PriorityWeightFairshare * fair-share_factor +
+    PriorityWeightPartition * partition_factor +
+    PriorityWeightQOS       * QOS_factor -
+    nice_factor
    +
    + +
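As a purely illustrative calculation -- the weights below are hypothetical and NOT the actual ULHPC configuration (check sprio -w for the real values):
# Hypothetical weights: Age=100, Fairshare=1000, Partition=500, QOS=200
+# Job with age_factor=0.5, fair-share_factor=0.25, partition_factor=1.0, QOS_factor=0.0, nice=0
+Job_priority = 100*0.5 + 1000*0.25 + 500*1.0 + 200*0.0 - 0
+             = 50 + 250 + 500
+             = 800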

All of the factors in this formula are floating point numbers that range from 0.0 to 1.0. +The weights are unsigned 32-bit integers, which you can get with: +

    $ sprio -w
    +# OR, from slurm.conf
    +$ scontrol show config | grep -i PriorityWeight
    +
+You can use sprio to view the factors that comprise a job's scheduling priority and where your (pending) jobs stand in the priority queue.

    +
    +

    sprio Utility usage

    +

    Show current weights +

    sprio -w
    +
    +List pending jobs, sorted by jobid +
    sprio [-n]     # OR: sp
    +
    +List pending jobs, sorted by priority +
    sprio [-n] -S+Y
    +sprio [-n] | sort -k 3 -n
    +sprio [-n] -l | sort -k 4 -n
    +

    +
    +

Getting the priority given to a job can be done with squeue:

    +
    # /!\ ADAPT <jobid> accordingly
    +squeue -o %Q -j <jobid>
    +
    + +

    Backfill Scheduling

    +

Backfill is a mechanism by which lower priority jobs can start earlier to fill +the idle slots, provided they finish before the next high-priority job is +expected to start based on resource availability.

    +
    +

    If your job is sufficiently small, it can be backfilled and scheduled in the shadow of a larger, higher-priority job

    +
    +

    For more details, see official Slurm documentation

    +
    +
    +
      +
    1. +

      All users from a higher priority account receive a higher fair share factor than all users from a lower priority account 

      +
    2. +
    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/jobs/reason-codes/index.html b/jobs/reason-codes/index.html new file mode 100644 index 00000000..ed8a9bfb --- /dev/null +++ b/jobs/reason-codes/index.html @@ -0,0 +1,3251 @@ + + + + + + + + + + + + + + + + + + + + + + + + Job State and Reason Code - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Job Status and Reason Codes

    +

    The squeue command details a variety of information on an active +job’s status with state and reason codes. Job state +codes describe a job’s current state in queue (e.g. pending, +completed). Job reason codes describe the reason why the job is +in its current state.

    +

    The following tables outline a variety of job state and reason codes you +may encounter when using squeue to check on your jobs.

    +

    Job State Codes

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StatusCodeExplanation
    CANCELLEDCAThe job was explicitly cancelled by the user or system administrator.
    COMPLETEDCDThe job has completed successfully.
    COMPLETINGCGThe job is finishing but some processes are still active.
    DEADLINEDLThe job terminated on deadline
    FAILEDFThe job terminated with a non-zero exit code and failed to execute.
    NODE_FAILNFThe job terminated due to failure of one or more allocated nodes
    OUT_OF_MEMORYOOMThe Job experienced an out of memory error.
    PENDINGPDThe job is waiting for resource allocation. It will eventually run.
    PREEMPTEDPRThe job was terminated because of preemption by another job.
    RUNNINGRThe job currently is allocated to a node and is running.
    SUSPENDEDSA running job has been stopped with its cores released to other jobs.
    STOPPEDSTA running job has been stopped with its cores retained.
    TIMEOUTTOJob terminated upon reaching its time limit.
    +

A full list of these Job State codes can be found in the squeue +documentation or the sacct documentation.

    +

    Job Reason Codes

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Reason CodeExplanation
    PriorityOne or more higher priority jobs is in queue for running. Your job will eventually run.
    DependencyThis job is waiting for a dependent job to complete and will run afterwards.
    ResourcesThe job is waiting for resources to become available and will eventually run.
    InvalidAccountThe job’s account is invalid. Cancel the job and rerun with correct account.
InvalidQOSThe job’s QoS is invalid. Cancel the job and rerun with the correct QoS.
    QOSGrpCpuLimitAll CPUs assigned to your job’s specified QoS are in use; job will run eventually.
    QOSGrpMaxJobsLimitMaximum number of jobs for your job’s QoS have been met; job will run eventually.
    QOSGrpNodeLimitAll nodes assigned to your job’s specified QoS are in use; job will run eventually.
    PartitionCpuLimitAll CPUs assigned to your job’s specified partition are in use; job will run eventually.
    PartitionMaxJobsLimitMaximum number of jobs for your job’s partition have been met; job will run eventually.
    PartitionNodeLimitAll nodes assigned to your job’s specified partition are in use; job will run eventually.
    AssociationCpuLimitAll CPUs assigned to your job’s specified association are in use; job will run eventually.
    AssociationMaxJobsLimitMaximum number of jobs for your job’s association have been met; job will run eventually.
    AssociationNodeLimitAll nodes assigned to your job’s specified association are in use; job will run eventually.
    +

    A full list of these Job Reason Codes can be found in Slurm’s +documentation.

    +

    Running Job Statistics Metrics

    +

The sstat command allows users to +easily pull up status information about their currently running jobs. +This includes information about CPU usage, +task information, node information, resident set size +(RSS), and virtual memory (VM). We can invoke the sstat +command as follows:

    +
    # /!\ ADAPT <jobid> accordingly
    +$ sstat --jobs=<jobid>
    +
    + +

By default, sstat will pull up significantly more information than +what would be needed in the command's default output. To remedy this, +we can use the --format flag to choose what we want in our +output. A chart of some of these variables is listed in the table below:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    VariableDescription
    avecpuAverage CPU time of all tasks in job.
    averssAverage resident set size of all tasks.
    avevmsizeAverage virtual memory of all tasks in a job.
    jobidThe id of the Job.
maxrssMaximum resident set size of all tasks in the job.
maxvsizeMaximum virtual memory size of all tasks in the job.
    ntasksNumber of tasks in a job.
    +

As an example, let's print out a job's ID, average CPU time, max +RSS, and number of tasks. We can do this with the following command:

    +
    # /!\ ADAPT <jobid> accordingly
    +sstat --jobs=<jobid> --format=jobid,cputime,maxrss,ntasks
    +
    + +

    A full list of variables that specify data handled by sstat can be +found with the --helpformat flag or by visiting the slurm page on +sstat.

    +

    Past Job Statistics Metrics

    +

    You can use the custom susage function in /etc/profile.d/slurm.sh to collect statistics information.

    +
    $ susage -h
+Usage: susage [-m] [-Y] [-S YYYY-MM-DD] [-E YYYY-MM-DD]
    +  For a specific user (if accounting rights granted):    susage [...] -u <user>
    +  For a specific account (if accounting rights granted): susage [...] -A <account>
    +Display past job usage summary
    +
    + +

By default, however, you should use the +sacct command, which allows users to pull up +status information about past jobs. +This command is very similar to sstat, but is used on jobs +that have previously run on the system instead of currently +running jobs.

    +
    # /!\ ADAPT <jobid> accordingly
    +$ sacct [-X] --jobs=<jobid> [--format=metric1,...]
    +# OR, for a user, eventually between a Start and End date
    +$ sacct [-X] -u $USER  [-S YYYY-MM-DD] [-E YYYY-MM-DD] [--format=metric1,...]
    +# OR, for an account - ADAPT <account> accordingly
    +$ sacct [-X] -A <account> [--format=metric1,...]
    +
    + +

    Use -X to aggregate the statistics relevant to the job allocation itself, not +taking job steps into consideration.

    +

The main metric codes you may be interested in reviewing are listed below.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    VariableDescription
    accountAccount the job ran under.
    avecpuAverage CPU time of all tasks in job.
    averssAverage resident set size of all tasks in the job.
    cputimeFormatted (Elapsed time * CPU) count used by a job or step.
elapsedJob's elapsed time formatted as DD-HH:MM:SS.
    exitcodeThe exit code returned by the job script or salloc.
    jobidThe id of the Job.
    jobnameThe name of the Job.
    maxdiskreadMaximum number of bytes read by all tasks in the job.
    maxdiskwriteMaximum number of bytes written by all tasks in the job.
    maxrssMaximum resident set size of all tasks in the job.
    ncpusAmount of allocated CPUs.
    nnodesThe number of nodes used in a job.
    ntasksNumber of tasks in a job.
    prioritySlurm priority.
    qosQuality of service.
    reqcpuRequired number of CPUs
    reqmemRequired amount of memory for a job.
    reqtresRequired Trackable RESources (TRES)
userUsername
    +
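For instance, a format combination that is often convenient to review a past job at a glance (adapt the metrics to your needs):
# /!\ ADAPT <jobid> accordingly
+sacct -X -j <jobid> --format=JobID,JobName,Partition,QOS,State,ExitCode,Elapsed,NNodes,NCPUS,CPUTime,ReqMem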

    A full list of variables that specify data handled by sacct can be +found with the --helpformat flag or by visiting the slurm page on +sacct.

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/jobs/submit/index.html b/jobs/submit/index.html new file mode 100644 index 00000000..3c01eec1 --- /dev/null +++ b/jobs/submit/index.html @@ -0,0 +1,3705 @@ + + + + + + + + + + + + + + + + + + + + + + + + Passive/Batch Job - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Regular Jobs

    + + + + + + + + + + + + + + + + + + + + + +
    Node TypeSlurm command
    regularsbatch [-A <project>] -p batch [--qos {high,urgent}] [-C {broadwell,skylake}] [...]
    gpusbatch [-A <project>] -p gpu [--qos {high,urgent}] [-C volta[32]] -G 1 [...]
    bigmemsbatch [-A <project>] -p bigmem [--qos {high,urgent}] [...]
    +

    Main Slurm commands + Resource Allocation guide

    +

    sbatch [...] /path/to/launcher

    + + +

    sbatch is used to submit a batch launcher script for later execution, corresponding to batch/passive submission mode. +The script will typically contain one or more srun commands to launch parallel tasks. +Upon submission with sbatch, Slurm will:

    +
      +
    • allocate resources (nodes, tasks, partition, constraints, etc.)
    • +
• run a single copy of the batch script on the first allocated node
        +
• in particular, if you depend on other scripts, ensure you refer to them with their complete path.
      • +
      +
    • +
    +

    When you submit the job, Slurm responds with the job's ID, which will be used to identify this job in reports from Slurm.

    +
    # /!\ ADAPT path to launcher accordingly
    +$ sbatch <path/to/launcher>.sh
    +Submitted batch job 864933
    +
    + + + +

    Job Submission Option

    + + +

There are several useful environment variables set by Slurm within an allocated job. +The below table summarizes the main job submission options offered with {sbatch | srun | salloc} [...]:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Command-line optionDescriptionExample
    -N <N><N> Nodes request-N 2
    --ntasks-per-node=<n><n> Tasks-per-node request--ntasks-per-node=28
    --ntasks-per-socket=<s><s> Tasks-per-socket request--ntasks-per-socket=14
    -c <c><c> Cores-per-task request (multithreading)-c 1
    --mem=<m>GB<m>GB memory per node request--mem 0
-t [DD-]HH[:MM:SS]Walltime request-t 4:00:00
    -G <gpu><gpu> GPU(s) request-G 4
    -C <feature>Feature request (broadwell,skylake...)-C skylake
    -p <partition>Specify job partition/queue
    --qos <qos>Specify job qos
    -A <account>Specify account
    -J <name>Job name-J MyApp
    -d <specification>Job dependency-d singleton
    --mail-user=<email>Specify email address
    --mail-type=<type>Notify user by email when certain event types occur.--mail-type=END,FAIL
    +

At a minimum, a job submission script must include the number of nodes, time, type of partition and nodes (resource allocation constraints and features), and quality of service (QOS). +If a script does not specify any of these options, then a default may be applied. +The full list of directives is documented in the man pages for the sbatch command (see man sbatch).
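As a minimal sketch combining the main directives above (the job name, mail address and application are placeholders; the default QOS applies when none is specified):
#!/bin/bash -l
+#SBATCH -J MyApp
+#SBATCH -N 1
+#SBATCH --ntasks-per-node=28
+#SBATCH -c 1
+#SBATCH -t 1-00:00:00
+#SBATCH -p batch
+#SBATCH --mail-type=END,FAIL
+#SBATCH --mail-user=first.last@uni.lu   # placeholder address
+
+print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
+module purge || print_error_and_exit "No 'module' command"
+
+srun ./my_app    # placeholder application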

    + + + + +

    Within a job, you aim at running a certain number of tasks, and Slurm allow for a fine-grain control of the resource allocation that must be satisfied for each task.

    +
    +

    Beware of Slurm terminology in Multicore Architecture!

    +

    +
      +
    • Slurm Node = Physical node, specified with -N <#nodes>
        +
• Advice: always make explicit the expected number of tasks per node using --ntasks-per-node <n>. This way you control the node footprint of your job.
      • +
      +
    • +
    • Slurm Socket = Physical Socket/CPU/Processor
        +
• Advice: if possible, also make explicit the expected number of tasks per socket (processor) using --ntasks-per-socket <s>.
          +
        • relations between <s> and <n> must be aligned with the physical NUMA characteristics of the node.
        • +
        • For instance on aion nodes, <n> = 8*<s>
        • +
• For instance on iris regular nodes, <n>=2*<s>, while on iris bigmem nodes, <n>=4*<s>.
        • +
        +
      • +
      +
    • +
    • (the most confusing): Slurm CPU = Physical CORE
        +
      • use -c <#threads> to specify the number of cores reserved per task.
      • +
      • Hyper-Threading (HT) Technology is disabled on all ULHPC compute nodes. In particular:
          +
        • assume #cores = #threads, thus when using -c <threads>, you can safely set +
          OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1} # Default to 1 if SLURM_CPUS_PER_TASK not set
          +
          +to automatically abstract from the job context
        • +
• it is in your interest to match the physical NUMA characteristics of the compute node you're running on (Ex: target 16 threads per socket on Aion nodes as there are 8 virtual sockets per node, and 14 threads per socket on Iris regular nodes).
        • +
        +
      • +
      +
    • +
    +
    +

    The total number of tasks defined in a given job is stored in the $SLURM_NTASKS environment variable.

    +
    +

    The --cpus-per-task option of srun in Slurm 23.11 and later

    +

    In the latest versions of Slurm srun inherits the --cpus-per-task value requested by salloc or sbatch by reading the value of SLURM_CPUS_PER_TASK, as for any other option. This behavior may differ from some older versions where special handling was required to propagate the --cpus-per-task option to srun.

    +
    +

    In case you would like to launch multiple programs in a single allocation/batch script, divide the resources accordingly by requesting resources with srun when launching the process, for instance: +

    srun --cpus-per-task <some of the SLURM_CPUS_PER_TASK> --ntasks <some of the SLURM_NTASKS> [...] <program>
    +

    +

We encourage you to always explicitly specify upon resource allocation the number of tasks you want per node/socket (--ntasks-per-node <n> --ntasks-per-socket <s>), to easily scale on multiple nodes with -N <N>. Adapt the number of threads and the settings to match the physical NUMA characteristics of the nodes (a complete launcher sketch is given after the examples below).

    +
    +

    16 cores per socket and 8 (virtual) sockets (CPUs) per aion node.

    +
      +
    • {sbatch|srun|salloc|si} [-N <N>] --ntasks-per-node <8n> --ntasks-per-socket <n> -c <thread>
        +
      • Total: <N>\times 8\times<n> tasks, each on <thread> threads
      • +
      • Ensure <n>\times<thread>= 16
      • +
      • Ex: -N 2 --ntasks-per-node 32 --ntasks-per-socket 4 -c 4 (Total: 64 tasks)
      • +
      +
    • +
    +
    +
    +

14 cores per socket and 2 sockets (physical CPUs) per regular iris node.

    +
      +
    • {sbatch|srun|salloc|si} [-N <N>] --ntasks-per-node <2n> --ntasks-per-socket <n> -c <thread>
        +
      • Total: <N>\times 2\times<n> tasks, each on <thread> threads
      • +
      • Ensure <n>\times<thread>= 14
      • +
      • Ex: -N 2 --ntasks-per-node 4 --ntasks-per-socket 2 -c 7 (Total: 8 tasks)
      • +
      +
    • +
    +
    +
    +

28 cores per socket and 4 sockets (physical CPUs) per bigmem iris node.

    +
      +
    • {sbatch|srun|salloc|si} [-N <N>] --ntasks-per-node <4n> --ntasks-per-socket <n> -c <thread>
        +
      • Total: <N>\times 4\times<n> tasks, each on <thread> threads
      • +
      • Ensure <n>\times<thread>= 28
      • +
      • Ex: -N 2 --ntasks-per-node 8 --ntasks-per-socket 2 -c 14 (Total: 16 tasks)
      • +
      +
    • +
    +
    +
    + + +
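As a concrete sketch matching the Aion example above (2 nodes, 32 tasks per node, 4 threads per task; the toolchain module and binary are placeholders):
#!/bin/bash -l
+#SBATCH -N 2
+#SBATCH --ntasks-per-node=32
+#SBATCH --ntasks-per-socket=4
+#SBATCH -c 4
+#SBATCH -t 02:00:00
+#SBATCH -p batch
+
+print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
+module purge || print_error_and_exit "No 'module' command"
+module load toolchain/foss        # placeholder toolchain
+
+export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
+# 64 MPI tasks in total (2 x 32), each running 4 OpenMP threads
+srun ./my_hybrid_app              # placeholder hybrid MPI/OpenMP binary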

    Careful Monitoring of your Jobs

    +
    +

    Bug

    +

DON'T LEAVE your jobs running WITHOUT monitoring them, and ensure they are not abusing the computational resources allocated to you!!!

    +
    +

    ULHPC Tutorial / Getting Started

    +

    You will find below several ways to monitor the effective usage of the resources allocated (for running jobs) as well as the general efficiency (Average Walltime Accuracy, CPU/Memory efficiency etc.) for past jobs.

    +

    Joining/monitoring running jobs

    +

    sjoin

    +

At any moment in time, you can join a running job using the +custom helper function +sjoin in another terminal (or another screen/tmux tab/window). The format is as follows:

    +
    sjoin <jobid> [-w <node>]    # Use <tab> to automatically complete <jobid> among your jobs
    +
    + +
    +

    Using sjoin to htop your processes

    +
    # check your running job
    +(access)$> sq
    +# squeue -u $(whoami)
    +   JOBID PARTIT       QOS                 NAME       USER NODE  CPUS ST         TIME    TIME_LEFT PRIORITY NODELIST(REASON)
    + 2171206  [...]
    +# Connect to your running job, identified by its Job ID
+(access)$> sjoin 2171206     # /!\ ADAPT <jobid> accordingly, use <TAB> to have it automatically completed
    +# Equivalent of: srun --jobid 2171206 --gres=gpu:0 --pty bash -i
    +(node)$> htop # view of all processes
    +#               F5: tree view
    +#               u <name>: filter by process of <name>
    +#               q: quit
    +
    + +
    +
    +

    On the [impossibility] to monitor passive GPU jobs over sjoin

    +

If you use sjoin to join a GPU job, you WON'T be able to see the allocated GPU activity with nvidia-smi and the other monitoring tools provided by NVidia. +The reason is that there is currently no way to perform an over-allocation of a Slurm Generic Resource (GRES) such as our GPU cards, which means you can't create (e.g. with sjoin or srun --jobid [...]) job steps with access to GPUs that are bound to another step. +To keep sjoin working with GRES jobs, you MUST add "--gres=none".

    +

    You can use a direct connection with ssh <node> or clush -w @job:<jobid> for that (see below) but be aware that confined context is NOT maintained that way and that you will see the GPU processes on all 4 GPU cards.

    +
    +

    ClusterShell

    +
    +

    Danger

    +

Only for VERY advanced users!!! +You should know what you are doing when using ClusterShell, as you can mistakenly generate a huge number of remote commands across the cluster which, while they will likely fail, still induce an unexpected load that may disturb the system.

    +
    +

    ClusterShell is a useful Python package for executing arbitrary commands across multiple hosts. +On the ULHPC clusters, it provides a relatively simple way for you to run commands on nodes your jobs are running on, and collect the results.

    +
    +

    Info

    +

    You can only ssh to, and therefore run clush on, nodes where you have active/running jobs.

    +
    +

    nodeset

    +

    The nodeset command enables the easy manipulation of node sets, as well as node groups, at the command line level. +It uses sinfo underneath but has slightly different syntax. You can use it to ask about node states and nodes your job is running on.

    +

The nice difference is that you can ask for folded (e.g. iris-[075,078,091-092]) or expanded (e.g. iris-075 iris-078 iris-091 iris-092) forms of the node lists.

    + + + + + + + + + + + + + + + + + + + + + + + + + +
    Commanddescription
    nodeset -L[LL]List all groups available
    nodeset -c [...]show number of nodes in nodeset(s)
    nodeset -e [...]expand nodeset(s) to separate nodes
    nodeset -f [...]fold nodeset(s) (or separate nodes) into one nodeset
    +
    Nodeset expansion and folding
    +
    # Get list of nodes with issues
    +$ sinfo -R --noheader -o "%N"
    +iris-[005-008,017,161-162]
    +# ... and expand that list
    +$ sinfo -R --noheader -o "%N" | nodeset -e
    +iris-005 iris-006 iris-007 iris-008 iris-017 iris-161 iris-162
    +
    +# Actually equivalent of (see below)
    +$ nodeset -e @state:drained
    +
    + +
    +
    +
    # List nodes in IDLE state
    +$> sinfo -t IDLE --noheader
    +interactive    up    4:00:00      4   idle iris-[003-005,007]
    +long           up 30-00:00:0      2   idle iris-[015-016]
    +batch*         up 5-00:00:00      1   idle iris-134
    +gpu            up 5-00:00:00      9   idle iris-[170,173,175-178,181]
    +bigmem         up 5-00:00:00      0    n/a
    +
    +# make out a synthetic list
    +$> sinfo -t IDLE --noheader | awk '{ print $6 }' | nodeset -f
    +iris-[003-005,007,015-016,134,170,173,175-178,181]
    +
    +# ... actually done when restricting the column to nodelist only
    +$> sinfo -t IDLE --noheader -o "%N"
    +iris-[003-005,007,015-016,134,170,173,175-178,181]
    +
    +# Actually equivalent of (see below)
    +$ nodeset -f @state:idle
    +
    + +
    +
    +
    +
    Exclusion / intersection of nodeset + + + + + + + + + + + + + + + + + + + + +
    OptionDescription
    -x <nodeset>exclude from working set <nodeset>
    -i <nodeset>intersection from working set with <nodeset>
    -X <nodeset> (--xor)elements that are in exactly one of the working set and <nodeset>
    +
    # Exclusion
    +$> nodeset -f iris-[001-010] -x iris-[003-005,007,015-016]
    +iris-[001-002,006,008-010]
    +# Intersection
    +$> nodeset -f iris-[001-010] -i iris-[003-005,007,015-016]
    +iris-[003-005,007]
    +# "XOR" (one occurrence only)
    +$> nodeset -f iris-[001-010] -x iris-006 -X iris-[005-007]
    +iris-[001-004,006,008-010]
    +
    + +
    +

    The groups useful to you that we have configured are @user, @job and @state.

    +
    +
    $ nodeset -LLL
    +# convenient partition groups
    +@batch  iris-[001-168] 168
    +@bigmem iris-[187-190] 4
    +@gpu    iris-[169-186,191-196] 24
    +@interactive iris-[001-196] 196
+# convenient state groups
    +@state:allocated [...]
    +@state:idle      [...]
    +@state:mixed     [...]
    +@state:reserved  [...]
    +# your individual jobs
    +@job:2252046 iris-076 1
    +@job:2252050 iris-[191-196] 6
    +# all the jobs under your username
    +@user:svarrette iris-[076,191-196] 7
    +
    + +
    +
    +

    List expanded node names where you have jobs running +

    # Similar to: squeue -h -u $USER -o "%N"|nodeset -e
    +$ nodeset -e @user:$USER
    +

    +
    +
    +

    List folded nodes where your job 1234567 is running (use sq to quickly list your jobs): +

# Similar to: squeue -h -j 1234567 -o "%N"
    +nodeset -f @job:1234567
    +

    +
    +
    +

    List expanded node names that are idle according to slurm +

    # Similar to: sinfo -t IDLE -o "%N"
    +nodeset -e @state:idle
    +

    +
    +
    +

    clush

    +

clush can run commands on multiple nodes at once, for instance to monitor your jobs. It uses the node grouping syntax from nodeset (https://clustershell.readthedocs.io/en/latest/tools/nodeset.html) to allow you to run commands on those nodes.

    +

clush uses ssh to connect to each of these nodes. +You can use the -b option to gather the output of nodes with identical output into the same lines. Leaving this out will report on each node separately.

    + + + + + + + + + + + + + + + + + + + + + + + + + +
    OptionDescription
    -bgathering output (as when piping to dshbak -c)
    -w <nodelist>specify remote hosts, incl. node groups with @group special syntax
    -g <group>similar to -w @<group>, restrict commands to the hosts group <group>
    --diffshow differences between common outputs
    +
    +

    Show %cpu, memory usage, and command for all nodes running any of your jobs. +

    clush -bw @user:$USER ps -u$USER -o%cpu,rss,cmd
    +
    +As above, but only for the nodes reserved with your job <jobid> +
    clush -bw @job:<jobid> ps -u$USER -o%cpu,rss,cmd
    +

    +
    +
    +

    Show what's running on all the GPUs on the nodes associated with your job 654321. +

    clush -bw @job:654321 bash -l -c 'nvidia-smi --format=csv --query-compute-apps=process_name,used_gpu_memory'
    +
    +As above but for all your jobs (assuming you have only GPU nodes with all GPUs) +
    clush -bw @user:$USER bash -l -c 'nvidia-smi --format=csv --query-compute-apps=process_name,used_gpu_memory'
    +

    +

This may be convenient for passive jobs since the sjoin utility does NOT permit running nvidia-smi (see the explanation above). +However, that way you will unfortunately see ALL processes running on the 4 GPU cards -- including those from other users sharing your nodes. It's a known bug, not a feature.

    +
    +
    +

    pestat: CPU/Mem usage report

    +

We have deployed the (excellent) Slurm tool pestat (Processor Element status) by Ole Holm Nielsen, which you can use to quickly check the CPU/Memory usage of your jobs. +Information deserving investigation (too low/high CPU or Memory usage compared to the allocation) will be flagged in Red or Magenta.

    +
    pestat [-p <partition>] [-G] [-f]
    +
    + +
    pestat output (official sample output)

    +
    +

    General Guidelines

    +

As mentioned before, always check your node activity with at least htop on all allocated nodes to ensure you use them as expected. Several cases might apply to your job workflow:

    +
    +

You are dealing with an embarrassingly parallel job campaign and this approach is bad, as it overloads the scheduler unnecessarily. +You will also quickly cross the limits set on the maximum number of jobs. +You must aggregate multiple tasks within a single job to fully exploit a complete node. +In particular, you MUST consider using GNU Parallel and our generic GNU Parallel launcher launcher.parallel.sh (see the sketch below).

    +

    ULHPC Tutorial / HPC Management of Embarrassingly Parallel Jobs

    +
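A minimal single-node sketch of such an aggregation is given below; run_case.sh and the input range are placeholders, and the generic launcher.parallel.sh mentioned above offers a more complete template:
#!/bin/bash -l
+#SBATCH -N 1
+#SBATCH --ntasks-per-node=1
+#SBATCH -c 28
+#SBATCH -t 04:00:00
+#SBATCH -p batch
+
+print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
+module purge || print_error_and_exit "No 'module' command"
+module load tools/parallel   # assumed module name for GNU Parallel -- adapt to your environment
+
+# Run 1000 single-core cases, at most ${SLURM_CPUS_PER_TASK} (here 28) at a time
+parallel -j ${SLURM_CPUS_PER_TASK:-1} ./run_case.sh {} ::: $(seq 1 1000)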
    +
    +

If you asked for more than one core in your job (> 1 task, or -c <threads> where <threads> > 1), there are 3 typical situations you MUST analyse (and pestat or htop are of great help for that):

    +
      +
1. You cannot see the expected activity (only 1 core seems to be active at 100%): then you should review your workflow, as you are under-exploiting (and thus probably wasting) the allocated resources.
    2. +
    3. +

You have the expected activity on the requested cores (Ex: 28 cores were requested, and htop reports a significant usage of all cores) BUT the CPU load of the system exceeds the core capacity of the computing node. That means you are forking too many processes and overloading/harming the system.

      +
        +
      • For instance on regular iris (resp. aion) node, a CPU load above 28 (resp. 128) is suspect. +
      • +
• An analogy between the load of a single core and the number of cars that fit in a single-lane bridge or tunnel is illustrated below (source). Like the bridge/tunnel operator, you'd like your cars/processes to never be waiting, otherwise you are harming the system. Extend this analogy from a single core to the number of cores available on a computing node to better represent the situation.
      • +
      +

      +
    4. +
    5. +

You have the expected activity on the requested cores and the load matches your allocation without harming the system: you're good to go!

      +
    6. +
    +
    +
    +

If you asked for more than ONE node, ensure that you have considered the following questions.

    +
      +
1. You are running an MPI job: you generally know what you're doing, YET ensure you follow the single-node monitoring checks (htop etc., but across all nodes) to review your core activity on ALL nodes (see 3. below). +Consider also parallel profilers like Arm Forge.
    2. +
3. You are running an embarrassingly parallel job campaign. You should first ensure you correctly exploit a single node using GNU Parallel before attempting to cross multiple nodes.
    4. +
5. You run a distributed framework able to exploit multiple nodes (typically with a master/slave model, as for a Spark cluster). You MUST assert that your [slave] processes really run on the other nodes using:
    6. +
    +
# check your running job
    +$ sq
    +# Join **another** node than the first one listed
    +$ sjoin <jobid> -w <node>
    +$ htop  # view of all processes
    +#               F5: tree view
    +#               u <name>: filter by process of <name>
    +#               q: quit
    +
    + +
    +
    +

    Monitoring past jobs efficiency

    +
    +

    Walltime estimation and Job efficiency

    +

By default, none of the regular jobs you submit can exceed a walltime of 2 days (2-00:00:00). +You have a strong interest in estimating the walltime of your jobs accurately. +While it is not always possible, and quite hard to guess at the beginning of a given job campaign where you'll probably ask for the maximum walltime possible, you should look back at your historical usage to review the past efficiency and elapsed time of your previously completed jobs using the seff or susage utilities. +Update the time constraint [#SBATCH] -t [...] of your jobs accordingly. +There are two immediate benefits for you:

    +
      +
1. Short jobs are scheduled faster, and may even be eligible for backfilling
    2. +
3. You will more likely be eligible for a raw share upgrade of your user account -- see Fairsharing
    4. +
    +
    +

    The below utilities will help you track the CPU/Memory efficiency (seff) or the Average Walltime Accuracy (susage, sacct) of your past jobs

    +

    seff

    + + +

Use seff to double-check a past job's CPU/Memory efficiency. The examples below should be self-explanatory:

    +
    +
    $ seff 2171749
    +Job ID: 2171749
    +Cluster: iris
    +User/Group: <login>/clusterusers
    +State: COMPLETED (exit code 0)
    +Nodes: 1
    +Cores per node: 28
    +CPU Utilized: 41-01:38:14
    +CPU Efficiency: 99.64% of 41-05:09:44 core-walltime
    +Job Wall-clock time: 1-11:19:38
    +Memory Utilized: 2.73 GB
    +Memory Efficiency: 2.43% of 112.00 GB
    +
    + +
    +
    +
    $ seff 2117620
    +Job ID: 2117620
    +Cluster: iris
    +User/Group: <login>/clusterusers
    +State: COMPLETED (exit code 0)
    +Nodes: 1
    +Cores per node: 16
    +CPU Utilized: 14:24:49
    +CPU Efficiency: 23.72% of 2-12:46:24 core-walltime
    +Job Wall-clock time: 03:47:54
    +Memory Utilized: 193.04 GB
    +Memory Efficiency: 80.43% of 240.00 GB
    +
    + +
    +
    +
    $ seff 2138087
    +Job ID: 2138087
    +Cluster: iris
    +User/Group: <login>/clusterusers
    +State: COMPLETED (exit code 0)
    +Nodes: 1
    +Cores per node: 64
    +CPU Utilized: 87-16:58:22
    +CPU Efficiency: 86.58% of 101-07:16:16 core-walltime
    +Job Wall-clock time: 1-13:59:19
    +Memory Utilized: 1.64 TB
    +Memory Efficiency: 99.29% of 1.65 TB
    +
    + +
    +
    +

This illustrates a very bad job in terms of CPU/memory efficiency (below 4%): the user basically wasted 4 hours of computation while mobilizing a full node and its 28 cores. +

    $ seff 2199497
    +Job ID: 2199497
    +Cluster: iris
    +User/Group: <login>/clusterusers
    +State: COMPLETED (exit code 0)
    +Nodes: 1
    +Cores per node: 28
    +CPU Utilized: 00:08:33
    +CPU Efficiency: 3.55% of 04:00:48 core-walltime
    +Job Wall-clock time: 00:08:36
    +Memory Utilized: 55.84 MB
    +Memory Efficiency: 0.05% of 112.00 GB
    +
+ This is typical of a single-core task that could be drastically improved via GNU Parallel.

    +
    +
    +

Note however that demonstrating a good CPU efficiency with seff may not be enough! +You may still induce an abnormal load on the reserved nodes if you spawn more processes than allowed by the Slurm reservation. +To avoid that, always try to prefix your executions with srun within your launchers. See also Specific Resource Allocations.

    + + +

    susage

    + + +

Use susage to check your past jobs' walltime accuracy (Timelimit vs. Elapsed).

    +
    $ susage -h
+Usage: susage [-m] [-Y] [-S YYYY-MM-DD] [-E YYYY-MM-DD]
    +  For a specific user (if accounting rights granted):    susage [...] -u <user>
    +  For a specific account (if accounting rights granted): susage [...] -A <account>
    +Display past job usage summary
    +
    + + + +

In all cases, if you are confident that your jobs will last more than 2 days while efficiently using the allocated resources, you can use the --qos long QOS. Be aware that special restrictions apply to this kind of job.
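As an illustration (hypothetical fragment), a job expected to run for up to 4 days could carry the following directives, subject to the special restrictions of the long QOS:

#SBATCH -t 4-00:00:00    # beyond the default 2-day walltime limit
#SBATCH --qos long       # request the long QOS (special restrictions apply)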

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/layout/index.html b/layout/index.html new file mode 100644 index 00000000..f1fa6b2b --- /dev/null +++ b/layout/index.html @@ -0,0 +1,2766 @@ + + + + + + + + + + + + + + + + + + + + + + + + Layout - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + +
    +
    + + + + + + + + + + +

    Layout

    + +

This repository is organized as follows (use tree -L 2 to see the complete layout):

    +
    .
    +├── Makefile        # GNU Make configuration
    +├── README.md       # Project README
    +├── VERSION         # /!\ DO NOT EDIT. Current repository version
    +├── docs/           # [MkDocs](mkdocs.org) main directory
    +├── mkdocs.yml      # [MkDocs](mkdocs.org) configuration
    +├── .envrc           # Local direnv configuration -- see https://direnv.net/
    +│                    # Assumes you have installed in ~/.config/direnv/direnvrc
    +│                    # the version proposed on
    +│                    #    https://raw.githubusercontent.com/Falkor/dotfiles/master/direnv/direnvrc
    +├── .python-{version,virtualenv}  # Pyenv/Virtualenv configuration
    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/policies/2021_ULHPC_Usage_Charging.xlsx b/policies/2021_ULHPC_Usage_Charging.xlsx new file mode 100644 index 00000000..9869be0e Binary files /dev/null and b/policies/2021_ULHPC_Usage_Charging.xlsx differ diff --git a/policies/2022_ULHPC_Usage_Charging.xlsx b/policies/2022_ULHPC_Usage_Charging.xlsx new file mode 100644 index 00000000..12e23716 Binary files /dev/null and b/policies/2022_ULHPC_Usage_Charging.xlsx differ diff --git a/policies/aup/index.html b/policies/aup/index.html new file mode 100644 index 00000000..d9d80433 --- /dev/null +++ b/policies/aup/index.html @@ -0,0 +1,2927 @@ + + + + + + + + + + + + + + + + + + + + + + + + Acceptable Use Policy (AUP) - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Acceptable Use Policy (AUP) 2.1

    +

Since 2007, the University of Luxembourg has operated a large academic HPC facility which remains the reference implementation within the country, offering a cutting-edge research infrastructure to Luxembourg public research while serving as edge access to the upcoming Euro-HPC Luxembourg supercomputer. +The University extends access to its HPC resources (including facilities, services and HPC experts) to its students, staff, research partners (including scientific staff of national public organizations and external partners for the duration of joint research projects) and to industrial partners.

    +

    UL HPC AUP

    + + +

    There are a number of policies which apply to ULHPC users.

    +

    UL HPC Acceptable Use Policy (AUP) [pdf]

    +
    +

    Important

    +

    All users of UL HPC resources and PIs must abide by the UL HPC Acceptable Use Policy (AUP). +You should read and keep a signed copy of this document before using the facility.

    +

Access and/or usage of any ULHPC system assumes the tacit acknowledgement of this policy.

    +
    + + +

    The purpose of this document is to define the rules and terms governing acceptable use of resources (core hours, license hours, data storage capacity as well as network connectivity and technical support), including access, utilization and security of the resources and data.

    +

    Crediting ULHPC in your research

    +

One of the requirements stemming from the AUP is to credit and acknowledge the usage of the University of Luxembourg HPC facility in ALL publications and contributions having results and/or contents obtained or derived from that usage.

    +

    Publication tagging

    +

    You are also requested to tag the publication(s) you have produced thanks to the usage of the UL HPC platform upon their registration on Orbilu:

    +
      +
    • Login on MyOrbiLu
    • +
    • Select your publication entry and click on the "Edit" button
    • +
    • Select the "2. Enrich" category at the top of the page
    • +
    • In the "Research center" field, enter "ulhpc" and select the proposition
    • +
    +

    +
    +

    This tag is a very important indicator for us to quantify the concrete impact of the HPC facility on the research performed at the University.

    +
    +

    List of publications generated thanks to the UL HPC Platform

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/policies/images/downtime.jpg b/policies/images/downtime.jpg new file mode 100644 index 00000000..cdf801a0 Binary files /dev/null and b/policies/images/downtime.jpg differ diff --git a/policies/images/orbilu_ulhpc_research_center.png b/policies/images/orbilu_ulhpc_research_center.png new file mode 100644 index 00000000..0483b1a8 Binary files /dev/null and b/policies/images/orbilu_ulhpc_research_center.png differ diff --git a/policies/maintenance/index.html b/policies/maintenance/index.html new file mode 100644 index 00000000..6ec42f92 --- /dev/null +++ b/policies/maintenance/index.html @@ -0,0 +1,2946 @@ + + + + + + + + + + + + + + + + + + + + + + + + Downtime and Maintenance - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Maintenance and Downtime Policy

    +

    +

    Scheduled Maintenance

    +

    The ULHPC team will schedule maintenance in one of three manners:

    +
      +
1. Rolling reboots + Whenever possible, ULHPC will apply updates and perform other maintenance in a rolling fashion, so as to have no impact, or as little impact as possible, on ULHPC services
    2. +
    3. Partial outages + We will do these as needed but in a manner that impacts only some ULHPC services at a time
    4. +
5. Full outages + These are outages that will affect all ULHPC services, such as outages of core datacenter networking services, datacenter power or HVAC/cooling system maintenance, or global GPFS/SpectrumScale filesystem updates. + Such maintenance windows typically happen on a quarterly basis. + It should be noted that we are not always able to anticipate when these outages are needed.
    6. +
    +

ULHPC's goal for these downtimes is to have them completed as fast as possible. + However, validation and qualification of the full platform typically takes one working day, and unforeseen or unusual circumstances may occur, so you should expect a multi-day downtime for such outages.

    +

    Notifications

    +

We normally inform users of cluster maintenance at least 3 weeks in advance by mail using the HPC User community mailing list (moderated): hpc-users@uni.lu. +A second reminder is sent a few days prior to the actual downtime.

    +

    The news of the downtimes is also posted on the Live status page.

    +

Finally, a colored "message of the day" (motd) banner is displayed on all access/login servers so that you are quickly informed of any upcoming maintenance operation upon connection to the cluster. +You can see this when you log in or, again, at any time by issuing the command:

    +
    cat /etc/motd
    +
    + +
    +

    Detecting maintenance... During the maintenance

    +
      +
• During the maintenance period, access to the involved cluster access/login server is DENIED and any users still logged in are disconnected at the beginning of the maintenance
        +
      • you will receive a written message in your terminal
      • +
      • if for some reason during the maintenance you urgently need to collect data from your account, please contact the UL HPC Team by sending a mail to: hpc-team@uni.lu.
      • +
      +
    • +
    • We will notify you of the end of the maintenance with a summary of the performed operations.
    • +
    +
    +

    Exceptional "EMERGENCY" maintenance

    +

    Unscheduled downtimes can occur for any number of reasons, including:

    +
      +
    • Loss of cooling and/or power in the data center.
    • +
    • Loss of supporting infrastructure (i.e. hardware).
    • +
    • Critical need to make changes to hardware +or software that negatively impacts performance or access.
    • +
    • Application of critical patches that can't wait until the next scheduled maintenance.
    • +
    • For safety or security issues that require immediate action.
    • +
    +

We will try to notify users of such events by email.

    +
    +

    Danger

    +

    The ULHPC team reserves the right to intervene in user activity without notice when such activity may destabilize the platform and/or is at the expense of other users, and/or to monitor/verify/debug ongoing system activity.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/policies/passwords/index.html b/policies/passwords/index.html new file mode 100644 index 00000000..ae49605d --- /dev/null +++ b/policies/passwords/index.html @@ -0,0 +1,3036 @@ + + + + + + + + + + + + + + + + + + + + + + + + Password Policy - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    + +
    + + +
    +
    + + + + + + + + + + +

    Password Policy

    + +

    Password and Account Protection

    +

    A user is given a username (also known as a login name) and associated +password that permits her/him to access ULHPC resources. This +username/password pair may be used by a single individual only: +passwords must not be shared with any other person. Users who +share their passwords will have their access to ULHPC disabled.

    +
    +

Do not confuse your UL[HPC] password/passphrase with your SSH passphrase

    +

    We sometimes receive requests to reset your SSH passphrase, which is something you control upon SSH key generation - see SSH documentation.

    +
    +

Passwords must be changed as soon as possible after exposure or suspected compromise. Exposure of passwords and suspected compromises must immediately be reported to ULHPC and the University CISO (see below). +In all cases, recommendations for the creation of strong passwords are provided below.

    +

    Password Manager

    +

You are also strongly encouraged to rely on password manager applications to store your different passwords. You may want to use your browser's embedded solution, but it is not the safest option. +Here is a list of recommended applications:

    + +

    Forgotten Passwords

    +

    If you forget your password or if it has recently expired, you can simply contact us to initiate the process of resetting your password.

    +

    Login Failures

    +

    Your login privileges will be disabled if you have several login failures +while entering your password on a ULHPC resource. You do not need +a new password in this situation. The login failures will be +automatically cleared after a couple of minutes. No additional actions are +necessary.

    +

    How To Change Your Password on IPA

    +

    See IPA documentation

    +
    +

    Tip

    +

    Passwords must be changed under any one of the following circumstances:

    +
      +
    • Immediately after someone else has obtained your password (do NOT give your password to anyone else).
    • +
    • As soon as possible, but at least within one business day after a password has been compromised or after you suspect that a password has been compromised.
    • +
• On direction from ULHPC staff, or when the IPA password policy requires you to change your password periodically.
    • +
    +
    +

    Your new password must adhere to ULHPC's password requirements.

    +

    Password Requirements and Guidelines

    +

    One of the potentially weakest links in computer security is the individual password. Despite the University's and ULHPC's efforts to keep hackers out of your personal files and away from University resources (e.g., email, web files, licensed software), easily-guessed passwords are still a big problem so you should really pay attention to the following guidelines and recommendations.

    +

Recently, the National Institute of Standards and Technology (NIST) has updated +their Digital Identity Guidelines in Special Publication +800-63B. +We have updated our password policy to bring it into closer alignment with these guidelines. In particular, the updated guidance is counter to the long-held philosophy that passwords must be long and complex. In contrast, the new guidelines recommend that passwords should be "easy to remember" but "hard to guess", allowing for usability and security to go hand-in-hand. +Inspired by other password policies and guidelines (Stanford, NERSC), ULHPC thus recommends the usage of "pass phrases" instead of passwords. Pass phrases are longer, but easier to remember than complex passwords, and if well-chosen can provide better protection against hackers. +In addition, the following rules based on password length and usage of Multi-Factor Authentication (MFA) must be satisfied:

    +
      +
• The enforced minimum length for accounts with MFA enabled is 8 characters. If MFA is not enabled for your account, the minimum password length is 14 characters.
    • +
    • The ability to use all special characters according to the following guidelines (see also the Stanford Password Requirements Quick Guide) depending on the password length:
        +
      • 8-11: mixed case letters, numbers, & symbols
      • +
      • 12-15: mixed case letters & numbers
      • +
      • 16-19: mixed case letters
      • +
      • 20+: no restrictions
      • +
      • illustrating image
      • +
      +
    • +
    • Restrict sequential and repetitive characters (e.g. 12345 or aaaaaa)
    • +
    • Restrict context specific passwords (e.g. the name of the site, etc.)
    • +
    • Restrict commonly used passwords (e.g. p@ssw0rd, etc.) and dictionary words
    • +
    • Restrict passwords obtained from previous breach corpuses
    • +
    • Passwords must be changed every six months.
    • +
    +

If you are struggling to come up with a good password, you can take inspiration from the following approach:

    +
    Creating a pass phrase (source: Stanford password policy)

    A pass phrase is basically just a series of words, which can include spaces, that you employ instead of a single pass "word." Pass phrases should be at least 16 to 25 characters in length (spaces count as characters), but no less. Longer is better because, though pass phrases look simple, the increased length provides so many possible permutations that a standard password-cracking program will not be effective. It is always a good thing to disguise that simplicity by throwing in elements of weirdness, nonsense, or randomness. Here, for example, are a couple pass phrase candidates:

    +
    +

    pizza with crispy spaniels

    +

    mangled persimmon therapy

    +
    +

    Punctuate and capitalize your phrase:

    +
    +

    Pizza with crispy Spaniels!

    +

    mangled Persimmon Therapy?

    +
    +

    Toss in a few numbers or symbols from the top row of the keyboard, plus some deliberately misspelled words, and you'll create an almost unguessable key to your account:

    +
    +

    Pizza w/ 6 krispy Spaniels!

    +

    mangl3d Persimmon Th3rapy?

    +
    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/policies/usage-charging/index.html b/policies/usage-charging/index.html new file mode 100644 index 00000000..f15b5345 --- /dev/null +++ b/policies/usage-charging/index.html @@ -0,0 +1,3072 @@ + + + + + + + + + + + + + + + + + + + + + + + + Usage Charging Policy - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + + + +
    +
    + + + + + + + + + + +

    ULHPC Usage Charging Policy

    +
    +

    The advertised prices are for internal partners only

    +

The price list and all other information on this page are meant for internal partners, i.e., not for external companies. +If you are not an internal partner, please contact us at hpc-partnership@uni.lu. Alternatively, you can contact LuxProvide, the national HPC center, which aims at serving the private sector for HPC needs.

    +
    +

    How to estimate HPC costs for projects?

    +

You can use the following Excel document to estimate the cost of your HPC usage:

    +

    UL HPC Cost Estimates for Project Proposals [xlsx]

    +

    Note that there are two sheets offering two ways to estimate based on your specific situation. Please read the red sections to ensure that you are using the correct estimation sheet.

    +

    Note that even if you plan for large-scale experiments on PRACE/EuroHPC supercomputers through computing credits granted by Call for Proposals for Project Access, you should plan for ULHPC costs since you will have to demonstrate the scalability of your code -- the University's facility is ideal for that. You can contact hpc-partnership@uni.lu for more details about this.

    +

    HPC price list - 2022-10-01

    +

Note that the ULHPC price list has been updated; see below.

    +

    Compute

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Compute type | Description | € (excl. VAT) / node-hour
CPU - small | 28 cores, 128 GB RAM | 0.25€
CPU - regular | 128 cores, 256 GB RAM | 1.25€
CPU - big mem | 112 cores, 3 TB RAM | 6.00€
GPU | 4 V100, 28 cores, 768 GB RAM | 5.00€
    +

    The prices above correspond to a full-node cost. However, jobs can use a fraction of a node and the price of the job will be computed based on that fraction. Please find below the core-hour / GPU-hour costs and how we compute how much to charge:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Compute type | Unit | € (excl. VAT)
CPU - small | Core-hour | 0.0089€
CPU - regular | Core-hour | 0.0097€
CPU - big mem | Core-hour | 0.0535€
GPU | GPU-hour | 1.25€
    +

For CPU nodes, the fraction corresponds to the number of requested cores, e.g. 64 cores on a CPU - regular node corresponds to 50% of the available cores and thus will be charged 50% of 1.25€.

    +

Regarding the RAM of a job, if you do not override the default behaviour, you will receive a percentage of the RAM corresponding to the number of requested cores, e.g., 128G of RAM for the 64-core example above (50% of a CPU - regular node). If you override the default behaviour and request more RAM, we will re-compute the equivalent number of cores, e.g. if you request 256G of RAM and 64 cores, we will charge 128 cores.

    +

For GPU nodes, the fraction considers the number of GPUs. There are 4 GPUs, 28 cores and 768G of RAM on one machine. This means that for each GPU, you can have up to 7 cores and 192G of RAM. If you request more than those defaults, we will re-compute the GPU equivalent, e.g. if you request 1 GPU and 8 cores, we will charge 2 GPUs.
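As an illustration only (a hypothetical sketch assuming the GPU equivalent is rounded up to whole GPUs, consistent with the 1 GPU + 8 cores example above), the charged GPU equivalent of a request could be computed as:

# Hypothetical sketch: charged GPU equivalent on a GPU node
# (4 GPUs, 28 cores, 768G RAM per node => 7 cores and 192G RAM per GPU)
gpus=1 ; cores=8 ; mem_gb=128
awk -v g="$gpus" -v c="$cores" -v m="$mem_gb" 'BEGIN {
  core_eq = (c % 7   == 0) ? c/7   : int(c/7)   + 1    # ceil(cores / 7)
  mem_eq  = (m % 192 == 0) ? m/192 : int(m/192) + 1    # ceil(mem_gb / 192)
  x = g; if (core_eq > x) x = core_eq; if (mem_eq > x) x = mem_eq
  print "Charged GPU equivalent:", x                   # -> 2 for this request
}'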

    +

    Storage

    + + + + + + + + + + + + + + + + + + + + + + + + + +
Storage type | € (excl. VAT) / GB / Month | Additional information
Home | Free | 500 GB
Project | 0.02€ | 1 TB free
Scratch | Free | 10 TB
    +

    Note that for project storage, we charge the quota and not the used storage.

    +

    HPC Resource allocation for UL internal R&D and training

    + + +

ULHPC resources are free of charge for UL staff for their internal work and training activities. +Principal Investigators (PIs) will nevertheless receive, on a regular basis, a usage report of their team's activities on the UL HPC platform. +The corresponding accumulated price will be provided, even though this amount is purely indicative and won't be charged back.

    +

Any other activities will be reviewed with the rectorate and are a priori subject to billing.

    + + + +

    To allow the ULHPC team to keep track of the jobs related to a project, use the -A <projectname> flag in Slurm, either in the Slurm directives preamble of your script, e.g.,

    +
    #SBATCH -A myproject
    +
    + +

    or on the command line when you submit your job, e.g., sbatch -A myproject /path/to/launcher.sh
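Putting things together, a minimal (hypothetical) launcher preamble charging a job to a project account might look like:

#!/bin/bash -l
#SBATCH -A myproject            # charge the job to this project account
#SBATCH -N 1
#SBATCH --ntasks-per-node=128
#SBATCH -t 0-06:00:00
srun ./my_app                   # hypothetical application binary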

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 00000000..9e804121 --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"lang":["en"],"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Uni.lu HPC Technical Documentation \u00b6 hpc-docs.uni.lu is a resource with the technical details for users to make effective use of the Uni.lu High Performance Computing (ULHPC) Facility 's resources. ULHPC Supercomputers Getting Started New: monthly HPC trainings for beginners, see our dedicated page . ULHPC Web Portals \u00b6 ULHPC Platform status ULHPC Tutorials - tutorials for many HPC topics Helpdesk / Ticket Portal - open tickets, make requests ULHPC Discourse - forum-like community portal ULHPC Home page - center news and information: hpc.uni.lu Yearly HPC School - our big ULHPC event Popular documentation pages \u00b6 SSH Management on ULHPC Identity Management Portal (IdM/IPA) Usage Charging Policy Job Status and Reason Codes Job Prioritization Factors Example of Job Launchers - currated example of job launcher scripts Slurm overview - Slurm commands, job script basics, submitting, updating jobs Join and Monitor Jobs File permissions - Unix file permissions ULHPC Software/Modules Environment Compiling/Building your own software About this site The ULHPC Technical Documentation is based on MkDocs and the mkdocs-material theme, and inspired by the (excellent) NERSC documentation site. These pages are hosted from a git repository and contributions are welcome!","title":"Home"},{"location":"#unilu-hpc-technical-documentation","text":"hpc-docs.uni.lu is a resource with the technical details for users to make effective use of the Uni.lu High Performance Computing (ULHPC) Facility 's resources. ULHPC Supercomputers Getting Started New: monthly HPC trainings for beginners, see our dedicated page .","title":"Uni.lu HPC Technical Documentation"},{"location":"#ulhpc-web-portals","text":"ULHPC Platform status ULHPC Tutorials - tutorials for many HPC topics Helpdesk / Ticket Portal - open tickets, make requests ULHPC Discourse - forum-like community portal ULHPC Home page - center news and information: hpc.uni.lu Yearly HPC School - our big ULHPC event","title":"ULHPC Web Portals"},{"location":"#popular-documentation-pages","text":"SSH Management on ULHPC Identity Management Portal (IdM/IPA) Usage Charging Policy Job Status and Reason Codes Job Prioritization Factors Example of Job Launchers - currated example of job launcher scripts Slurm overview - Slurm commands, job script basics, submitting, updating jobs Join and Monitor Jobs File permissions - Unix file permissions ULHPC Software/Modules Environment Compiling/Building your own software About this site The ULHPC Technical Documentation is based on MkDocs and the mkdocs-material theme, and inspired by the (excellent) NERSC documentation site. These pages are hosted from a git repository and contributions are welcome!","title":"Popular documentation pages"},{"location":"getting-started/","text":"Getting Started on ULHPC Facilities \u00b6 Welcome to the High Performance Computing (HPC) Facility of the University of Luxembourg (ULHPC)! This page will guide you through the basics of using ULHPC's supercomputers, storage systems, and services. What is ULHPC ? \u00b6 HPC is crucial in academic environments to achieve high-quality results in all application areas. 
All world-class universities require this type of facility to accelerate its research and ensure cutting-edge results in time to face the global competition. What is High Performance Computing? If you're new to all of this, this is probably the first question you have in mind. Here is a possible definition: \" High Performance Computing (HPC) most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business. \" Indeed, with the advent of the technological revolution and the digital transformation that made all scientific disciplines becoming computational nowadays, High-Performance Computing (HPC) is increasingly identified as a strategic asset and enabler to accelerate the research performed in all areas requiring intensive computing and large-scale Big Data analytic capabilities. Tasks which would typically require several years or centuries to be computed on a typical desktop computer may only require a couple of hours, days or weeks over an HPC system. For more details, you may want to refer to this Inside HPC article . Since 2007, the University of Luxembourg (UL) has invested tens of millions of euros into its own HPC facilities to responds to the growing needs for increased computing and storage. ULHPC (sometimes referred to as Uni.lu HPC) is the entity providing High Performance Computing and Big Data Storage services and support for UL researchers and its external partners. The University manages several research computing facilities located on the Belval campus , offering a cutting-edge research infrastructure to Luxembourg public research while serving as edge access to bigger systems from PRACE or EuroHPC, such as the Euro-HPC Luxembourg supercomputer \" MeluXina \". Warning In particular, the ULHPC is NOT the national HPC center of Luxembourg, but simply one of its strategic partner operating the second largest HPC facility of the country. The HPC facility is one element of the extensive digital research infrastructure and expertise developed by the University over the last years. It also supports the University\u2019s ambitious digital strategy and in particular the creation of a Facility for Data and HPC Sciences. This facility aims to provide a world-class user-driven digital infrastructure and services for fostering the development of collaborative activities related to frontier research and teaching in the fields of Computational and Data Sciences, including High Performance Computing, Data Analytics, Big Data Applications, Artificial Intelligence and Machine Learning. Reference ULHPC Article to cite If you want to get a good overview of the way our facility is setup, managed and evaluated, you can refer to the reference article you are in all cases entitled to refer to when crediting the ULHPC facility as per AUP (see also the publication page instructions ). ACM Reference Format | ORBilu entry | ULHPC blog post | slides : Sebastien Varrette, Hyacinthe Cartiaux, Sarah Peter, Emmanuel Kieffer, Teddy Valette, and Abatcha Olloh. 2022. Management of an Academic HPC & Research Computing Facility: The ULHPC Experience 2.0. In 6 th High Performance Computing and Cluster Technologies Conference (HPCCT 2022), July 08-10, 2022, Fuzhou, China. ACM, New York, NY, USA, 14 pages. 
https://doi.org/10.1145/3560442.3560445 Supercomputing and Storage Resources at a glance \u00b6 ULHPC is a strategic asset of the university and an important factor for the scientific and therefore also economic competitiveness of the Grand Duchy of Luxembourg. We provide a key research infrastructure featuring state-of-the-art computing and storage resources serving the UL HPC community primarily composed by UL researchers. The UL HPC platform has kept growing over time thanks to the continuous efforts of the core HPC / Digital Platform team - contact: hpc-team@uni.lu , recently completed with the EuroHPC Competence Center Task force (A. Vandeventer (Project Manager), L. Koutsantonis). ULHPC Computing and Storage Capacity (2022) Installed in the premises of the University\u2019s Centre de Calcul (CDC), the UL HPC facilities provides a total computing capacity of 2.76 PetaFlops and a shared storage capacity of around 10 PetaBytes . How big is 1 PetaFlops? 1 PetaByte? 1 PetaFlops = 10 15 floating-point operations per second (PFlops or PF for short), corresponds to the cumulative performance of more than 3510 Macbook Pro 13\" laptops 1 , or 7420 iPhone XS 2 1 PetaByte = 10 15 bytes = 8*10 15 bits, corresponding to the cumulative raw capacity of more than 1950 SSDs 512GB. This places the HPC center of the University of Luxembourg as one of the major actors in HPC and Big Data for the Greater Region Saar-Lor-Lux. In practice, the UL HPC Facility features 3 types of computing resources: \" regular \" nodes: Dual CPU, no accelerators, 128 to 256 GB of RAM \" gpu \" nodes: Dual CPU, 4 Nvidia accelerators, 768 GB RAM \" bigmem \" nodes: Quad-CPU, no accelerators, 3072 GB RAM These resources can be reserved and allocated for the execution of jobs scheduled on the platform thanks to a Resource and Job Management Systems (RJMS) - Slurm in practice. This tool allows for a fine-grain analysis and accounting of the used resources, facilitating the generation of activity reports for a given time period. Iris \u00b6 iris , in production since June 2017, is a Dell/Intel supercomputer with a theoretical peak performance of 1082 TFlop/s , featuring 196 computing nodes (totalling 5824 computing cores) and 96 GPU accelerators (NVidia V100). Iris Detailed system specifications Aion \u00b6 aion , in production since October 2020, is a Bull Sequana XH2000 /AMD supercomputer offering a peak performance of 1692 TFlop/s , featuring 318 compute nodes (totalling 40704 computing cores). Aion Detailed system specifications GPFS/SpectrumScale File System ( $HOME , project) \u00b6 IBM Spectrum Scale , formerly known as the General Parallel File System (GPFS), is global high -performance clustered file system available on all ULHPC computational systems through a DDN GridScaler/GS7K system. It allows sharing homedirs and project data between users, systems, and eventually (i.e. if needed) with the \"outside world\". GPFS/Spectrumscale Detailed specifications Lustre File System ( $SCRATCH ) \u00b6 The Lustre file system is an open-source, parallel file system that supports many requirements of leadership class HPC simulation environments. It is available as a global high -performance file system on all ULHPC computational systems through a DDN ExaScaler and is meant to host temporary scratch data . 
Lustre Detailed specifications OneFS File System (project, backup, archival) \u00b6 In 2014, the SIU, the UL HPC and the LCSB join their forces (and their funding) to acquire a scalable and modular NAS solution able to sustain the need for an internal big data storage, i.e. provides space for centralized data and backups of all devices used by the UL staff and all research-related data, including the one proceed on the UL HPC platform. A global low -performance Dell/EMC Isilon system is available on all ULHPC computational systems. It is intended for long term storage of data that is not frequently accessed. For more details, see Isilon specifications . Fast Infiniband Network \u00b6 High Performance Computing (HPC) encompasses advanced computation over parallel processing, enabling faster execution of highly compute intensive tasks. The execution time of a given simulation depends upon many factors, such as the number of CPU/GPU cores and their utilisation factor and the interconnect performance, efficiency, and scalability. InfiniBand is the fast interconnect technology implemented within all ULHPC supercomputers , more specifically: Iris relies on a EDR Infiniband (IB) Fabric in a Fat-Tree Topology Aion relies on a HDR100 Infiniband (IB) Fabric in a Fat-Tree Topology For more details, see ULHPC IB Network Detailed specifications . Acceptable Use Policy (AUP) \u00b6 There are a number of policies which apply to ULHPC users. UL HPC Acceptable Use Policy (AUP) [pdf] Important All users of UL HPC resources and PIs must abide by the UL HPC Acceptable Use Policy (AUP) . You should read and keep a signed copy of this document before using the facility. Access and/or usage of any ULHPC system assumes the tacit acknowledgement to this policy. ULHPC Accounts \u00b6 In order to use the ULHPC facilities, you need to have a user account with an associated user login name (also called username) placed under an account hierarchy. Get a ULHPC account Understanding Slurm account hierarchy and accounting rules ULHPC Identity Management (IPA portal) Password policy Usage Charging Policy Connecting to ULHPC supercomputers \u00b6 MFA is strongly encouraged for all ULHPC users It will be soon become mandatory - detailed instructions will be provided soon. SSH Open On Demand Portal ULHPC Login/Access servers Troubleshooting connection problems Data Management \u00b6 Global Directory Structure Transferring data : Tools and recommendations to transfer data both inside and outside of ULHPC. Quotas Understanding Unix File Permissions User Environment \u00b6 Info $HOME , Project and $SCRATCH directories are shared across all ULHPC systems, meaning that every file/directory pushed or created on the front-end is available on the computing nodes every file/directory pushed or created on the computing nodes is available on the front-end ULHPC User Environment Computing Software Environment \u00b6 The ULHPC Team supplies a large variety of HPC utilities, scientific applications and programming libraries to its user community. The user software environment is generated using Easybuild (EB) and is made available as environment modules through LMod . ULHPC Modules Environment ULHPC Supported Software List . Available modules are reachable from the compute nodes only via module avail ULHPC Easybuild Configuration Running Containers Software building support If you need help to build / develop software, we encourage you to first try using Easybuild as a recipe probably exist for the software you consider. 
You can then open a ticket on HPC Help Desk Portal and we will evaluate the cost and effort required. You may also ask the help of other ULHPC users using the HPC User community mailing list: (moderated): `hpc-users@uni.lu . Running Jobs \u00b6 Typical usage of the ULHPC supercomputers involves the reservation and allocation of computing resources for the execution of jobs (submitted via launcher scripts ) and scheduled on the platform thanks to a Resource and Job Management Systems (RJMS) - Slurm in our case. Slurm on ULHPC clusters Convenient Slurm Commands Rich set of launcher scripts examples Fairshare Job Priority and Backfilling Job Accounting and Billing Interactive Computing \u00b6 ULHPC also supports interactive computing. Interactive jobs Jupyter Notebook Getting Help \u00b6 ULHPC places a very strong emphasis on enabling science and providing user-oriented systems and services. Documentation \u00b6 We have always maintained an extensive documentation and HPC tutorials available online, which aims at being the most up-to-date and comprehensive while covering many (many) topics. ULHPC Technical Documentation ULHPC Tutorials The ULHPC Team welcomes your contributions These pages are hosted from a git repository and contributions are welcome! Fork this repo Support \u00b6 ULHPC Support Overview Service Now HPC Support Portal Availability and Response Time HPC support is provided on a volunteer basis by UL HPC staff and associated UL experts working at normal business hours. We offer no guarantee on response time except with paid support contracts. The best MacBook Pro 13\" in 2020 is equiped with Ice Lake 2 GHz Intel Quad-Core i5 processors with an estimated computing performance of 284.3 Gflops as measured by the Geekbench 4 multi-core benchmarks platform, with SGEMM \u21a9 Apple A12 Bionic, the 64-bit ARM-based system on a chip (SoC) proposed on the iPhone XS has an estimated performance of 134.7 GFlops as measured by the Geekbench 4 multi-core benchmarks platform, with SGEMM \u21a9","title":"Getting Started"},{"location":"getting-started/#getting-started-on-ulhpc-facilities","text":"Welcome to the High Performance Computing (HPC) Facility of the University of Luxembourg (ULHPC)! This page will guide you through the basics of using ULHPC's supercomputers, storage systems, and services.","title":"Getting Started on ULHPC Facilities"},{"location":"getting-started/#what-is-ulhpc","text":"HPC is crucial in academic environments to achieve high-quality results in all application areas. All world-class universities require this type of facility to accelerate its research and ensure cutting-edge results in time to face the global competition. What is High Performance Computing? If you're new to all of this, this is probably the first question you have in mind. Here is a possible definition: \" High Performance Computing (HPC) most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business. \" Indeed, with the advent of the technological revolution and the digital transformation that made all scientific disciplines becoming computational nowadays, High-Performance Computing (HPC) is increasingly identified as a strategic asset and enabler to accelerate the research performed in all areas requiring intensive computing and large-scale Big Data analytic capabilities. 
Tasks which would typically require several years or centuries to be computed on a typical desktop computer may only require a couple of hours, days or weeks over an HPC system. For more details, you may want to refer to this Inside HPC article . Since 2007, the University of Luxembourg (UL) has invested tens of millions of euros into its own HPC facilities to responds to the growing needs for increased computing and storage. ULHPC (sometimes referred to as Uni.lu HPC) is the entity providing High Performance Computing and Big Data Storage services and support for UL researchers and its external partners. The University manages several research computing facilities located on the Belval campus , offering a cutting-edge research infrastructure to Luxembourg public research while serving as edge access to bigger systems from PRACE or EuroHPC, such as the Euro-HPC Luxembourg supercomputer \" MeluXina \". Warning In particular, the ULHPC is NOT the national HPC center of Luxembourg, but simply one of its strategic partner operating the second largest HPC facility of the country. The HPC facility is one element of the extensive digital research infrastructure and expertise developed by the University over the last years. It also supports the University\u2019s ambitious digital strategy and in particular the creation of a Facility for Data and HPC Sciences. This facility aims to provide a world-class user-driven digital infrastructure and services for fostering the development of collaborative activities related to frontier research and teaching in the fields of Computational and Data Sciences, including High Performance Computing, Data Analytics, Big Data Applications, Artificial Intelligence and Machine Learning. Reference ULHPC Article to cite If you want to get a good overview of the way our facility is setup, managed and evaluated, you can refer to the reference article you are in all cases entitled to refer to when crediting the ULHPC facility as per AUP (see also the publication page instructions ). ACM Reference Format | ORBilu entry | ULHPC blog post | slides : Sebastien Varrette, Hyacinthe Cartiaux, Sarah Peter, Emmanuel Kieffer, Teddy Valette, and Abatcha Olloh. 2022. Management of an Academic HPC & Research Computing Facility: The ULHPC Experience 2.0. In 6 th High Performance Computing and Cluster Technologies Conference (HPCCT 2022), July 08-10, 2022, Fuzhou, China. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3560442.3560445","title":"What is ULHPC ?"},{"location":"getting-started/#supercomputing-and-storage-resources-at-a-glance","text":"ULHPC is a strategic asset of the university and an important factor for the scientific and therefore also economic competitiveness of the Grand Duchy of Luxembourg. We provide a key research infrastructure featuring state-of-the-art computing and storage resources serving the UL HPC community primarily composed by UL researchers. The UL HPC platform has kept growing over time thanks to the continuous efforts of the core HPC / Digital Platform team - contact: hpc-team@uni.lu , recently completed with the EuroHPC Competence Center Task force (A. Vandeventer (Project Manager), L. Koutsantonis). ULHPC Computing and Storage Capacity (2022) Installed in the premises of the University\u2019s Centre de Calcul (CDC), the UL HPC facilities provides a total computing capacity of 2.76 PetaFlops and a shared storage capacity of around 10 PetaBytes . How big is 1 PetaFlops? 1 PetaByte? 
1 PetaFlops = 10 15 floating-point operations per second (PFlops or PF for short), corresponds to the cumulative performance of more than 3510 Macbook Pro 13\" laptops 1 , or 7420 iPhone XS 2 1 PetaByte = 10 15 bytes = 8*10 15 bits, corresponding to the cumulative raw capacity of more than 1950 SSDs 512GB. This places the HPC center of the University of Luxembourg as one of the major actors in HPC and Big Data for the Greater Region Saar-Lor-Lux. In practice, the UL HPC Facility features 3 types of computing resources: \" regular \" nodes: Dual CPU, no accelerators, 128 to 256 GB of RAM \" gpu \" nodes: Dual CPU, 4 Nvidia accelerators, 768 GB RAM \" bigmem \" nodes: Quad-CPU, no accelerators, 3072 GB RAM These resources can be reserved and allocated for the execution of jobs scheduled on the platform thanks to a Resource and Job Management Systems (RJMS) - Slurm in practice. This tool allows for a fine-grain analysis and accounting of the used resources, facilitating the generation of activity reports for a given time period.","title":"Supercomputing and Storage Resources at a glance"},{"location":"getting-started/#iris","text":"iris , in production since June 2017, is a Dell/Intel supercomputer with a theoretical peak performance of 1082 TFlop/s , featuring 196 computing nodes (totalling 5824 computing cores) and 96 GPU accelerators (NVidia V100). Iris Detailed system specifications","title":"Iris"},{"location":"getting-started/#aion","text":"aion , in production since October 2020, is a Bull Sequana XH2000 /AMD supercomputer offering a peak performance of 1692 TFlop/s , featuring 318 compute nodes (totalling 40704 computing cores). Aion Detailed system specifications","title":"Aion"},{"location":"getting-started/#gpfsspectrumscale-file-system-home-project","text":"IBM Spectrum Scale , formerly known as the General Parallel File System (GPFS), is global high -performance clustered file system available on all ULHPC computational systems through a DDN GridScaler/GS7K system. It allows sharing homedirs and project data between users, systems, and eventually (i.e. if needed) with the \"outside world\". GPFS/Spectrumscale Detailed specifications","title":"GPFS/SpectrumScale File System ($HOME, project)"},{"location":"getting-started/#lustre-file-system-scratch","text":"The Lustre file system is an open-source, parallel file system that supports many requirements of leadership class HPC simulation environments. It is available as a global high -performance file system on all ULHPC computational systems through a DDN ExaScaler and is meant to host temporary scratch data . Lustre Detailed specifications","title":"Lustre File System ($SCRATCH)"},{"location":"getting-started/#onefs-file-system-project-backup-archival","text":"In 2014, the SIU, the UL HPC and the LCSB join their forces (and their funding) to acquire a scalable and modular NAS solution able to sustain the need for an internal big data storage, i.e. provides space for centralized data and backups of all devices used by the UL staff and all research-related data, including the one proceed on the UL HPC platform. A global low -performance Dell/EMC Isilon system is available on all ULHPC computational systems. It is intended for long term storage of data that is not frequently accessed. 
For more details, see Isilon specifications .","title":"OneFS File System (project, backup, archival)"},{"location":"getting-started/#fast-infiniband-network","text":"High Performance Computing (HPC) encompasses advanced computation over parallel processing, enabling faster execution of highly compute intensive tasks. The execution time of a given simulation depends upon many factors, such as the number of CPU/GPU cores and their utilisation factor and the interconnect performance, efficiency, and scalability. InfiniBand is the fast interconnect technology implemented within all ULHPC supercomputers , more specifically: Iris relies on a EDR Infiniband (IB) Fabric in a Fat-Tree Topology Aion relies on a HDR100 Infiniband (IB) Fabric in a Fat-Tree Topology For more details, see ULHPC IB Network Detailed specifications .","title":"Fast Infiniband Network"},{"location":"getting-started/#acceptable-use-policy-aup","text":"There are a number of policies which apply to ULHPC users. UL HPC Acceptable Use Policy (AUP) [pdf] Important All users of UL HPC resources and PIs must abide by the UL HPC Acceptable Use Policy (AUP) . You should read and keep a signed copy of this document before using the facility. Access and/or usage of any ULHPC system assumes the tacit acknowledgement to this policy.","title":"Acceptable Use Policy (AUP)"},{"location":"getting-started/#ulhpc-accounts","text":"In order to use the ULHPC facilities, you need to have a user account with an associated user login name (also called username) placed under an account hierarchy. Get a ULHPC account Understanding Slurm account hierarchy and accounting rules ULHPC Identity Management (IPA portal) Password policy Usage Charging Policy","title":"ULHPC Accounts"},{"location":"getting-started/#connecting-to-ulhpc-supercomputers","text":"MFA is strongly encouraged for all ULHPC users It will be soon become mandatory - detailed instructions will be provided soon. SSH Open On Demand Portal ULHPC Login/Access servers Troubleshooting connection problems","title":"Connecting to ULHPC supercomputers"},{"location":"getting-started/#data-management","text":"Global Directory Structure Transferring data : Tools and recommendations to transfer data both inside and outside of ULHPC. Quotas Understanding Unix File Permissions","title":"Data Management"},{"location":"getting-started/#user-environment","text":"Info $HOME , Project and $SCRATCH directories are shared across all ULHPC systems, meaning that every file/directory pushed or created on the front-end is available on the computing nodes every file/directory pushed or created on the computing nodes is available on the front-end ULHPC User Environment","title":"User Environment"},{"location":"getting-started/#computing-software-environment","text":"The ULHPC Team supplies a large variety of HPC utilities, scientific applications and programming libraries to its user community. The user software environment is generated using Easybuild (EB) and is made available as environment modules through LMod . ULHPC Modules Environment ULHPC Supported Software List . Available modules are reachable from the compute nodes only via module avail ULHPC Easybuild Configuration Running Containers Software building support If you need help to build / develop software, we encourage you to first try using Easybuild as a recipe probably exist for the software you consider. You can then open a ticket on HPC Help Desk Portal and we will evaluate the cost and effort required. 
You may also ask the help of other ULHPC users using the HPC User community mailing list: (moderated): `hpc-users@uni.lu .","title":"Computing Software Environment"},{"location":"getting-started/#running-jobs","text":"Typical usage of the ULHPC supercomputers involves the reservation and allocation of computing resources for the execution of jobs (submitted via launcher scripts ) and scheduled on the platform thanks to a Resource and Job Management Systems (RJMS) - Slurm in our case. Slurm on ULHPC clusters Convenient Slurm Commands Rich set of launcher scripts examples Fairshare Job Priority and Backfilling Job Accounting and Billing","title":"Running Jobs"},{"location":"getting-started/#interactive-computing","text":"ULHPC also supports interactive computing. Interactive jobs Jupyter Notebook","title":"Interactive Computing"},{"location":"getting-started/#getting-help","text":"ULHPC places a very strong emphasis on enabling science and providing user-oriented systems and services.","title":"Getting Help"},{"location":"getting-started/#documentation","text":"We have always maintained an extensive documentation and HPC tutorials available online, which aims at being the most up-to-date and comprehensive while covering many (many) topics. ULHPC Technical Documentation ULHPC Tutorials The ULHPC Team welcomes your contributions These pages are hosted from a git repository and contributions are welcome! Fork this repo","title":"Documentation"},{"location":"getting-started/#support","text":"ULHPC Support Overview Service Now HPC Support Portal Availability and Response Time HPC support is provided on a volunteer basis by UL HPC staff and associated UL experts working at normal business hours. We offer no guarantee on response time except with paid support contracts. The best MacBook Pro 13\" in 2020 is equiped with Ice Lake 2 GHz Intel Quad-Core i5 processors with an estimated computing performance of 284.3 Gflops as measured by the Geekbench 4 multi-core benchmarks platform, with SGEMM \u21a9 Apple A12 Bionic, the 64-bit ARM-based system on a chip (SoC) proposed on the iPhone XS has an estimated performance of 134.7 GFlops as measured by the Geekbench 4 multi-core benchmarks platform, with SGEMM \u21a9","title":"Support"},{"location":"hpc-schools/","text":"On-site HPC trainings and tutorials \u00b6 We propose periodical on-site events for our users. They are free of charge and can be attended by anyone from the University of Luxembourg faculties and interdisciplinary centers. Additionally, we also accept users from LIST, LISER and LIH. If you are part of another public research center, please contact us . Forthcoming events \u00b6 HPC School for beginners - July 2024, 1 st -2 nd , 1.040, MNO - Belval Campus Python HPC School - March 2024, 27-28 th , 1.030 MNO - Belval Campus HPC School for beginners \u00b6 This event aims to equip you with essential skills and knowledge to embark on your High-Performance Computing journey. The event is organized monthly and is composed of two half days (usually 9am-12pm). Feel free to only attend the second day session if: You can connect to the ULHPC You are comfortable with the command line interface Limited spots available per session (usually 30 max). Upcoming sessions: \u00b6 Date: July 2024, 1 st -2 nd Time: 9am to 12pm (both days). Location: 1.040, MNO - Belval Campus. Morning 1 - Accessing the Cluster and Command Line Introduction \u00b6 Learn how to access the HPC cluster, set up your machine, and navigate the command line interface effectively. 
Gain confidence in interacting with the cluster environment. Morning 2 - Understanding HPC Workflow: Job Submission and Monitoring \u00b6 Explore the inner workings of HPC systems. Discover the process of submitting and managing computational tasks. Learn how to monitor and optimize job performance. Python HPC School \u00b6 In this workshop, we will explore the process of improving Python code for efficient execution. Chances are, you 're already familiar with Python and Numpy. However, we will start by mastering profiling and efficient NumPy usage as these are crucial steps before venturing into parallelization. Once your code is fine-tuned with Numpy we will explore the utilization of Python's parallel libraries to unlock the potential of using multiple CPU cores. By the end, you will be well equipped to harness Python's potential for high-performance tasks on the HPC infrastructure. Target Audience Description \u00b6 The workshop is designed for individuals who are interested in advancing their skills and knowledge in Python-based scientific and data computing. The ideal participants would typically possess basic to intermediate Python and Numpy skills, along with some familiarity with parallel programming. This workshop will give a good starting point to leverage the usage of the HPC computing power to speed up your Python programs. Upcoming sessions \u00b6 Limited spots available per session (usually 30 max). Date: March, 2024, 27 th and 28 th . Time: 10h to 12h and 14h to 16h (both days). Location: MNO 1.030. - Belval campus First day \u2013 Jupyter notebook on ULHPC / profiling efficient usage of Numpy \u00b6 Program \u00b6 Setting up a Jupyter notebook on an HPC node - 10am to 11am Taking time and profiling python code - 11am to 12pm Lunch break - 12pm to 2pm Numpy basics for replacing python loops for efficient computations - 2pm to 4pm Requirements \u00b6 Having an HPC account to access the cluster. Basic knowledge on SLURM (beginners HPC school). A basic understanding of Python programming. Familiarity with Jupyter Notebook (installed and configured). A basic understanding of Numpy and linear algebra. Second day \u2013 Improving performance with python parallel packages \u00b6 Program \u00b6 Use case understanding and Python implementation - 10am to 10:30am Numpy implementation - 10:30am to 11am Python\u2019s Multiprocessing - 11am to 12pm Lunch break - 12pm to 2pm PyMP - 2pm to 2:30pm Cython - 2:30pm to 3pm Numba and final remarks- 3pm to 4pm Requirements \u00b6 Having an HPC account to access the cluster. Basic knowledge on SLURM (beginners HPC school). A basic understanding of Python programming. Familiarity with Jupyter Notebook (installed and configured). A basic understanding of Numpy and linear algebra. Familiarity with parallel programming. Conda environment management for Python and R \u00b6 The creation of Conda environments is supported in the University of Luxembourg HPC systems. But when Conda environments are needed and what tools are available to create Conda environments? Attend this tutorial if your projects involve R or Python and you need support with installing packages. The topics that will be covered include: how to install packages using the facilities available in R and Python, how to document and exchange environment setups, when a Conda environment is required for a project, and what tools are available for the creation of Conda environments. Upcoming sessions \u00b6 Mar. 
2024 (please await further announcements regarding specific dates) Introduction to numerical methods with BLAS \u00b6 This seminar covers basic principles of numerical library usage with BLAS as an example. The library mechanisms for organizing software are studied in detail, covering topics such as the differences between static and dynamic libraries. The practical sessions will demonstrate the generation of library files from source code, and how programs can use library functions. After an overview of software libraries, the BLAS library is presented, including the available operations and the organization of the code. The attendees will have the opportunity to use functions of BLAS in a few practical examples. The effects of caches in numerical library performance are then studied in detail. In the practical sessions the attendees will have the opportunity to try cache aware programming techniques that better exploit the performance of the available hardware. Overall in this seminar you learn how to: compile libraries from source code, compile and link code that uses numerical libraries, understand the effects of caches in numerical library performance, and exploit caches to leverage better performance. Upcoming sessions \u00b6 No sessions are planned at the moment. Future sessions will be announced here, please wait for announcements or contact the HPC team via email to express your interest.","title":"HPC Trainings and Tutorials"},{"location":"hpc-schools/#on-site-hpc-trainings-and-tutorials","text":"We propose periodical on-site events for our users. They are free of charge and can be attended by anyone from the University of Luxembourg faculties and interdisciplinary centers. Additionally, we also accept users from LIST, LISER and LIH. If you are part of another public research center, please contact us .","title":"On-site HPC trainings and tutorials"},{"location":"hpc-schools/#forthcoming-events","text":"HPC School for beginners - July 2024, 1 st -2 nd , 1.040, MNO - Belval Campus Python HPC School - March 2024, 27-28 th , 1.030 MNO - Belval Campus","title":"Forthcoming events"},{"location":"hpc-schools/#hpc-school-for-beginners","text":"This event aims to equip you with essential skills and knowledge to embark on your High-Performance Computing journey. The event is organized monthly and is composed of two half days (usually 9am-12pm). Feel free to only attend the second day session if: You can connect to the ULHPC You are comfortable with the command line interface Limited spots available per session (usually 30 max).","title":"HPC School for beginners"},{"location":"hpc-schools/#upcoming-sessions","text":"Date: July 2024, 1 st -2 nd Time: 9am to 12pm (both days). Location: 1.040, MNO - Belval Campus.","title":"Upcoming sessions:"},{"location":"hpc-schools/#morning-1-accessing-the-cluster-and-command-line-introduction","text":"Learn how to access the HPC cluster, set up your machine, and navigate the command line interface effectively. Gain confidence in interacting with the cluster environment.","title":"Morning 1 - Accessing the Cluster and Command Line Introduction"},{"location":"hpc-schools/#morning-2-understanding-hpc-workflow-job-submission-and-monitoring","text":"Explore the inner workings of HPC systems. Discover the process of submitting and managing computational tasks. 
Learn how to monitor and optimize job performance.","title":"Morning 2 - Understanding HPC Workflow: Job Submission and Monitoring"},{"location":"hpc-schools/#python-hpc-school","text":"In this workshop, we will explore the process of improving Python code for efficient execution. Chances are, you 're already familiar with Python and Numpy. However, we will start by mastering profiling and efficient NumPy usage as these are crucial steps before venturing into parallelization. Once your code is fine-tuned with Numpy we will explore the utilization of Python's parallel libraries to unlock the potential of using multiple CPU cores. By the end, you will be well equipped to harness Python's potential for high-performance tasks on the HPC infrastructure.","title":"Python HPC School"},{"location":"hpc-schools/#target-audience-description","text":"The workshop is designed for individuals who are interested in advancing their skills and knowledge in Python-based scientific and data computing. The ideal participants would typically possess basic to intermediate Python and Numpy skills, along with some familiarity with parallel programming. This workshop will give a good starting point to leverage the usage of the HPC computing power to speed up your Python programs.","title":"Target Audience Description"},{"location":"hpc-schools/#upcoming-sessions_1","text":"Limited spots available per session (usually 30 max). Date: March, 2024, 27 th and 28 th . Time: 10h to 12h and 14h to 16h (both days). Location: MNO 1.030. - Belval campus","title":"Upcoming sessions"},{"location":"hpc-schools/#first-day-jupyter-notebook-on-ulhpc-profiling-efficient-usage-of-numpy","text":"","title":"First day \u2013 Jupyter notebook on ULHPC / profiling efficient usage of Numpy"},{"location":"hpc-schools/#program","text":"Setting up a Jupyter notebook on an HPC node - 10am to 11am Taking time and profiling python code - 11am to 12pm Lunch break - 12pm to 2pm Numpy basics for replacing python loops for efficient computations - 2pm to 4pm","title":"Program"},{"location":"hpc-schools/#requirements","text":"Having an HPC account to access the cluster. Basic knowledge on SLURM (beginners HPC school). A basic understanding of Python programming. Familiarity with Jupyter Notebook (installed and configured). A basic understanding of Numpy and linear algebra.","title":"Requirements"},{"location":"hpc-schools/#second-day-improving-performance-with-python-parallel-packages","text":"","title":"Second day \u2013 Improving performance with python parallel packages"},{"location":"hpc-schools/#program_1","text":"Use case understanding and Python implementation - 10am to 10:30am Numpy implementation - 10:30am to 11am Python\u2019s Multiprocessing - 11am to 12pm Lunch break - 12pm to 2pm PyMP - 2pm to 2:30pm Cython - 2:30pm to 3pm Numba and final remarks- 3pm to 4pm","title":"Program"},{"location":"hpc-schools/#requirements_1","text":"Having an HPC account to access the cluster. Basic knowledge on SLURM (beginners HPC school). A basic understanding of Python programming. Familiarity with Jupyter Notebook (installed and configured). A basic understanding of Numpy and linear algebra. Familiarity with parallel programming.","title":"Requirements"},{"location":"hpc-schools/#conda-environment-management-for-python-and-r","text":"The creation of Conda environments is supported in the University of Luxembourg HPC systems. But when Conda environments are needed and what tools are available to create Conda environments? 
Attend this tutorial if your projects involve R or Python and you need support with installing packages. The topics that will be covered include: how to install packages using the facilities available in R and Python, how to document and exchange environment setups, when a Conda environment is required for a project, and what tools are available for the creation of Conda environments.","title":"Conda environment management for Python and R"},{"location":"hpc-schools/#upcoming-sessions_2","text":"Mar. 2024 (please await further announcements regarding specific dates)","title":"Upcoming sessions"},{"location":"hpc-schools/#introduction-to-numerical-methods-with-blas","text":"This seminar covers basic principles of numerical library usage with BLAS as an example. The library mechanisms for organizing software are studied in detail, covering topics such as the differences between static and dynamic libraries. The practical sessions will demonstrate the generation of library files from source code, and how programs can use library functions. After an overview of software libraries, the BLAS library is presented, including the available operations and the organization of the code. The attendees will have the opportunity to use functions of BLAS in a few practical examples. The effects of caches in numerical library performance are then studied in detail. In the practical sessions the attendees will have the opportunity to try cache aware programming techniques that better exploit the performance of the available hardware. Overall in this seminar you learn how to: compile libraries from source code, compile and link code that uses numerical libraries, understand the effects of caches in numerical library performance, and exploit caches to leverage better performance.","title":"Introduction to numerical methods with BLAS"},{"location":"hpc-schools/#upcoming-sessions_3","text":"No sessions are planned at the moment. Future sessions will be announced here, please wait for announcements or contact the HPC team via email to express your interest.","title":"Upcoming sessions"},{"location":"layout/","text":"This repository is organized as follows (use tree -L 2 to complete): . \u251c\u2500\u2500 Makefile # GNU Make configuration \u251c\u2500\u2500 README.md # Project README \u251c\u2500\u2500 VERSION # /!\\ DO NOT EDIT. Current repository version \u251c\u2500\u2500 docs/ # [MkDocs](mkdocs.org) main directory \u251c\u2500\u2500 mkdocs.yml # [MkDocs](mkdocs.org) configuration \u251c\u2500\u2500 .envrc # Local direnv configuration -- see https://direnv.net/ \u2502 # Assumes you have installed in ~/.config/direnv/direnvrc \u2502 # the version proposed on \u2502 # https://raw.githubusercontent.com/Falkor/dotfiles/master/direnv/direnvrc \u251c\u2500\u2500 .python- { version,virtualenv } # Pyenv/Virtualenv configuration","title":"Layout"},{"location":"setup/","text":"Pre-Requisites and Laptop Setup \u00b6 You should follow the instructions provided on the ULHPC Tutorials: Pre-requisites page. For those not familiar with Linux Shell, kindly refer to the \"Introducing the UNIX/Linux Shell\" tutorial.","title":"Pre-Requisites and Laptop Setup"},{"location":"setup/#pre-requisites-and-laptop-setup","text":"You should follow the instructions provided on the ULHPC Tutorials: Pre-requisites page. 
For those not familiar with Linux Shell, kindly refer to the \"Introducing the UNIX/Linux Shell\" tutorial.","title":"Pre-Requisites and Laptop Setup"},{"location":"teaching-with-the-ulhpc/","text":"Teaching with the ULHPC \u00b6 If you plan to use the ULHPC to teach for groups of students, we highly recommend that you contact us (the HPC team) for the following reasons: When possible, we can plan our maintenance sessions outside of your planned teaching / training dates. We can help with the reservation of HPC ressources (e.g., GPU or big memory nodes) as some are highly booked and may not be available on-demand the day of your teaching or training session. We can provide temporary ULHPC account for your students / attendees. Resource reservation \u00b6 The ULHPC offers different types of computing nodes and their availability can vary greatly throughout the year. In particular, GPU and big memory nodes are rare and intensively used. If you plan to use them for a teaching session, please contact our team at hpc-team@uni.lu . Temporary student accounts \u00b6 For hands-on sessions involving students or trainees who don't necessarily have an ULHPC account, we can provide temporary accesses. As a teacher / trainer, your account will also have access to all the students / trainees accounts to simplify interactions and troubleshooting during your sessions. Please contact our team at hpc-team@uni.lu to help you in the preparation of your teaching / training session.","title":"Teaching with the ULHPC"},{"location":"teaching-with-the-ulhpc/#teaching-with-the-ulhpc","text":"If you plan to use the ULHPC to teach for groups of students, we highly recommend that you contact us (the HPC team) for the following reasons: When possible, we can plan our maintenance sessions outside of your planned teaching / training dates. We can help with the reservation of HPC ressources (e.g., GPU or big memory nodes) as some are highly booked and may not be available on-demand the day of your teaching or training session. We can provide temporary ULHPC account for your students / attendees.","title":"Teaching with the ULHPC"},{"location":"teaching-with-the-ulhpc/#resource-reservation","text":"The ULHPC offers different types of computing nodes and their availability can vary greatly throughout the year. In particular, GPU and big memory nodes are rare and intensively used. If you plan to use them for a teaching session, please contact our team at hpc-team@uni.lu .","title":"Resource reservation"},{"location":"teaching-with-the-ulhpc/#temporary-student-accounts","text":"For hands-on sessions involving students or trainees who don't necessarily have an ULHPC account, we can provide temporary accesses. As a teacher / trainer, your account will also have access to all the students / trainees accounts to simplify interactions and troubleshooting during your sessions. Please contact our team at hpc-team@uni.lu to help you in the preparation of your teaching / training session.","title":"Temporary student accounts"},{"location":"accounts/","text":"Get an Account \u00b6 In order to use the ULHPC facilities, you need to have a user account with an associated user login name (also called username) placed under an account hierarchy. Conditions of acceptance \u00b6 Acceptable Use Policy (AUP) \u00b6 There are a number of policies which apply to ULHPC users. UL HPC Acceptable Use Policy (AUP) [pdf] Important All users of UL HPC resources and PIs must abide by the UL HPC Acceptable Use Policy (AUP) . 
You should read and keep a signed copy of this document before using the facility. Access and/or usage of any ULHPC system assumes the tacit acknowledgement to this policy. Remember that you are expected to acknowledge ULHPC in your publications . See Acceptable Use Policy for more details. ULHPC Platforms are meant ONLY for R&D! The ULHPC facility is made for Research and Development and it is NOT a full production computing center -- for such needs, consider using the National HPC center . In particular, we cannot make any guarantees of cluster availability or timely job completion even if we target a minimum compute node availability above 95% which is typically met - for instance, past KPI statistics in 2019 report a computing node availability above 97%. Resource allocation policies \u00b6 ULHPC Usage Charging and Resource allocation policy UL internal R&D and training \u00b6 ULHPC resources are free of charge for UL staff for their internal work and training activities . Principal Investigators (PI) will nevertheless receive on a regular basis a usage report of their team activities on the UL HPC platform. The corresponding accumulated price will be provided even if this amount is purely indicative and won't be charged back. Any other activities will be reviewed with the rectorate and are a priori subjected to be billed. Research Projects \u00b6 Externals and private partners \u00b6 How to Get a New User account? \u00b6 Account Request Form University staff - you can submit a request for a new ULHPC account by using the ServiceNow portal (Research > HPC > User access & accounts > New HPC account request) . Students - submit your account request on the Student Service Portal . Externals - a University staff member must request the account for you, using the section New HPC account for external . Enter the professional data (organization and institutional email address). Specify the line manager / project PI if needed. If you need to access a specific project directory, ask the project directory owner to open a ticket using the section Add user within project . Your account will undergo user checks, in accordance with ULHPC policies, to verify your identity and the information proposed. Under some circumstances, there could be a delay while this vetting takes place. After vetting has completed, you will receive a welcome email with your login information, and a unique link to a PrivateBin 1 holding a random temporary password. That link will expire if not used within 24 hours. The PI and PI Proxies for the project will be notified when applicable. Finally, you will need to log into the HPC IPA Portal to set up your initial password and Multi-Factor Authentication (MFA) for your account. Your new password must adhere to ULHPC's password requirements see Password policy and guidelines ULHPC Identity Management (IPA portal) documentation UL HPC \\neq \\neq University credentials Be aware that the source of authentication for the HPC services based on RedHat IdM/IPA DIFFERS from the University credentials (based on UL Active Directory). ULHPC credentials are maintained by the HPC team; associated portal: https://hpc-ipa.uni.lu/ipa/ui/ authentication service for: UL HPC University credentials are maintained by the IT team of the University authentication service for Service Now and all other UL services Managing User Accounts \u00b6 ULHPC user accounts are managed in through the HPC IPA web portal . 
Security Incidents \u00b6 If you think there has been a computer security incident, you should contact the ULHPC Team and the University CISO team as soon as possible: To: hpc-team@uni.lu,laurent.weber@uni.lu Subject: Security Incident for HPC account ' ' ( ADAPT accordingly ) Please save any evidence of the break-in and include as many details as possible in your communication with us. How to Get a New Project account? \u00b6 Projects are defined for accounting purposes and are associated to a set of user accounts allowed by the project PI to access its data and submit jobs on behalf of the project account. See Slurm Account Hierarchy . You can request (or be automatically added) to project accounts for accounting purposes. For more information, please see the Project Account documentation FAQ \u00b6 Can I share an account? \u2013 Account Security Policies \u00b6 Danger The sharing of passwords or login credentials is not allowed under UL HPC and University information security policies. Please bear in mind that this policy also protects the end-user. Sharing credentials removes the ability to audit and accountability for the account holder in case of account misuse. Accounts which are in violation of this policy may be disabled or otherwise limited. Accounts knowingly skirting this policy may be banned. If you find that you need to share resources among multiple individuals, shared projects are just the way to go, and remember that the University extends access to its HPC resources ( i.e. , facility and expert HPC consultants) to the scientific staff of national public organizations and external partners for the duration of joint research projects under the conditions defined above. When in doubt, please contact us and we will be happy to assist you with finding a safe and secure way to do so. PrivateBin is a minimalist, open source online pastebin where the server has zero knowledge of pasted data. Data is encrypted / decrypted in the browser using 256bit AES in Galois Counter mode. \u21a9","title":"Get an Account"},{"location":"accounts/#get-an-account","text":"In order to use the ULHPC facilities, you need to have a user account with an associated user login name (also called username) placed under an account hierarchy.","title":"Get an Account"},{"location":"accounts/#conditions-of-acceptance","text":"","title":"Conditions of acceptance"},{"location":"accounts/#acceptable-use-policy-aup","text":"There are a number of policies which apply to ULHPC users. UL HPC Acceptable Use Policy (AUP) [pdf] Important All users of UL HPC resources and PIs must abide by the UL HPC Acceptable Use Policy (AUP) . You should read and keep a signed copy of this document before using the facility. Access and/or usage of any ULHPC system assumes the tacit acknowledgement to this policy. Remember that you are expected to acknowledge ULHPC in your publications . See Acceptable Use Policy for more details. ULHPC Platforms are meant ONLY for R&D! The ULHPC facility is made for Research and Development and it is NOT a full production computing center -- for such needs, consider using the National HPC center . 
In particular, we cannot make any guarantees of cluster availability or timely job completion even if we target a minimum compute node availability above 95% which is typically met - for instance, past KPI statistics in 2019 report a computing node availability above 97%.","title":"Acceptable Use Policy (AUP)"},{"location":"accounts/#resource-allocation-policies","text":"ULHPC Usage Charging and Resource allocation policy","title":"Resource allocation policies"},{"location":"accounts/#ul-internal-rd-and-training","text":"ULHPC resources are free of charge for UL staff for their internal work and training activities . Principal Investigators (PI) will nevertheless receive on a regular basis a usage report of their team activities on the UL HPC platform. The corresponding accumulated price will be provided even if this amount is purely indicative and won't be charged back. Any other activities will be reviewed with the rectorate and are a priori subjected to be billed.","title":"UL internal R&D and training"},{"location":"accounts/#research-projects","text":"","title":"Research Projects"},{"location":"accounts/#externals-and-private-partners","text":"","title":"Externals and private partners"},{"location":"accounts/#how-to-get-a-new-user-account","text":"Account Request Form University staff - you can submit a request for a new ULHPC account by using the ServiceNow portal (Research > HPC > User access & accounts > New HPC account request) . Students - submit your account request on the Student Service Portal . Externals - a University staff member must request the account for you, using the section New HPC account for external . Enter the professional data (organization and institutional email address). Specify the line manager / project PI if needed. If you need to access a specific project directory, ask the project directory owner to open a ticket using the section Add user within project . Your account will undergo user checks, in accordance with ULHPC policies, to verify your identity and the information proposed. Under some circumstances, there could be a delay while this vetting takes place. After vetting has completed, you will receive a welcome email with your login information, and a unique link to a PrivateBin 1 holding a random temporary password. That link will expire if not used within 24 hours. The PI and PI Proxies for the project will be notified when applicable. Finally, you will need to log into the HPC IPA Portal to set up your initial password and Multi-Factor Authentication (MFA) for your account. Your new password must adhere to ULHPC's password requirements see Password policy and guidelines ULHPC Identity Management (IPA portal) documentation UL HPC \\neq \\neq University credentials Be aware that the source of authentication for the HPC services based on RedHat IdM/IPA DIFFERS from the University credentials (based on UL Active Directory). 
ULHPC credentials are maintained by the HPC team; associated portal: https://hpc-ipa.uni.lu/ipa/ui/ authentication service for: UL HPC University credentials are maintained by the IT team of the University authentication service for Service Now and all other UL services","title":"How to Get a New User account?"},{"location":"accounts/#managing-user-accounts","text":"ULHPC user accounts are managed in through the HPC IPA web portal .","title":"Managing User Accounts"},{"location":"accounts/#security-incidents","text":"If you think there has been a computer security incident, you should contact the ULHPC Team and the University CISO team as soon as possible: To: hpc-team@uni.lu,laurent.weber@uni.lu Subject: Security Incident for HPC account ' ' ( ADAPT accordingly ) Please save any evidence of the break-in and include as many details as possible in your communication with us.","title":"Security Incidents"},{"location":"accounts/#how-to-get-a-new-project-account","text":"Projects are defined for accounting purposes and are associated to a set of user accounts allowed by the project PI to access its data and submit jobs on behalf of the project account. See Slurm Account Hierarchy . You can request (or be automatically added) to project accounts for accounting purposes. For more information, please see the Project Account documentation","title":"How to Get a New Project account?"},{"location":"accounts/#faq","text":"","title":"FAQ"},{"location":"accounts/#can-i-share-an-account-account-security-policies","text":"Danger The sharing of passwords or login credentials is not allowed under UL HPC and University information security policies. Please bear in mind that this policy also protects the end-user. Sharing credentials removes the ability to audit and accountability for the account holder in case of account misuse. Accounts which are in violation of this policy may be disabled or otherwise limited. Accounts knowingly skirting this policy may be banned. If you find that you need to share resources among multiple individuals, shared projects are just the way to go, and remember that the University extends access to its HPC resources ( i.e. , facility and expert HPC consultants) to the scientific staff of national public organizations and external partners for the duration of joint research projects under the conditions defined above. When in doubt, please contact us and we will be happy to assist you with finding a safe and secure way to do so. PrivateBin is a minimalist, open source online pastebin where the server has zero knowledge of pasted data. Data is encrypted / decrypted in the browser using 256bit AES in Galois Counter mode. \u21a9","title":"Can I share an account? \u2013 Account Security Policies"},{"location":"accounts/collaboration_accounts/","text":"All ULHPC login accounts are associated with specific individuals and must not be shared. In some HPC centers, you may be able to request Collaboration Accounts designed to handle the following use cases: Collaborative Data Management : Large scale experimental and simulation data are typically read or written by multiple collaborators and are kept on disk for long periods. Collaborative Software Management Collaborative Job Management Info By default, we DO NOT provide Collaboration Accounts and encourage the usage of shared research projects stored on the Global project directory to enable the group members to manipulate project data with the appropriate use of unix groups and file permissions. 
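To make the last point concrete, the sketch below shows how a project group would typically share a directory through unix groups and permissions. This is only an illustration: the directory path and group name are hypothetical placeholders, not values provisioned by the ULHPC team, so adapt them to your actual project.

```bash
# Minimal sketch -- "myproject" and /path/to/project are hypothetical placeholders
# for your project unix group and shared project directory.
chgrp -R myproject /path/to/project   # give the project group ownership of the data
chmod -R g+rwX /path/to/project       # let group members read, write and traverse
chmod g+s /path/to/project            # new files/dirs inherit the project group
umask 007                             # files you create remain group-accessible
```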
For dedicated job billing and accounting purposes, you should also request the creation of a project account (this will be done for all accepted funded projects). For more details, see Project Accounts documentation . We are aware nevertheless that a problem that often arises is that the files are owned by the collaborator who did the work and if that collaborator changes roles the default unix file permissions usually are such that the files cannot be managed (deleted) by other members of the collaboration and system administrators must be contacted. Similarly, for some use cases, Collaboration Accounts would enable members of the team to manipulate jobs submitted by other team members as necessary. Justified and argued use cases can be submitted to the HPC team to find the appropriate solution by opening a ticket on the HPC Helpdesk Portal .","title":"Collaboration Accounts"},{"location":"accounts/projects/","text":"Projects Accounts \u00b6 Shared project in the Global project directory. \u00b6 We can setup for you a dedicated project directory on the GPFS/SpectrumScale Filesystem for sharing research data with other colleagues. Whether to create a new project directory or to add/remove members to the group set to access the project data, use the Service Now HPC Support Portal . Service Now HPC Support Portal Data Storage Charging \u00b6 Slurm Project Account \u00b6 As explained in the Slurm Account Hierarchy , projects account can be created at the L3 level of the association tree. To quickly list a given project accounts and the users attached to it, you can use the sassoc helper function : # /!\\ ADAPT project acronym/name accordingly sassoc project_ Alternatively, you can rely on sacctmgr , typically coupled with the withassoc attribute: # /!\\ ADAPT project acronym/name accordingly sacctmgr show account where name = project_ format = \"account%20,user%20,Share,QOS%50\" withassoc As per HPC Resource Allocations for Research Project , creation of such project accounts is mandatory for funded research projects , since usage charging may occur when a detailed reporting will be provided for auditing purposes. With the help of the University Research Support department, we will create automatically project accounts from the list of accepted project which acknowledge the need of computing resources. Feel free nethertheless to use the Service Now HPC Support Portal to request the creation of a new project account or to add/remove members to the group - this might be pertinent for internal research projects or specific collaboration with external partners requiring a separate usage monitoring . Important Project account is a natural way to access the higher priority QOS not granted by default to your personnal account on the ULHPC. For instance, the high QOS is automatically granted as soon as a contribution to the HPC budget line is performed by the project.","title":"Projects Accounts"},{"location":"accounts/projects/#projects-accounts","text":"","title":"Projects Accounts"},{"location":"accounts/projects/#shared-project-in-the-global-project-directory","text":"We can setup for you a dedicated project directory on the GPFS/SpectrumScale Filesystem for sharing research data with other colleagues. Whether to create a new project directory or to add/remove members to the group set to access the project data, use the Service Now HPC Support Portal . 
Service Now HPC Support Portal","title":"Shared project in the Global project directory."},{"location":"accounts/projects/#data-storage-charging","text":"","title":"Data Storage Charging"},{"location":"accounts/projects/#slurm-project-account","text":"As explained in the Slurm Account Hierarchy , projects account can be created at the L3 level of the association tree. To quickly list a given project accounts and the users attached to it, you can use the sassoc helper function : # /!\\ ADAPT project acronym/name accordingly sassoc project_ Alternatively, you can rely on sacctmgr , typically coupled with the withassoc attribute: # /!\\ ADAPT project acronym/name accordingly sacctmgr show account where name = project_ format = \"account%20,user%20,Share,QOS%50\" withassoc As per HPC Resource Allocations for Research Project , creation of such project accounts is mandatory for funded research projects , since usage charging may occur when a detailed reporting will be provided for auditing purposes. With the help of the University Research Support department, we will create automatically project accounts from the list of accepted project which acknowledge the need of computing resources. Feel free nethertheless to use the Service Now HPC Support Portal to request the creation of a new project account or to add/remove members to the group - this might be pertinent for internal research projects or specific collaboration with external partners requiring a separate usage monitoring . Important Project account is a natural way to access the higher priority QOS not granted by default to your personnal account on the ULHPC. For instance, the high QOS is automatically granted as soon as a contribution to the HPC budget line is performed by the project.","title":"Slurm Project Account"},{"location":"aws/connection/","text":"Connection to the AWS Cluster \u00b6 Access to the frontend \u00b6 The ULHPC team will create specific access for the AWS Cluster and send to all project members a ssh key in order to connect to the cluster frontend. Once your account has been enabled, you can connect to the cluster using ssh. Computers based on Linux or Mac usually have ssh installed by default. To create a direct connection, use the command below (using your specific cluster name if it differs from workshop-cluster). ssh -i id_rsa username@ec2-52-5-167-162.compute-1.amazonaws.com This will open a direct, non-graphical connection in the terminal. To exit the remote terminal session, use the standard Linux command \u201cexit\u201d. Alternatively, you may want to save the configuration of this connection (and create an alias for it) by editing the file ~/.ssh/config (create it if it does not already exist) and adding the following entries: Host aws-ulhpc-access User username Hostname ec2-52-5-167-162.compute-1.amazonaws.com IdentityFile ~/.ssh/id_rsa IdentitiesOnly yes For additionnal information about ssh connection , please refer to the following page . Data storage HOME storage is limited to 500GB for all users. The ULHPC team will also create for you a project directory located at /shared/projects/ . All members of the project will have the possibility to read, write and execute only in this directory. 
We strongly advise you to use the project directory to store data and install softwares.","title":"Connection to the AWS Cluster"},{"location":"aws/connection/#connection-to-the-aws-cluster","text":"","title":"Connection to the AWS Cluster"},{"location":"aws/connection/#access-to-the-frontend","text":"The ULHPC team will create specific access for the AWS Cluster and send to all project members a ssh key in order to connect to the cluster frontend. Once your account has been enabled, you can connect to the cluster using ssh. Computers based on Linux or Mac usually have ssh installed by default. To create a direct connection, use the command below (using your specific cluster name if it differs from workshop-cluster). ssh -i id_rsa username@ec2-52-5-167-162.compute-1.amazonaws.com This will open a direct, non-graphical connection in the terminal. To exit the remote terminal session, use the standard Linux command \u201cexit\u201d. Alternatively, you may want to save the configuration of this connection (and create an alias for it) by editing the file ~/.ssh/config (create it if it does not already exist) and adding the following entries: Host aws-ulhpc-access User username Hostname ec2-52-5-167-162.compute-1.amazonaws.com IdentityFile ~/.ssh/id_rsa IdentitiesOnly yes For additionnal information about ssh connection , please refer to the following page . Data storage HOME storage is limited to 500GB for all users. The ULHPC team will also create for you a project directory located at /shared/projects/ . All members of the project will have the possibility to read, write and execute only in this directory. We strongly advise you to use the project directory to store data and install softwares.","title":"Access to the frontend"},{"location":"aws/overview/","text":"Context & System Overview \u00b6 Context \u00b6 The University of Luxembourg announced a collaboration with Amazon Web Services (AWS) to deploy Amazon Elastic Compute Cloud (Amazon EC2) cloud computing infrastructure in order to accelerate strategic high-performance computing (HPC) research and development in Europe. University of Luxembourg will be among the first European universities to provide research and development communities with access to compute environments that use an architecture similar to the European Processor Initiative (EPI), which will be the basis for Europe\u2019s future exascale computing architecture. Using Amazon EC2 instances powered by AWS Graviton3 , the University of Luxembourg will make simulation capacity available to University researchers. This autumn, research projects will be selected from proposals submitted by University R&D teams. As part of this project, AWS will provide cloud computing services to the University that will support the development, design, and testing of numerical codes (i.e., codes that use only digits, such as binary), which traditionally demands a lot of compute power. This will give researchers an accessible, easy-to-use, end-to-end environment in which they can validate their simulation codes on ARM64 architectures, including servers, personal computers, and Internet of Things (IoT). After initial project selection by a steering committee that includes representatives from the University of Luxembourg and AWS, additional projects will be selected each quarter. Selections will be based on the University\u2019s outlined research goals. 
Priority will be given to research carried out by the University of Luxembourg and its interdisciplinary research centers; however, based on available capacity and project qualifications, the initiative could extend to European industrial partners. System description and environment \u00b6 The AWS Parallel Cluster based on the new HPC-based Graviton3 instances (all instances and storage located in US-EAST-1) will provide cloud computing services to Uni.lu that will support the development, design, and testing of numerical codes, which traditionally demands a lot of compute power. This will give researchers an accessible, easy-to-use, end-to-end environment in which they can validate their simulation codes on ARM64 architectures, including servers, personal computers, and Internet of Things (IoT). The cluster will consist in two main partitions and jobs will be submitted using the Slurm scheduler : PIs and their teams of the funded projects under this call will have the possibility to compile their code with the Arm compiler and using the Arm Performance Library(APL) . Support will be provided by the ULHPC team as well as training activities.","title":"Context & System Overview"},{"location":"aws/overview/#context-system-overview","text":"","title":"Context & System Overview"},{"location":"aws/overview/#context","text":"The University of Luxembourg announced a collaboration with Amazon Web Services (AWS) to deploy Amazon Elastic Compute Cloud (Amazon EC2) cloud computing infrastructure in order to accelerate strategic high-performance computing (HPC) research and development in Europe. University of Luxembourg will be among the first European universities to provide research and development communities with access to compute environments that use an architecture similar to the European Processor Initiative (EPI), which will be the basis for Europe\u2019s future exascale computing architecture. Using Amazon EC2 instances powered by AWS Graviton3 , the University of Luxembourg will make simulation capacity available to University researchers. This autumn, research projects will be selected from proposals submitted by University R&D teams. As part of this project, AWS will provide cloud computing services to the University that will support the development, design, and testing of numerical codes (i.e., codes that use only digits, such as binary), which traditionally demands a lot of compute power. This will give researchers an accessible, easy-to-use, end-to-end environment in which they can validate their simulation codes on ARM64 architectures, including servers, personal computers, and Internet of Things (IoT). After initial project selection by a steering committee that includes representatives from the University of Luxembourg and AWS, additional projects will be selected each quarter. Selections will be based on the University\u2019s outlined research goals. Priority will be given to research carried out by the University of Luxembourg and its interdisciplinary research centers; however, based on available capacity and project qualifications, the initiative could extend to European industrial partners.","title":"Context"},{"location":"aws/overview/#system-description-and-environment","text":"The AWS Parallel Cluster based on the new HPC-based Graviton3 instances (all instances and storage located in US-EAST-1) will provide cloud computing services to Uni.lu that will support the development, design, and testing of numerical codes, which traditionally demands a lot of compute power. 
This will give researchers an accessible, easy-to-use, end-to-end environment in which they can validate their simulation codes on ARM64 architectures, including servers, personal computers, and Internet of Things (IoT). The cluster will consist in two main partitions and jobs will be submitted using the Slurm scheduler : PIs and their teams of the funded projects under this call will have the possibility to compile their code with the Arm compiler and using the Arm Performance Library(APL) . Support will be provided by the ULHPC team as well as training activities.","title":"System description and environment"},{"location":"aws/setup/","text":"Environment Setup \u00b6 AWS suggest to use Spack to setup your software environment. There is no hard requirement that you must use Spack. However we have included it here, as it is a quick, simple way to setup a development environment. The official ULHPC swsets are not available on the AWS cluster. If you prefer to use EasyBuild or manually compile softwares, please refer to the ULHPC software documentation for this purpose. Environment modules and LMod \u00b6 Like the ULHPC facility, the AWS cluster relies on the Environment Modules / LMod framework which provided the module utility on Compute nodes to manage nearly all software. There are two main advantages of the module approach: ULHPC can provide many different versions and/or installations of a single software package on a given machine, including a default version as well as several older and newer version. Users can easily switch to different versions or installations without having to explicitly specify different paths. With modules, the PATH and related environment variables ( LD_LIBRARY_PATH , MANPATH , etc.) are automatically managed. Environment Modules in itself are a standard and well-established technology across HPC sites, to permit developing and using complex software and libraries build with dependencies, allowing multiple versions of software stacks and combinations thereof to co-exist. It brings the module command which is used to manage environment variables such as PATH , LD_LIBRARY_PATH and MANPATH , enabling the easy loading and unloading of application/library profiles and their dependencies. See https://hpc-docs.uni.lu/environment/modules/ for more details Command Description module avail Lists all the modules which are available to be loaded module spider Search for among available modules (Lmod only) module load [mod2...] 
Load a module module unload Unload a module module list List loaded modules module purge Unload all modules (purge) module display Display what a module does module use Prepend the directory to the MODULEPATH environment variable module unuse Remove the directory from the MODULEPATH environment variable At the heart of environment modules interaction resides the following components: the MODULEPATH environment variable, which defines the list of searched directories for modulefiles modulefile Take a look at the current values: $ echo $MODULEPATH /shared/apps/easybuild/modules/all:/usr/share/Modules/modulefiles:/etc/modulefiles:/usr/share/modulefiles/Linux:/usr/share/modulefiles/Core:/usr/share/lmod/lmod/modulefiles/Core $ module show toolchain/foss ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- /shared/apps/easybuild/modules/all/toolchain/foss/2022b.lua: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- help ([[ Description =========== GNU Compiler Collection ( GCC ) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS ( BLAS and LAPACK support ) , FFTW and ScaLAPACK. More information ================ - Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain ]]) whatis ( \"Description: GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.\" ) whatis ( \"Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain\" ) whatis ( \"URL: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain\" ) conflict ( \"toolchain/foss\" ) load ( \"compiler/GCC/12.2.0\" ) load ( \"mpi/OpenMPI/4.1.4-GCC-12.2.0\" ) load ( \"lib/FlexiBLAS/3.2.1-GCC-12.2.0\" ) load ( \"numlib/FFTW/3.3.10-GCC-12.2.0\" ) load ( \"numlib/FFTW.MPI/3.3.10-gompi-2022b\" ) load ( \"numlib/ScaLAPACK/2.2.0-gompi-2022b-fb\" ) setenv ( \"EBROOTFOSS\" , \"/shared/apps/easybuild/software/foss/2022b\" ) setenv ( \"EBVERSIONFOSS\" , \"2022b\" ) setenv ( \"EBDEVELFOSS\" , \"/shared/apps/easybuild/software/foss/2022b/easybuild/toolchain-foss-2022b-easybuild-devel\" ) Now you can search for a given software using module spider : $ module spider lang/Python ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ lang/Python: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Description: Python is a programming language that lets you work more quickly and integrate your systems more 
effectively. Versions: lang/Python/3.10.8-GCCcore-12.2.0-bare lang/Python/3.10.8-GCCcore-12.2.0 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ For detailed information about a specific \"lang/Python\" module ( including how to load the modules ) use the module ' s full name. For example: $ module spider lang/Python/3.10.8-GCCcore-12.2.0 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Let's see the effect of loading/unloading a module $ module list No modules loaded $ which python /usr/bin/python $ python --version # System level python Python 2 .7.18 $ module load lang/Python # use TAB to auto-complete $ which python /shared/apps/easybuild/software/Python/3.10.8-GCCcore-12.2.0/bin/python $ python --version Python 3 .10.8 $ module purge Installing softwares with Easybuild \u00b6 EasyBuild is a tool that allows to perform automated and reproducible compilation and installation of software. A large number of scientific software are supported ( 2995 supported software packages in the last release 4.8.0) -- see also What is EasyBuild? All builds and installations are performed at user level, so you don't need the admin (i.e. root ) rights. The software are installed in your home directory under $EASYBUILD_PREFIX -- see https://hpc-docs.uni.lu/environment/easybuild/ Default setting (local) Recommended setting $EASYBUILD_PREFIX $HOME/.local/easybuild /shared/apps/easybuild/ built software are placed under ${EASYBUILD_PREFIX}/software/ modules install path ${EASYBUILD_PREFIX}/modules/all Easybuild main concepts \u00b6 See also the official Easybuild Tutorial: \"Maintaining a Modern Scientific Software Stack Made Easy with EasyBuild\" EasyBuild relies on two main concepts: Toolchains and EasyConfig files . A toolchain corresponds to a compiler and a set of libraries which are commonly used to build a software. The two main toolchains frequently used on the UL HPC platform are the foss (\" Free and Open Source Software \") and the intel one. foss is based on the GCC compiler and on open-source libraries (OpenMPI, OpenBLAS, etc.). intel is based on the Intel compiler and on Intel libraries (Intel MPI, Intel Math Kernel Library, etc.). An EasyConfig file is a simple text file that describes the build process of a software. For most software that uses standard procedures (like configure , make and make install ), this file is very simple. Many EasyConfig files are already provided with EasyBuild. By default, EasyConfig files and generated modules are named using the following convention: --- . However, we use a hierarchical approach where the software are classified under a category (or class) -- see the CategorizedModuleNamingScheme option for the EASYBUILD_MODULE_NAMING_SCHEME environmental variable), meaning that the layout will respect the following hierarchy: //-- Additional details are available on EasyBuild website: EasyBuild homepage EasyBuild documentation What is EasyBuild? 
Toolchains EasyConfig files List of supported software packages Easybuild is provided to you as a software module to complement the existing software set. module load tools/EasyBuild In case you want to install the latest version yourself, please follow the official instructions . Nonetheless, we strongly recommend using the provided module. Don't forget to set up your local Easybuild configuration first. What is important for the installation of Easybuild are the following variables: EASYBUILD_PREFIX : where to install local modules and software, i.e. $HOME/.local/easybuild EASYBUILD_MODULES_TOOL : the type of modules tool you are using, i.e. LMod in this case EASYBUILD_MODULE_NAMING_SCHEME : the way the software and modules should be organized (flat view or hierarchical) -- we advise CategorizedModuleNamingScheme Important Recall that you should be on a compute node to install Easybuild (otherwise the checks of the module command availability will fail). Install a missing software by complementing the software set \u00b6 The current software set contains the toolchain foss-2022b that is necessary to build other software. We have built OpenMPI-4.1.4 to take into account the latest AWS EFA support and the Slurm integration. In order to install missing software for your project, you can complement the existing software set located at /shared/apps/easybuild by using the provided EasyBuild module (latest version). Once Easybuild has been loaded, you can search for and install new software. By default, this new software will be installed at ${HOME}/.local/easybuild . Feel free to adapt the environment variable ${EASYBUILD_PREFIX} to select a new installation directory. Let's try to install a missing software (HPL) from a compute node: $ srun -p small -N 1 -n1 -c16 --pty bash -i ( node ) $ module spider HPL # HPL is a software package that solves a (random) dense linear system in double precision (64 bits) Lmod has detected the following error: Unable to find: \"HPL\" . ( node ) $ module load tools/EasyBuild # Search for recipes for the missing software ( node ) $ eb -S HPL == found valid index for /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs, so using it... CFGS1 = /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs * $CFGS1 /b/bashplotlib/bashplotlib-0.6.5-GCCcore-10.3.0.eb * $CFGS1 /h/HPL/HPL-2.1-foss-2016.04.eb * $CFGS1 /h/HPL/HPL-2.1-foss-2016.06.eb * $CFGS1 /h/HPL/HPL-2.1-foss-2016a.eb * $CFGS1 /h/HPL/HPL-2.1-foss-2016b.eb ... * $CFGS1 /h/HPL/HPL-2.3-foss-2022a.eb * $CFGS1 /h/HPL/HPL-2.3-foss-2022b.eb * $CFGS1 /h/HPL/HPL-2.3-foss-2023a.eb ... * $CFGS1 /h/HPL/HPL-2.3-intel-2022b.eb * $CFGS1 /h/HPL/HPL-2.3-intel-2023.03.eb * $CFGS1 /h/HPL/HPL-2.3-intel-2023a.eb * $CFGS1 /h/HPL/HPL-2.3-intelcuda-2019b.eb * $CFGS1 /h/HPL/HPL-2.3-intelcuda-2020a.eb * $CFGS1 /h/HPL/HPL-2.3-iomkl-2019.01.eb * $CFGS1 /h/HPL/HPL-2.3-iomkl-2021a.eb * $CFGS1 /h/HPL/HPL-2.3-iomkl-2021b.eb * $CFGS1 /h/HPL/HPL_parallel-make.patch From this list, you should select the version matching the target toolchain version -- here foss-2022b . Once you pick a given recipe, install it with eb <easyconfig>.eb [-D] -r -D enables the dry-run mode to check what's going to be installed -- ALWAYS try it first -r enables the robot mode to automatically install all dependencies while searching for easyconfigs in a set of pre-defined directories -- you can also prepend new directories to search for eb files (like the current directory $PWD ) using the option and syntax --robot-paths=$PWD: (do not forget the ':'), as illustrated below. 
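For instance, a dry-run followed by the actual installation that also searches the current directory for easyconfigs could look as follows -- a minimal sketch where MyTool-1.2.3-foss-2022b.eb is a hypothetical easyconfig name to be replaced by the recipe you selected:

```bash
# Hypothetical easyconfig name -- substitute the recipe picked from the `eb -S` output
eb MyTool-1.2.3-foss-2022b.eb -Dr                     # dry-run: review the dependency list first
eb MyTool-1.2.3-foss-2022b.eb -r --robot-paths=$PWD:  # install, also searching $PWD for eb files
```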
See Controlling the robot search path documentation The $CFGS/ prefix should be dropped unless you know what you're doing (and thus have previously defined the variable -- see the first output of the eb -S [...] command). Let's try to review the missing dependencies from a dry-run : # Select the one matching the target software set version ( node ) $ eb HPL-2.3-foss-2022b.eb -Dr # Dry-run == Temporary log file in case of crash /tmp/eb-lzv785be/easybuild-ihga94y0.log == found valid index for /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs, so using it... == found valid index for /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs, so using it... Dry run: printing build status of easyconfigs and dependencies CFGS = /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs * [ x ] $CFGS /m/M4/M4-1.4.19.eb ( module: devel/M4/1.4.19 ) * [ x ] $CFGS /b/Bison/Bison-3.8.2.eb ( module: lang/Bison/3.8.2 ) * [ x ] $CFGS /f/flex/flex-2.6.4.eb ( module: lang/flex/2.6.4 ) * [ x ] $CFGS /z/zlib/zlib-1.2.12.eb ( module: lib/zlib/1.2.12 ) * [ x ] $CFGS /b/binutils/binutils-2.39.eb ( module: tools/binutils/2.39 ) * [ x ] $CFGS /g/GCCcore/GCCcore-12.2.0.eb ( module: compiler/GCCcore/12.2.0 ) * [ x ] $CFGS /z/zlib/zlib-1.2.12-GCCcore-12.2.0.eb ( module: lib/zlib/1.2.12-GCCcore-12.2.0 ) * [ x ] $CFGS /h/help2man/help2man-1.49.2-GCCcore-12.2.0.eb ( module: tools/help2man/1.49.2-GCCcore-12.2.0 ) * [ x ] $CFGS /m/M4/M4-1.4.19-GCCcore-12.2.0.eb ( module: devel/M4/1.4.19-GCCcore-12.2.0 ) * [ x ] $CFGS /b/Bison/Bison-3.8.2-GCCcore-12.2.0.eb ( module: lang/Bison/3.8.2-GCCcore-12.2.0 ) * [ x ] $CFGS /f/flex/flex-2.6.4-GCCcore-12.2.0.eb ( module: lang/flex/2.6.4-GCCcore-12.2.0 ) * [ x ] $CFGS /b/binutils/binutils-2.39-GCCcore-12.2.0.eb ( module: tools/binutils/2.39-GCCcore-12.2.0 ) * [ x ] $CFGS /p/pkgconf/pkgconf-1.9.3-GCCcore-12.2.0.eb ( module: devel/pkgconf/1.9.3-GCCcore-12.2.0 ) * [ x ] $CFGS /g/groff/groff-1.22.4-GCCcore-12.2.0.eb ( module: tools/groff/1.22.4-GCCcore-12.2.0 ) * [ x ] $CFGS /n/ncurses/ncurses-6.3-GCCcore-12.2.0.eb ( module: devel/ncurses/6.3-GCCcore-12.2.0 ) * [ x ] $CFGS /e/expat/expat-2.4.9-GCCcore-12.2.0.eb ( module: tools/expat/2.4.9-GCCcore-12.2.0 ) * [ x ] $CFGS /b/bzip2/bzip2-1.0.8-GCCcore-12.2.0.eb ( module: tools/bzip2/1.0.8-GCCcore-12.2.0 ) * [ x ] $CFGS /g/GCC/GCC-12.2.0.eb ( module: compiler/GCC/12.2.0 ) * [ x ] $CFGS /f/FFTW/FFTW-3.3.10-GCC-12.2.0.eb ( module: numlib/FFTW/3.3.10-GCC-12.2.0 ) * [ x ] $CFGS /u/UnZip/UnZip-6.0-GCCcore-12.2.0.eb ( module: tools/UnZip/6.0-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libreadline/libreadline-8.2-GCCcore-12.2.0.eb ( module: lib/libreadline/8.2-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libtool/libtool-2.4.7-GCCcore-12.2.0.eb ( module: lib/libtool/2.4.7-GCCcore-12.2.0 ) * [ x ] $CFGS /m/make/make-4.3-GCCcore-12.2.0.eb ( module: devel/make/4.3-GCCcore-12.2.0 ) * [ x ] $CFGS /t/Tcl/Tcl-8.6.12-GCCcore-12.2.0.eb ( module: lang/Tcl/8.6.12-GCCcore-12.2.0 ) * [ x ] $CFGS /p/pkgconf/pkgconf-1.8.0.eb ( module: devel/pkgconf/1.8.0 ) * [ x ] $CFGS /s/SQLite/SQLite-3.39.4-GCCcore-12.2.0.eb ( module: devel/SQLite/3.39.4-GCCcore-12.2.0 ) * [ x ] $CFGS /o/OpenSSL/OpenSSL-1.1.eb ( module: system/OpenSSL/1.1 ) * [ x ] $CFGS /l/libevent/libevent-2.1.12-GCCcore-12.2.0.eb ( module: lib/libevent/2.1.12-GCCcore-12.2.0 ) * [ x ] $CFGS /c/cURL/cURL-7.86.0-GCCcore-12.2.0.eb ( module: tools/cURL/7.86.0-GCCcore-12.2.0 ) * [ x ] $CFGS /d/DB/DB-18.1.40-GCCcore-12.2.0.eb ( module: tools/DB/18.1.40-GCCcore-12.2.0 ) * [ x ] $CFGS 
/p/Perl/Perl-5.36.0-GCCcore-12.2.0.eb ( module: lang/Perl/5.36.0-GCCcore-12.2.0 ) * [ x ] $CFGS /a/Autoconf/Autoconf-2.71-GCCcore-12.2.0.eb ( module: devel/Autoconf/2.71-GCCcore-12.2.0 ) * [ x ] $CFGS /a/Automake/Automake-1.16.5-GCCcore-12.2.0.eb ( module: devel/Automake/1.16.5-GCCcore-12.2.0 ) * [ x ] $CFGS /a/Autotools/Autotools-20220317-GCCcore-12.2.0.eb ( module: devel/Autotools/20220317-GCCcore-12.2.0 ) * [ x ] $CFGS /n/numactl/numactl-2.0.16-GCCcore-12.2.0.eb ( module: tools/numactl/2.0.16-GCCcore-12.2.0 ) * [ x ] $CFGS /u/UCX/UCX-1.13.1-GCCcore-12.2.0.eb ( module: lib/UCX/1.13.1-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libfabric/libfabric-1.16.1-GCCcore-12.2.0.eb ( module: lib/libfabric/1.16.1-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libffi/libffi-3.4.4-GCCcore-12.2.0.eb ( module: lib/libffi/3.4.4-GCCcore-12.2.0 ) * [ x ] $CFGS /x/xorg-macros/xorg-macros-1.19.3-GCCcore-12.2.0.eb ( module: devel/xorg-macros/1.19.3-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libpciaccess/libpciaccess-0.17-GCCcore-12.2.0.eb ( module: system/libpciaccess/0.17-GCCcore-12.2.0 ) * [ x ] $CFGS /u/UCC/UCC-1.1.0-GCCcore-12.2.0.eb ( module: lib/UCC/1.1.0-GCCcore-12.2.0 ) * [ x ] $CFGS /n/ncurses/ncurses-6.3.eb ( module: devel/ncurses/6.3 ) * [ x ] $CFGS /g/gettext/gettext-0.21.1.eb ( module: tools/gettext/0.21.1 ) * [ x ] $CFGS /x/XZ/XZ-5.2.7-GCCcore-12.2.0.eb ( module: tools/XZ/5.2.7-GCCcore-12.2.0 ) * [ x ] $CFGS /p/Python/Python-3.10.8-GCCcore-12.2.0-bare.eb ( module: lang/Python/3.10.8-GCCcore-12.2.0-bare ) * [ x ] $CFGS /b/BLIS/BLIS-0.9.0-GCC-12.2.0.eb ( module: numlib/BLIS/0.9.0-GCC-12.2.0 ) * [ x ] $CFGS /o/OpenBLAS/OpenBLAS-0.3.21-GCC-12.2.0.eb ( module: numlib/OpenBLAS/0.3.21-GCC-12.2.0 ) * [ x ] $CFGS /l/libarchive/libarchive-3.6.1-GCCcore-12.2.0.eb ( module: tools/libarchive/3.6.1-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libxml2/libxml2-2.10.3-GCCcore-12.2.0.eb ( module: lib/libxml2/2.10.3-GCCcore-12.2.0 ) * [ x ] $CFGS /c/CMake/CMake-3.24.3-GCCcore-12.2.0.eb ( module: devel/CMake/3.24.3-GCCcore-12.2.0 ) * [ ] $CFGS /h/hwloc/hwloc-2.8.0-GCCcore-12.2.0.eb ( module: system/hwloc/2.8.0-GCCcore-12.2.0 ) * [ ] $CFGS /p/PMIx/PMIx-4.2.2-GCCcore-12.2.0.eb ( module: lib/PMIx/4.2.2-GCCcore-12.2.0 ) * [ x ] $CFGS /o/OpenMPI/OpenMPI-4.1.4-GCC-12.2.0.eb ( module: mpi/OpenMPI/4.1.4-GCC-12.2.0 ) * [ x ] $CFGS /f/FlexiBLAS/FlexiBLAS-3.2.1-GCC-12.2.0.eb ( module: lib/FlexiBLAS/3.2.1-GCC-12.2.0 ) * [ x ] $CFGS /g/gompi/gompi-2022b.eb ( module: toolchain/gompi/2022b ) * [ x ] $CFGS /f/FFTW.MPI/FFTW.MPI-3.3.10-gompi-2022b.eb ( module: numlib/FFTW.MPI/3.3.10-gompi-2022b ) * [ x ] $CFGS /s/ScaLAPACK/ScaLAPACK-2.2.0-gompi-2022b-fb.eb ( module: numlib/ScaLAPACK/2.2.0-gompi-2022b-fb ) * [ x ] $CFGS /f/foss/foss-2022b.eb ( module: toolchain/foss/2022b ) * [ ] $CFGS /h/HPL/HPL-2.3-foss-2022b.eb ( module: tools/HPL/2.3-foss-2022b ) == Temporary log file ( s ) /tmp/eb-lzv785be/easybuild-ihga94y0.log* have been removed. == Temporary directory /tmp/eb-lzv785be has been removed. Let's try to install it (remove the -D ): # Select the one matching the target software set version ( node ) $ eb HPL-2.3-foss-2022b.eb -r From now on, you should be able to see the new module. 
( node ) $ module spider HPL ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ tools/HPL: tools/HPL/2.3-foss-2022b ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Description: HPL is a software package that solves a ( random ) dense linear system in double precision ( 64 bits ) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark. This module can be loaded directly: module load tools/HPL/2.3-foss-2022b Help: Description =========== HPL is a software package that solves a ( random ) dense linear system in double precision ( 64 bits ) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark. More information ================ - Homepage: https://www.netlib.org/benchmark/hpl/ Tips : When you load a module generated by Easybuild, it is installed within the directory reported by the $EBROOT variable. In the above case, you will find the generated binary in ${EBROOTHPL}/ . Installing softwares with Spack \u00b6 To do this, please clone the Spack GitHub repository into a SPACK_ROOT which is defined to be on a your project directory, i.e., /shared/project/ . Then add the configuration to you ~/.bashrc file. You may wish to change the location of the SPACK_ROOT to fit your specific cluster configuration. Here, we consider the release v0.19 of Spack from the releases/v0.19 branch, however, you may wish to checkout the develop branch for the latest packages. git clone -c feature.manyFiles = true -b releases/v0.19 https://github.com/spack/spack $SPACK_ROOT Then, add the following lines in your .bashrc export PROJECT = \"/shared/projects/\" export SPACK_ROOT = \" ${ PROJECT } /spack\" if [[ -f \" ${ SPACK_ROOT } /share/spack/setup-env.sh\" && -n ${ SLURM_JOB_ID } ]] ; then source ${ SPACK_ROOT } /share/spack/setup-env.sh \" fi Adapt accordingly Do NOT forget to replace with your project name Spack Binary Cache \u00b6 At ISC'22 , in conjunction with the Spack v0.18 release, AWS announced a collaborative effort to host a Binary Cache . The binary cache stores prebuilt versions of common HPC packages, meaning that the installation process is reduced to relocation rather than compilation. To increase flexibility the binary cache contains package builds with different variants and built with different compilers. The purpose of the binary cache is to drastically speed up package installation, especially when long dependency chains exist. The binary cache is periodically updated with the latest versions of packages, and is released in conjunction with Spack releases. Thus you can use the v0.18 binary cache to have packages specifically from that Spack release. Alternatively, you can make use of the develop binary cache, which is kept up to date with the Spack develop branch. 
To add the develop binary cache and trust the associated GPG keys: spack mirror add binary_mirror https://binaries.spack.io/develop spack buildcache keys -it Installing packages \u00b6 The notation for installing packages is unchanged when the binary cache has been enabled. Spack will first check to see if the package is installable from the binary cache, and only upon failure will it install from source. We see confirmation of this in the output: $ spack install bzip2 == > Installing bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k == > Fetching https://binaries.spack.io/develop/build_cache/linux-amzn2-x86_64_v4-gcc-7.3.1-bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k.spec.json.sig gpg: Signature made Fri 01 Jul 2022 04 :21:22 AM UTC using RSA key ID 3DB0C723 gpg: Good signature from \"Spack Project Official Binaries \" == > Fetching https://binaries.spack.io/develop/build_cache/linux-amzn2-x86_64_v4/gcc-7.3.1/bzip2-1.0.8/linux-amzn2-x86_64_v4-gcc-7.3.1-bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k.spack == > Extracting bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k from binary cache [ + ] /shared/spack/opt/spack/linux-amzn2-x86_64_v4/gcc-7.3.1/bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k Bypassing the binary cache \u00b6 Sometimes we might want to install a specific package from source, and bypass the binary cache. To achieve this, we can pass the --no-cache flag to the install command. We can use this notation to install cowsay. spack install --no-cache cowsay To compile any software we are going to need a compiler. Out of the box, Spack does not know about any compilers on the system. To list your registered compilers, please use the following command: spack compiler list It will return an empty list the first time you use it after installing Spack == > No compilers available. Run ` spack compiler find ` to autodetect compilers AWS ParallelCluster installs GCC by default, so you can ask Spack to discover compilers on the system: spack compiler find This should identify your GCC install. In your case a compiler should be found. == > Added 1 new compiler to /home/ec2-user/.spack/linux/compilers.yaml gcc@7.3.1 == > Compilers are defined in the following files: /home/ec2-user/.spack/linux/compilers.yaml Install other compilers \u00b6 This default GCC compiler may be sufficient for many applications, but we may want to install a newer version of GCC or other compilers in general. Spack is able to install compilers like any other package. Newer GCC version \u00b6 For example, we can install a version of GCC 11.2.0, complete with binutils, and then add it to the Spack compiler list. spack install -j [num cores] gcc@11.2.0+binutils spack load gcc@11.2.0 spack compiler find spack unload As Spack is building GCC and all of the dependency packages, this install can take a long time (>30 mins). Arm Compiler for Linux \u00b6 The Arm Compiler for Linux (ACfL) can be installed by Spack on Arm systems, like the Graviton2 (C6g) or Graviton3 (C7g). spack install arm@22.0.1 spack load arm@22.0.1 spack compiler find spack unload Where to build softwares \u00b6 The cluster has quite a small headnode, which means that the compilation of complex software on it is prohibited. One simple solution is to use the compute nodes to perform the Spack installations, by submitting the command through Slurm. srun -N1 -c 36 spack install -j36 gcc@11.2.0+binutils AWS Environment \u00b6 The versions of these external packages may change and are included for reference. 
The Cluster comes pre-installed with Slurm , libfabric , PMIx , Intel MPI , and Open MPI . To use these packages, you need to tell spack where to find them. cat << EOF > $SPACK_ROOT/etc/spack/packages.yaml packages: libfabric: variants: fabrics=efa,tcp,udp,sockets,verbs,shm,mrail,rxd,rxm externals: - spec: libfabric@1.13.2 fabrics=efa,tcp,udp,sockets,verbs,shm,mrail,rxd,rxm prefix: /opt/amazon/efa buildable: False openmpi: variants: fabrics=ofi +legacylaunchers schedulers=slurm ^libfabric externals: - spec: openmpi@4.1.1 %gcc@7.3.1 prefix: /opt/amazon/openmpi buildable: False pmix: externals: - spec: pmix@3.2.3 ~pmi_backwards_compatibility prefix: /opt/pmix buildable: False slurm: variants: +pmix sysconfdir=/opt/slurm/etc externals: - spec: slurm@21.08.8-2 +pmix sysconfdir=/opt/slurm/etc prefix: /opt/slurm buildable: False armpl: externals: - spec: armpl@21.0.0%gcc@9.3.0 prefix: /opt/arm/armpl/21.0.0/armpl_21.0_gcc-9.3/ buildable: False EOF Add the GCC 9.3 Compiler \u00b6 The Graviton image ships with an additional compiler within the ArmPL project. We can add this compiler to the Spack environment with the following command: spack compiler add /opt/arm/armpl/gcc/9.3.0/bin/ Open MPI \u00b6 For Open MPI we have already made the definition to set libfabric as a dependency of Open MPI. So by default it will configure it correctly. spack install openmpi%gcc@11.2.0 Additional resources \u00b6 Job submission relies on the Slurm scheduler. Please refer to the following page for more details. Spack tutorial on AWS ParallelCluster","title":"Environment Setup"},{"location":"aws/setup/#environment-setup","text":"AWS suggest to use Spack to setup your software environment. There is no hard requirement that you must use Spack. However we have included it here, as it is a quick, simple way to setup a development environment. The official ULHPC swsets are not available on the AWS cluster. If you prefer to use EasyBuild or manually compile softwares, please refer to the ULHPC software documentation for this purpose.","title":"Environment Setup"},{"location":"aws/setup/#environment-modules-and-lmod","text":"Like the ULHPC facility, the AWS cluster relies on the Environment Modules / LMod framework which provided the module utility on Compute nodes to manage nearly all software. There are two main advantages of the module approach: ULHPC can provide many different versions and/or installations of a single software package on a given machine, including a default version as well as several older and newer version. Users can easily switch to different versions or installations without having to explicitly specify different paths. With modules, the PATH and related environment variables ( LD_LIBRARY_PATH , MANPATH , etc.) are automatically managed. Environment Modules in itself are a standard and well-established technology across HPC sites, to permit developing and using complex software and libraries build with dependencies, allowing multiple versions of software stacks and combinations thereof to co-exist. It brings the module command which is used to manage environment variables such as PATH , LD_LIBRARY_PATH and MANPATH , enabling the easy loading and unloading of application/library profiles and their dependencies. See https://hpc-docs.uni.lu/environment/modules/ for more details Command Description module avail Lists all the modules which are available to be loaded module spider Search for among available modules (Lmod only) module load [mod2...] 
Load a module module unload Unload a module module list List loaded modules module purge Unload all modules (purge) module display Display what a module does module use Prepend the directory to the MODULEPATH environment variable module unuse Remove the directory from the MODULEPATH environment variable At the heart of environment modules interaction resides the following components: the MODULEPATH environment variable, which defines the list of searched directories for modulefiles modulefile Take a look at the current values: $ echo $MODULEPATH /shared/apps/easybuild/modules/all:/usr/share/Modules/modulefiles:/etc/modulefiles:/usr/share/modulefiles/Linux:/usr/share/modulefiles/Core:/usr/share/lmod/lmod/modulefiles/Core $ module show toolchain/foss ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- /shared/apps/easybuild/modules/all/toolchain/foss/2022b.lua: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- help ([[ Description =========== GNU Compiler Collection ( GCC ) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS ( BLAS and LAPACK support ) , FFTW and ScaLAPACK. More information ================ - Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain ]]) whatis ( \"Description: GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.\" ) whatis ( \"Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain\" ) whatis ( \"URL: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain\" ) conflict ( \"toolchain/foss\" ) load ( \"compiler/GCC/12.2.0\" ) load ( \"mpi/OpenMPI/4.1.4-GCC-12.2.0\" ) load ( \"lib/FlexiBLAS/3.2.1-GCC-12.2.0\" ) load ( \"numlib/FFTW/3.3.10-GCC-12.2.0\" ) load ( \"numlib/FFTW.MPI/3.3.10-gompi-2022b\" ) load ( \"numlib/ScaLAPACK/2.2.0-gompi-2022b-fb\" ) setenv ( \"EBROOTFOSS\" , \"/shared/apps/easybuild/software/foss/2022b\" ) setenv ( \"EBVERSIONFOSS\" , \"2022b\" ) setenv ( \"EBDEVELFOSS\" , \"/shared/apps/easybuild/software/foss/2022b/easybuild/toolchain-foss-2022b-easybuild-devel\" ) Now you can search for a given software using module spider : $ module spider lang/Python ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ lang/Python: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Description: Python is a programming language that lets you work more quickly and integrate your systems more 
effectively. Versions: lang/Python/3.10.8-GCCcore-12.2.0-bare lang/Python/3.10.8-GCCcore-12.2.0 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ For detailed information about a specific \"lang/Python\" module ( including how to load the modules ) use the module ' s full name. For example: $ module spider lang/Python/3.10.8-GCCcore-12.2.0 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Let's see the effect of loading/unloading a module $ module list No modules loaded $ which python /usr/bin/python $ python --version # System level python Python 2 .7.18 $ module load lang/Python # use TAB to auto-complete $ which python /shared/apps/easybuild/software/Python/3.10.8-GCCcore-12.2.0/bin/python $ python --version Python 3 .10.8 $ module purge","title":"Environment modules and LMod"},{"location":"aws/setup/#installing-softwares-with-easybuild","text":"EasyBuild is a tool that allows to perform automated and reproducible compilation and installation of software. A large number of scientific software are supported ( 2995 supported software packages in the last release 4.8.0) -- see also What is EasyBuild? All builds and installations are performed at user level, so you don't need the admin (i.e. root ) rights. The software are installed in your home directory under $EASYBUILD_PREFIX -- see https://hpc-docs.uni.lu/environment/easybuild/ Default setting (local) Recommended setting $EASYBUILD_PREFIX $HOME/.local/easybuild /shared/apps/easybuild/ built software are placed under ${EASYBUILD_PREFIX}/software/ modules install path ${EASYBUILD_PREFIX}/modules/all","title":"Installing softwares with Easybuild"},{"location":"aws/setup/#easybuild-main-concepts","text":"See also the official Easybuild Tutorial: \"Maintaining a Modern Scientific Software Stack Made Easy with EasyBuild\" EasyBuild relies on two main concepts: Toolchains and EasyConfig files . A toolchain corresponds to a compiler and a set of libraries which are commonly used to build a software. The two main toolchains frequently used on the UL HPC platform are the foss (\" Free and Open Source Software \") and the intel one. foss is based on the GCC compiler and on open-source libraries (OpenMPI, OpenBLAS, etc.). intel is based on the Intel compiler and on Intel libraries (Intel MPI, Intel Math Kernel Library, etc.). An EasyConfig file is a simple text file that describes the build process of a software. For most software that uses standard procedures (like configure , make and make install ), this file is very simple. Many EasyConfig files are already provided with EasyBuild. By default, EasyConfig files and generated modules are named using the following convention: --- . 
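As an illustration of this flat convention, using easyconfigs that appear later on this page (the placeholder names are the usual EasyBuild ones and are given here as an assumption, since the angle-bracket placeholders were stripped from the rendered text):

```bash
# <name>-<version>-<toolchain>[<versionsuffix>].eb
#   HPL-2.3-foss-2022b.eb                  -> HPL 2.3 built with the foss-2022b toolchain
#   Python-3.10.8-GCCcore-12.2.0-bare.eb   -> Python 3.10.8, GCCcore-12.2.0 toolchain, '-bare' version suffix
```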
However, we use a hierarchical approach where the software is classified under a category (or class) -- see the CategorizedModuleNamingScheme option for the EASYBUILD_MODULE_NAMING_SCHEME environment variable -- meaning that the layout will respect the following hierarchy: //-- Additional details are available on EasyBuild website: EasyBuild homepage EasyBuild documentation What is EasyBuild? Toolchains EasyConfig files List of supported software packages Easybuild is provided to you as a software module to complement the existing software set. module load tools/EasyBuild In case you want to install the latest version yourself, please follow the official instructions . Nonetheless, we strongly recommend using the provided module. Don't forget to set up your local Easybuild configuration first. What is important for the installation of Easybuild are the following variables: EASYBUILD_PREFIX : where to install local modules and software, i.e. $HOME/.local/easybuild EASYBUILD_MODULES_TOOL : the type of modules tool you are using, i.e. LMod in this case EASYBUILD_MODULE_NAMING_SCHEME : the way the software and modules should be organized (flat view or hierarchical) -- we advise using CategorizedModuleNamingScheme Important Recall that you should be on a compute node to install Easybuild (otherwise the checks of the module command availability will fail). Install a missing software by complementing the software set \u00b6 The current software set contains the toolchain foss-2022b that is necessary to build other software. We have built OpenMPI-4.1.4 to take into account the latest AWS EFA and the Slurm integration. In order to install missing software for your project, you can complement the existing software set located at /shared/apps/easybuild by using the provided EasyBuild module (latest version). Once Easybuild has been loaded, you can search and install new software. By default, this new software will be installed at ${HOME}/.local/easybuild . Feel free to adapt the environment variable ${EASYBUILD_PREFIX} to select a new installation directory. Let's try to install a missing software ( headnode ) $ srun -p small -N 1 -n1 -c16 --pty bash -i ( node ) $ module spider HPL # HPL is a software package that solves a (random) dense linear system in double precision (64 bits) Lmod has detected the following error: Unable to find: \"HPL\" . ( node ) $ module load tools/EasyBuild # Search for recipes for the missing software ( node ) $ eb -S HPL == found valid index for /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs, so using it... CFGS1 = /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs * $CFGS1 /b/bashplotlib/bashplotlib-0.6.5-GCCcore-10.3.0.eb * $CFGS1 /h/HPL/HPL-2.1-foss-2016.04.eb * $CFGS1 /h/HPL/HPL-2.1-foss-2016.06.eb * $CFGS1 /h/HPL/HPL-2.1-foss-2016a.eb * $CFGS1 /h/HPL/HPL-2.1-foss-2016b.eb ... * $CFGS1 /h/HPL/HPL-2.3-foss-2022a.eb * $CFGS1 /h/HPL/HPL-2.3-foss-2022b.eb * $CFGS1 /h/HPL/HPL-2.3-foss-2023a.eb ... * $CFGS1 /h/HPL/HPL-2.3-intel-2022b.eb * $CFGS1 /h/HPL/HPL-2.3-intel-2023.03.eb * $CFGS1 /h/HPL/HPL-2.3-intel-2023a.eb * $CFGS1 /h/HPL/HPL-2.3-intelcuda-2019b.eb * $CFGS1 /h/HPL/HPL-2.3-intelcuda-2020a.eb * $CFGS1 /h/HPL/HPL-2.3-iomkl-2019.01.eb * $CFGS1 /h/HPL/HPL-2.3-iomkl-2021a.eb * $CFGS1 /h/HPL/HPL-2.3-iomkl-2021b.eb * $CFGS1 /h/HPL/HPL_parallel-make.patch From this list, you should select the version matching the target toolchain version -- here foss-2022b . 
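Before running the installation below, you may want to point EasyBuild at the desired installation directory and configuration; a minimal sketch based on the three variables described above (the values are the defaults or recommendations quoted on this page, to adapt to your setup):

```bash
# Local EasyBuild configuration (adapt EASYBUILD_PREFIX to install elsewhere, e.g. a shared project directory)
export EASYBUILD_PREFIX=${HOME}/.local/easybuild
export EASYBUILD_MODULES_TOOL=Lmod
export EASYBUILD_MODULE_NAMING_SCHEME=CategorizedModuleNamingScheme
```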
Once you pick a given recipy, install it with eb .eb [-D] -r -D enables the dry-run mode to check what's going to be install -- ALWAYS try it first -r enables the robot mode to automatically install all dependencies while searching for easyconfigs in a set of pre-defined directories -- you can also prepend new directories to search for eb files (like the current directory $PWD ) using the option and syntax --robot-paths=$PWD: (do not forget the ':'). See Controlling the robot search path documentation The $CFGS/ prefix should be dropped unless you know what you're doing (and thus have previously defined the variable -- see the first output of the eb -S [...] command). Let's try to review the missing dependencies from a dry-run : # Select the one matching the target software set version ( node ) $ eb HPL-2.3-foss-2022b.eb -Dr # Dry-run == Temporary log file in case of crash /tmp/eb-lzv785be/easybuild-ihga94y0.log == found valid index for /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs, so using it... == found valid index for /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs, so using it... Dry run: printing build status of easyconfigs and dependencies CFGS = /shared/apps/easybuild/software/EasyBuild/4.8.0/easybuild/easyconfigs * [ x ] $CFGS /m/M4/M4-1.4.19.eb ( module: devel/M4/1.4.19 ) * [ x ] $CFGS /b/Bison/Bison-3.8.2.eb ( module: lang/Bison/3.8.2 ) * [ x ] $CFGS /f/flex/flex-2.6.4.eb ( module: lang/flex/2.6.4 ) * [ x ] $CFGS /z/zlib/zlib-1.2.12.eb ( module: lib/zlib/1.2.12 ) * [ x ] $CFGS /b/binutils/binutils-2.39.eb ( module: tools/binutils/2.39 ) * [ x ] $CFGS /g/GCCcore/GCCcore-12.2.0.eb ( module: compiler/GCCcore/12.2.0 ) * [ x ] $CFGS /z/zlib/zlib-1.2.12-GCCcore-12.2.0.eb ( module: lib/zlib/1.2.12-GCCcore-12.2.0 ) * [ x ] $CFGS /h/help2man/help2man-1.49.2-GCCcore-12.2.0.eb ( module: tools/help2man/1.49.2-GCCcore-12.2.0 ) * [ x ] $CFGS /m/M4/M4-1.4.19-GCCcore-12.2.0.eb ( module: devel/M4/1.4.19-GCCcore-12.2.0 ) * [ x ] $CFGS /b/Bison/Bison-3.8.2-GCCcore-12.2.0.eb ( module: lang/Bison/3.8.2-GCCcore-12.2.0 ) * [ x ] $CFGS /f/flex/flex-2.6.4-GCCcore-12.2.0.eb ( module: lang/flex/2.6.4-GCCcore-12.2.0 ) * [ x ] $CFGS /b/binutils/binutils-2.39-GCCcore-12.2.0.eb ( module: tools/binutils/2.39-GCCcore-12.2.0 ) * [ x ] $CFGS /p/pkgconf/pkgconf-1.9.3-GCCcore-12.2.0.eb ( module: devel/pkgconf/1.9.3-GCCcore-12.2.0 ) * [ x ] $CFGS /g/groff/groff-1.22.4-GCCcore-12.2.0.eb ( module: tools/groff/1.22.4-GCCcore-12.2.0 ) * [ x ] $CFGS /n/ncurses/ncurses-6.3-GCCcore-12.2.0.eb ( module: devel/ncurses/6.3-GCCcore-12.2.0 ) * [ x ] $CFGS /e/expat/expat-2.4.9-GCCcore-12.2.0.eb ( module: tools/expat/2.4.9-GCCcore-12.2.0 ) * [ x ] $CFGS /b/bzip2/bzip2-1.0.8-GCCcore-12.2.0.eb ( module: tools/bzip2/1.0.8-GCCcore-12.2.0 ) * [ x ] $CFGS /g/GCC/GCC-12.2.0.eb ( module: compiler/GCC/12.2.0 ) * [ x ] $CFGS /f/FFTW/FFTW-3.3.10-GCC-12.2.0.eb ( module: numlib/FFTW/3.3.10-GCC-12.2.0 ) * [ x ] $CFGS /u/UnZip/UnZip-6.0-GCCcore-12.2.0.eb ( module: tools/UnZip/6.0-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libreadline/libreadline-8.2-GCCcore-12.2.0.eb ( module: lib/libreadline/8.2-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libtool/libtool-2.4.7-GCCcore-12.2.0.eb ( module: lib/libtool/2.4.7-GCCcore-12.2.0 ) * [ x ] $CFGS /m/make/make-4.3-GCCcore-12.2.0.eb ( module: devel/make/4.3-GCCcore-12.2.0 ) * [ x ] $CFGS /t/Tcl/Tcl-8.6.12-GCCcore-12.2.0.eb ( module: lang/Tcl/8.6.12-GCCcore-12.2.0 ) * [ x ] $CFGS /p/pkgconf/pkgconf-1.8.0.eb ( module: devel/pkgconf/1.8.0 ) * [ x ] $CFGS 
/s/SQLite/SQLite-3.39.4-GCCcore-12.2.0.eb ( module: devel/SQLite/3.39.4-GCCcore-12.2.0 ) * [ x ] $CFGS /o/OpenSSL/OpenSSL-1.1.eb ( module: system/OpenSSL/1.1 ) * [ x ] $CFGS /l/libevent/libevent-2.1.12-GCCcore-12.2.0.eb ( module: lib/libevent/2.1.12-GCCcore-12.2.0 ) * [ x ] $CFGS /c/cURL/cURL-7.86.0-GCCcore-12.2.0.eb ( module: tools/cURL/7.86.0-GCCcore-12.2.0 ) * [ x ] $CFGS /d/DB/DB-18.1.40-GCCcore-12.2.0.eb ( module: tools/DB/18.1.40-GCCcore-12.2.0 ) * [ x ] $CFGS /p/Perl/Perl-5.36.0-GCCcore-12.2.0.eb ( module: lang/Perl/5.36.0-GCCcore-12.2.0 ) * [ x ] $CFGS /a/Autoconf/Autoconf-2.71-GCCcore-12.2.0.eb ( module: devel/Autoconf/2.71-GCCcore-12.2.0 ) * [ x ] $CFGS /a/Automake/Automake-1.16.5-GCCcore-12.2.0.eb ( module: devel/Automake/1.16.5-GCCcore-12.2.0 ) * [ x ] $CFGS /a/Autotools/Autotools-20220317-GCCcore-12.2.0.eb ( module: devel/Autotools/20220317-GCCcore-12.2.0 ) * [ x ] $CFGS /n/numactl/numactl-2.0.16-GCCcore-12.2.0.eb ( module: tools/numactl/2.0.16-GCCcore-12.2.0 ) * [ x ] $CFGS /u/UCX/UCX-1.13.1-GCCcore-12.2.0.eb ( module: lib/UCX/1.13.1-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libfabric/libfabric-1.16.1-GCCcore-12.2.0.eb ( module: lib/libfabric/1.16.1-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libffi/libffi-3.4.4-GCCcore-12.2.0.eb ( module: lib/libffi/3.4.4-GCCcore-12.2.0 ) * [ x ] $CFGS /x/xorg-macros/xorg-macros-1.19.3-GCCcore-12.2.0.eb ( module: devel/xorg-macros/1.19.3-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libpciaccess/libpciaccess-0.17-GCCcore-12.2.0.eb ( module: system/libpciaccess/0.17-GCCcore-12.2.0 ) * [ x ] $CFGS /u/UCC/UCC-1.1.0-GCCcore-12.2.0.eb ( module: lib/UCC/1.1.0-GCCcore-12.2.0 ) * [ x ] $CFGS /n/ncurses/ncurses-6.3.eb ( module: devel/ncurses/6.3 ) * [ x ] $CFGS /g/gettext/gettext-0.21.1.eb ( module: tools/gettext/0.21.1 ) * [ x ] $CFGS /x/XZ/XZ-5.2.7-GCCcore-12.2.0.eb ( module: tools/XZ/5.2.7-GCCcore-12.2.0 ) * [ x ] $CFGS /p/Python/Python-3.10.8-GCCcore-12.2.0-bare.eb ( module: lang/Python/3.10.8-GCCcore-12.2.0-bare ) * [ x ] $CFGS /b/BLIS/BLIS-0.9.0-GCC-12.2.0.eb ( module: numlib/BLIS/0.9.0-GCC-12.2.0 ) * [ x ] $CFGS /o/OpenBLAS/OpenBLAS-0.3.21-GCC-12.2.0.eb ( module: numlib/OpenBLAS/0.3.21-GCC-12.2.0 ) * [ x ] $CFGS /l/libarchive/libarchive-3.6.1-GCCcore-12.2.0.eb ( module: tools/libarchive/3.6.1-GCCcore-12.2.0 ) * [ x ] $CFGS /l/libxml2/libxml2-2.10.3-GCCcore-12.2.0.eb ( module: lib/libxml2/2.10.3-GCCcore-12.2.0 ) * [ x ] $CFGS /c/CMake/CMake-3.24.3-GCCcore-12.2.0.eb ( module: devel/CMake/3.24.3-GCCcore-12.2.0 ) * [ ] $CFGS /h/hwloc/hwloc-2.8.0-GCCcore-12.2.0.eb ( module: system/hwloc/2.8.0-GCCcore-12.2.0 ) * [ ] $CFGS /p/PMIx/PMIx-4.2.2-GCCcore-12.2.0.eb ( module: lib/PMIx/4.2.2-GCCcore-12.2.0 ) * [ x ] $CFGS /o/OpenMPI/OpenMPI-4.1.4-GCC-12.2.0.eb ( module: mpi/OpenMPI/4.1.4-GCC-12.2.0 ) * [ x ] $CFGS /f/FlexiBLAS/FlexiBLAS-3.2.1-GCC-12.2.0.eb ( module: lib/FlexiBLAS/3.2.1-GCC-12.2.0 ) * [ x ] $CFGS /g/gompi/gompi-2022b.eb ( module: toolchain/gompi/2022b ) * [ x ] $CFGS /f/FFTW.MPI/FFTW.MPI-3.3.10-gompi-2022b.eb ( module: numlib/FFTW.MPI/3.3.10-gompi-2022b ) * [ x ] $CFGS /s/ScaLAPACK/ScaLAPACK-2.2.0-gompi-2022b-fb.eb ( module: numlib/ScaLAPACK/2.2.0-gompi-2022b-fb ) * [ x ] $CFGS /f/foss/foss-2022b.eb ( module: toolchain/foss/2022b ) * [ ] $CFGS /h/HPL/HPL-2.3-foss-2022b.eb ( module: tools/HPL/2.3-foss-2022b ) == Temporary log file ( s ) /tmp/eb-lzv785be/easybuild-ihga94y0.log* have been removed. == Temporary directory /tmp/eb-lzv785be has been removed. 
Let's try to install it (remove the -D ): # Select the one matching the target software set version ( node ) $ eb HPL-2.3-foss-2022b.eb -r From now on, you should be able to see the new module. ( node ) $ module spider HPL ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ tools/HPL: tools/HPL/2.3-foss-2022b ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Description: HPL is a software package that solves a ( random ) dense linear system in double precision ( 64 bits ) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark. This module can be loaded directly: module load tools/HPL/2.3-foss-2022b Help: Description =========== HPL is a software package that solves a ( random ) dense linear system in double precision ( 64 bits ) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark. More information ================ - Homepage: https://www.netlib.org/benchmark/hpl/ Tips : When you load a module generated by Easybuild, it is installed within the directory reported by the $EBROOT variable. In the above case, you will find the generated binary in ${EBROOTHPL}/ .","title":"Install a missing software by complementing the software set"},{"location":"aws/setup/#installing-softwares-with-spack","text":"To do this, please clone the Spack GitHub repository into a SPACK_ROOT which is defined to be on a your project directory, i.e., /shared/project/ . Then add the configuration to you ~/.bashrc file. You may wish to change the location of the SPACK_ROOT to fit your specific cluster configuration. Here, we consider the release v0.19 of Spack from the releases/v0.19 branch, however, you may wish to checkout the develop branch for the latest packages. git clone -c feature.manyFiles = true -b releases/v0.19 https://github.com/spack/spack $SPACK_ROOT Then, add the following lines in your .bashrc export PROJECT = \"/shared/projects/\" export SPACK_ROOT = \" ${ PROJECT } /spack\" if [[ -f \" ${ SPACK_ROOT } /share/spack/setup-env.sh\" && -n ${ SLURM_JOB_ID } ]] ; then source ${ SPACK_ROOT } /share/spack/setup-env.sh \" fi Adapt accordingly Do NOT forget to replace with your project name","title":"Installing softwares with Spack"},{"location":"aws/setup/#spack-binary-cache","text":"At ISC'22 , in conjunction with the Spack v0.18 release, AWS announced a collaborative effort to host a Binary Cache . The binary cache stores prebuilt versions of common HPC packages, meaning that the installation process is reduced to relocation rather than compilation. To increase flexibility the binary cache contains package builds with different variants and built with different compilers. The purpose of the binary cache is to drastically speed up package installation, especially when long dependency chains exist. 
The binary cache is periodically updated with the latest versions of packages, and is released in conjunction with Spack releases. Thus you can use the v0.18 binary cache to have packages specifically from that Spack release. Alternatively, you can make use of the develop binary cache, which is kept up to date with the Spack develop branch. To add the develop binary cache, and trusting the associated gpg keys: spack mirror add binary_mirror https://binaries.spack.io/develop spack buildcache keys -it","title":"Spack Binary Cache"},{"location":"aws/setup/#installing-packages","text":"The notation for installing packages, when the binary cache has been enabled is unchanged. Spack will first check to see if the package is installable from the binary cache, and only upon failure will it install from source. We see confirmation of this in the output: $ spack install bzip2 == > Installing bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k == > Fetching https://binaries.spack.io/develop/build_cache/linux-amzn2-x86_64_v4-gcc-7.3.1-bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k.spec.json.sig gpg: Signature made Fri 01 Jul 2022 04 :21:22 AM UTC using RSA key ID 3DB0C723 gpg: Good signature from \"Spack Project Official Binaries \" == > Fetching https://binaries.spack.io/develop/build_cache/linux-amzn2-x86_64_v4/gcc-7.3.1/bzip2-1.0.8/linux-amzn2-x86_64_v4-gcc-7.3.1-bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k.spack == > Extracting bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k from binary cache [ + ] /shared/spack/opt/spack/linux-amzn2-x86_64_v4/gcc-7.3.1/bzip2-1.0.8-paghlsmxrq7p26qna6ml6au4fj2bdw6k","title":"Installing packages"},{"location":"aws/setup/#bypassing-the-binary-cache","text":"Sometimes we might want to install a specific package from source, and bypass the binary cache. To achieve this we can pass the --no-cache flag to the install command. We can use this notation to install cowsay. spack install --no-cache cowsay To compile any software we are going to need a compiler. Out of the box Spack does not know about any compilers on the system. To list your registered compilers, please use the following command: spack compiler list It will return an empty list the first time you used after installing Spack == > No compilers available. Run ` spack compiler find ` to autodetect compilers AWS ParallelCluster installs GCC by default, so you can ask Spack to discover compilers on the system: spack compiler find This should identify your GCC install. In your case a conmpiler should be found. == > Added 1 new compiler to /home/ec2-user/.spack/linux/compilers.yaml gcc@7.3.1 == > Compilers are defined in the following files: /home/ec2-user/.spack/linux/compilers.yaml","title":"Bypassing the binary cache"},{"location":"aws/setup/#install-other-compilers","text":"This default GCC compiler may be sufficient for many applications, we may want to install a newer version of GCC or other compilers in general. Spack is able to install compilers like any other package.","title":"Install other compilers"},{"location":"aws/setup/#newer-gcc-version","text":"For example we can install a version of GCC 11.2.0, complete with binutils, and then add it to the Spack compiler list. ```\u00b7bash spack install -j [num cores] gcc@11.2.0+binutils spack load gcc@11.2.0 spack compiler find spack unload As Spack is building GCC and all of the dependency packages this install can take a long time (>30 mins). 
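Since such a build can exceed 30 minutes, it can also be submitted as a batch job instead of being run interactively (see also "Where to build softwares" below). A minimal sketch, reusing the partition and core count shown elsewhere on this page; the job name and time limit are assumptions:

```bash
#!/bin/bash
#SBATCH --job-name=spack-gcc
#SBATCH --partition=small        # placeholder: partition used in the srun examples of this page
#SBATCH --nodes=1
#SBATCH --cpus-per-task=36
#SBATCH --time=02:00:00          # assumption: adapt to the expected build time

# SPACK_ROOT is assumed to be exported (e.g. via the ~/.bashrc snippet above)
source ${SPACK_ROOT}/share/spack/setup-env.sh

spack install -j36 gcc@11.2.0+binutils
```

Submit it with sbatch and monitor it with squeue, exactly as for any other job.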
## Arm Compiler for Linux The Arm Compiler for Linux (ACfL) can be installed by Spack on Arm systems, like the Graviton2 (C6g) or Graviton3 (C7g).o ```bash spack install arm@22.0.1 spack load arm@22.0.1 spack compiler find spack unload","title":"Newer GCC version"},{"location":"aws/setup/#where-to-build-softwares","text":"The cluster has quite a small headnode, this means that the compilation of complex software is prohibited. One simple solution is to use the compute nodes to perform the Spack installations, by submitting the command through Slurm. srun -N1 -c 36 spack install -j36 gcc@11.2.0+binutils","title":"Where to build softwares"},{"location":"aws/setup/#aws-environment","text":"The versions of these external packages may change and are included for reference. The Cluster comes pre-installed with Slurm , libfabric , PMIx , Intel MPI , and Open MPI . To use these packages, you need to tell spack where to find them. cat << EOF > $SPACK_ROOT/etc/spack/packages.yaml packages: libfabric: variants: fabrics=efa,tcp,udp,sockets,verbs,shm,mrail,rxd,rxm externals: - spec: libfabric@1.13.2 fabrics=efa,tcp,udp,sockets,verbs,shm,mrail,rxd,rxm prefix: /opt/amazon/efa buildable: False openmpi: variants: fabrics=ofi +legacylaunchers schedulers=slurm ^libfabric externals: - spec: openmpi@4.1.1 %gcc@7.3.1 prefix: /opt/amazon/openmpi buildable: False pmix: externals: - spec: pmix@3.2.3 ~pmi_backwards_compatibility prefix: /opt/pmix buildable: False slurm: variants: +pmix sysconfdir=/opt/slurm/etc externals: - spec: slurm@21.08.8-2 +pmix sysconfdir=/opt/slurm/etc prefix: /opt/slurm buildable: False armpl: externals: - spec: armpl@21.0.0%gcc@9.3.0 prefix: /opt/arm/armpl/21.0.0/armpl_21.0_gcc-9.3/ buildable: False EOF","title":"AWS Environment"},{"location":"aws/setup/#add-the-gcc-93-compiler","text":"The Graviton image ships with an additional compiler within the ArmPL project. We can add this compiler to the Spack environment with the following command: spack compiler add /opt/arm/armpl/gcc/9.3.0/bin/","title":"Add the GCC 9.3 Compiler"},{"location":"aws/setup/#open-mpi","text":"For Open MPI we have already made the definition to set libfabric as a dependency of Open MPI. So by default it will configure it correctly. spack install openmpi%gcc@11.2.0","title":"Open MPI"},{"location":"aws/setup/#additional-resources","text":"Job submission relies on the Slurm scheduler. Please refer to the following page for more details. Spack tutorial on AWS ParallelCluster","title":"Additional resources"},{"location":"connect/access/","text":"Login Nodes \u00b6 Opening an SSH connection to ULHPC systems results in a connection to an access node. Iris ssh iris-cluster Aion ssh aion-cluster Iris (X11) To be able to further run GUI applications within your [interactive] jobs: ssh -X iris-cluster # OR on Mac OS: ssh -Y iris-cluster Aion (X11) To be able to further run GUI applications within your [interactive] jobs: ssh -X aion-cluster # OR on Mac OS: ssh -Y aion-cluster Important Recall that you SHOULD NOT run any HPC application on the login nodes. That's why the module command is NOT available on them. Usage \u00b6 On access nodes, typical user tasks include Transferring and managing files Editing files Submitting jobs Appropriate Use Do not run compute- or memory-intensive applications on access nodes. These nodes are a shared resource. ULHPC admins may terminate processes which are having negative impacts on other users or the systems. 
Avoid watch If you must use the watch command, please use a much longer interval such as 5 minutes (=300 sec), e.g., watch -n 300 . Avoid Visual Studio Code Avoid using Visual Studio Code to connect to the HPC, as it consumes a lot of resources in the login nodes. Heavy development shouldn't be done directly on the HPC. For most tasks using a terminal based editor should be enough like: Vim or Emacs . If you want to have some more advanced features try Neovim where you can add plugins to meet your specific needs. Tips \u00b6 ULHPC provides a wide variety of qos's An interactive qos is available on Iris and Aion for compute- and memory-intensive interactive work. Please, use an interactive job for resource-intensive processes instead of running them on access nodes. Tip To help identify processes that make heavy use of resources, you can use: top -u $USER /usr/bin/time -v ./my_command Running GUI Application over X11 If you intend to run GUI applications (MATLAB, Stata, ParaView etc.), you MUST connect by SSH to the login nodes with the -X (or -Y on Mac OS) option: Iris ssh -X iris-cluster # OR on Mac OS: ssh -Y iris-cluster Aion ssh -X aion-cluster # OR on Mac OS: ssh -Y aion-cluster Install Neovim using Micormamba Neovim is not installed by default on the HPC but you can install it using Micromamba . micromamba create --name editor-env micromamba install --name editor-env conda-forge::nvim After installation you can create a alias in your .bashrc for easy access: alias nvim = 'micromamba run --name editor-env nvim'","title":"Access/Login Servers"},{"location":"connect/access/#login-nodes","text":"Opening an SSH connection to ULHPC systems results in a connection to an access node. Iris ssh iris-cluster Aion ssh aion-cluster Iris (X11) To be able to further run GUI applications within your [interactive] jobs: ssh -X iris-cluster # OR on Mac OS: ssh -Y iris-cluster Aion (X11) To be able to further run GUI applications within your [interactive] jobs: ssh -X aion-cluster # OR on Mac OS: ssh -Y aion-cluster Important Recall that you SHOULD NOT run any HPC application on the login nodes. That's why the module command is NOT available on them.","title":"Login Nodes"},{"location":"connect/access/#usage","text":"On access nodes, typical user tasks include Transferring and managing files Editing files Submitting jobs Appropriate Use Do not run compute- or memory-intensive applications on access nodes. These nodes are a shared resource. ULHPC admins may terminate processes which are having negative impacts on other users or the systems. Avoid watch If you must use the watch command, please use a much longer interval such as 5 minutes (=300 sec), e.g., watch -n 300 . Avoid Visual Studio Code Avoid using Visual Studio Code to connect to the HPC, as it consumes a lot of resources in the login nodes. Heavy development shouldn't be done directly on the HPC. For most tasks using a terminal based editor should be enough like: Vim or Emacs . If you want to have some more advanced features try Neovim where you can add plugins to meet your specific needs.","title":"Usage"},{"location":"connect/access/#tips","text":"ULHPC provides a wide variety of qos's An interactive qos is available on Iris and Aion for compute- and memory-intensive interactive work. Please, use an interactive job for resource-intensive processes instead of running them on access nodes. 
Avoid watch If you must use the watch command, please use a much longer interval such as 5 minutes (=300 sec), e.g., watch -n 300 . Avoid Visual Studio Code Avoid using Visual Studio Code to connect to the HPC, as it consumes a lot of resources on the login nodes. Heavy development shouldn't be done directly on the HPC. For most tasks, a terminal-based editor such as Vim or Emacs should be enough. If you want more advanced features, try Neovim , where you can add plugins to meet your specific needs.","title":"Usage"},{"location":"connect/access/#tips","text":"ULHPC provides a wide variety of QOSs. An interactive QOS is available on Iris and Aion for compute- and memory-intensive interactive work. Please use an interactive job for resource-intensive processes instead of running them on access nodes. 
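A minimal sketch of starting such an interactive job from the access node; the partition and time limit below are placeholder values to adapt to your granted access (check the ULHPC Slurm documentation for the exact names):

```bash
# Request an interactive shell on a compute node instead of working on the access node
# (partition and time are placeholders)
srun -p interactive --time=00:30:00 --pty bash -i
```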
Your password should meet the password requirements explained in the next section below, and must be 'safe' or 'very safe' according to the provided password strength meter.","title":"Identity Management (IdM/IPA)"},{"location":"connect/ipa/#ulhpc-identity-management-portal-idmipa","text":"ULHPC Identity Management Portal Red Hat Identity Management (IdM), formally referred to as IPA (\"Identity, Policy, and Audit\" -- see also https://www.freeipa.org ), provides a centralized and unified way to manage identity stores, authentication, policies, and authorization policies in a Linux-based domain. IdM significantly reduces the administrative overhead of managing different services individually and using different tools on different machines. All services (HPC and complementary ones) managed by the ULHPC team rely on a highly redundant setup involving several Redhat IdM/IPA server. SSH Key Management You are responsible for uploading and managing your authorized public SSH keys for your account, under the terms of the Acceptable Use Policy . Be aware that the ULHPC team review on a periodical basis the compliance to the policy, as well as the security of your keys. See also the note on deprecated/weak DSA/RSA keys References Redhat 7 Documentation","title":"ULHPC Identity Management Portal (IdM/IPA)"},{"location":"connect/ipa/#upload-your-ssh-key-on-the-ulhpc-identity-management-portal","text":"You should upload your public SSH key(s) *.pub to your user entry on the ULHPC Identity Management Portal. For that, connect to the ULHPC IdM portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail) and enter your ULHPC credentials. First copy the content of the key you want to add # Example with ED25519 **public** key ( laptop ) $> cat ~/.ssh/id_ed25519.pub ssh-ed25519 AAAA [ ... ] # OR the RSA **public** key ( laptop ) $> cat ~/.ssh/id_rsa.pub ssh-rsa AAAA [ ... ] Then on the portal: Select Identity / Users. Select your login entry Under the Settings tab in the Account Settings area, click SSH public keys: Add . Paste in the Base 64-encoded public key string, and click Set . Click Save at the top of the page. Your key fingerprint should be listed now. Listing SSH keys attached to your account through SSSD SSSD is a system daemon used on ULHPC computational resources. Its primary function is to provide access to local or remote identity and authentication resources through a common framework that can provide caching and offline support to the system. To easily access the authorized keys configured for your account from the command-line (i.e. without login on the ULHPC IPA portal), you can use: sss_ssh_authorizedkeys $(whoami)","title":"Upload your SSH key on the ULHPC Identity Management Portal"},{"location":"connect/ipa/#change-your-password","text":"connect to the ULHPC IdM portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail) and enter your ULHPC credentials. On the top right under your name, select the entry \"Change Password\" In the dialog window that appears, enter the current password, and your new password. 
Your password should meet the password requirements explained in the next section below, and must be 'safe' or 'very safe' according to the provided password strength meter.","title":"Change Your Password"},{"location":"connect/linux/","text":"Installation notes \u00b6 Normally, SSH is installed natively on your machine and the ssh command should be accessible from the command line (or a Terminal) through the ssh command: (your_workstation)$> ssh -V OpenSSH_8.4p1, OpenSSL 1.1.1h 22 Sep 2020 If that's not the case, consider installing the package openssh-client (Debian-like systems) or ssh (Redhat-like systems). Your local SSH configuration is located in the ~/.ssh/ directory and consists of: ~/.ssh/id_rsa.pub : your SSH public key. This one is the only one SAFE to distribute. ~/.ssh/id_rsa : the associated private key. NEVER EVER TRANSMIT THIS FILE (eventually) the configuration of the SSH client ~/.ssh/config ~/.ssh/known_hosts : Contains a list of host keys for all hosts you have logged into that are not already in the system-wide list of known host keys. This permits to detect man-in-the-middle attacks. SSH Key Management \u00b6 To generate an SSH keys, just use the ssh-keygen command, typically as follows: (your_workstation)$> ssh-keygen -t rsa -b 4096 Generating public/private rsa key pair. Enter file in which to save the key (/home/user/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: fe:e8:26:df:38:49:3a:99:d7:85:4e:c3:85:c8:24:5b username@yourworkstation The key's randomart image is: +---[RSA 4096]----+ | | | . E | | * . . | | . o . . | | S. o | | .. = . | | =.= o | | * ==o | | B=.o | +-----------------+ Warning To ensure the security of the platform and your data stored on it, you must protect your SSH keys with a passphrase! Additionally, your private key and passphrase should never be transmitted to anybody. After the execution of ssh-keygen command, the keys are generated and stored in the following files: SSH RSA Private key: ~/.ssh/id_rsa . Again, NEVER EVER TRANSMIT THIS FILE SSH RSA Public key: ~/.ssh/id_rsa.pub . This file is the ONLY one SAFE to distribute Ensure the access rights are correct on the generated keys using the ' ls -l ' command. The private key should be readable only by you: (your_workstation)$> ls -l ~/.ssh/id_* -rw------- 1 git git 751 Mar 1 20:16 /home/username/.ssh/id_rsa -rw-r--r-- 1 git git 603 Mar 1 20:16 /home/username/.ssh/id_rsa.pub Configuration \u00b6 In order to be able to login to the clusters, you will have to add this public key (i.e. id_rsa.pub ) into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail). The port on which the SSH servers are listening is not the default one ( i.e. 22) but 8022 . 
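Keep in mind that file-transfer tools must target the same non-standard port; a minimal sketch (the file and directory names are placeholders):

```bash
# scp and sftp take the port with -P (capital letter), rsync inherits it through ssh
scp -P 8022 myfile yourlogin@access-iris.uni.lu:
sftp -P 8022 yourlogin@access-aion.uni.lu
rsync -avz -e 'ssh -p 8022' mydir/ yourlogin@access-iris.uni.lu:mydir/
```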
Consequently, if you want to connect to the Iris cluster, open a terminal and run (substituting yourlogin with the login name you received from us): (your_workstation)$> ssh -p 8022 yourlogin@access-iris.uni.lu For the Aion cluster, the access server host name is access-aion.uni.lu : (your_workstation)$> ssh -p 8022 yourlogin@access-aion.uni.lu Alternatively, you may want to save the configuration of this connection (and create an alias for it) by editing the file ~/.ssh/config (create it if it does not already exist) and adding the following entries: Host iris-cluster Hostname access-iris.uni.lu Host aion-cluster Hostname access-aion.uni.lu Host *-cluster User yourlogin Port 8022 ForwardAgent no Now you'll be able to issue the following (simpler) command to connect to the cluster and obtain the welcome banner: (your_workstation)$> ssh iris-cluster ================================================================================== Welcome to access2.iris-cluster.uni.lux ================================================================================== _ ____ / \\ ___ ___ ___ ___ ___|___ \\ / _ \\ / __/ __/ _ \\/ __/ __| __) | / ___ \\ (_| (_| __/\\__ \\__ \\/ __/ /_/ \\_\\___\\___\\___||___/___/_____| _____ _ ____ _ _ __ / /_ _|_ __(_)___ / ___| |_ _ ___| |_ ___ _ _\\ \\ | | | || '__| / __| | | | | | | / __| __/ _ \\ '__| | | | | || | | \\__ \\ | |___| | |_| \\__ \\ || __/ | | | | ||___|_| |_|___/ \\____|_|\\__,_|___/\\__\\___|_| | | \\_\\ /_/ ================================================================================== === Computing Nodes ========================================= #RAM/n === #Cores == iris-[001-108] 108 Dell C6320 (2 Xeon E5-2680v4@2.4GHz [14c/120W]) 128GB 3024 iris-[109-168] 60 Dell C6420 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 128GB 1680 iris-[169-186] 18 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 504 +72 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 16GB +368640 iris-[187-190] 4 Dell R840 (4 Xeon Platin.8180M@2.5GHz [28c/205W]) 3TB 448 iris-[191-196] 6 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 168 +24 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 32GB +122880 ================================================================================== *** TOTAL: 196 nodes, 5824 cores + 491520 CUDA cores + 61440 Tensor cores *** Fast interconnect using InfiniBand EDR 100 Gb/s technology Shared Storage (raw capacity): 2180 TB (GPFS) + 1300 TB (Lustre) = 3480 TB Support (in this order!) Platform notifications - User DOC ........ https://hpc.uni.lu/docs - Twitter: @ULHPC - FAQ ............. https://hpc.uni.lu/faq - Mailing-list .... hpc-users@uni.lu - Bug reports .NEW. https://hpc.uni.lu/support (Service Now) - Admins .......... hpc-team@uni.lu (OPEN TICKETS) ================================================================================== /!\\ NEVER COMPILE OR RUN YOUR PROGRAMS FROM THIS FRONTEND ! First reserve your nodes (using srun/sbatch(1)) Linux access2.iris-cluster.uni.lux 3.10.0-957.21.3.el7.x86_64 x86_64 15:51:56 up 6 days, 2:32, 39 users, load average: 0.59, 0.68, 0.54 [yourlogin@access2 ~]$ Activate the SSH agent \u00b6 To be able to use your SSH key in a public-key authentication scheme, it must be loaded by an SSH agent . Mac OS X (>= 10.5) , this will be handled automatically; you will be asked to fill in the passphrase on the first connection. Linux , this will be handled automatically; you will be asked to fill the passphrase on the first connection. 
However if you get a message similar to the following: (your_workstation)$> ssh -vv iris-cluster [...] Agent admitted failure to sign using the key. Permission denied (publickey). This means that you have to manually load your key in the SSH agent by running: $> ssh-add ~/.ssh/id_rsa SSH Resources \u00b6 Mac OS X : Cyberduck is a free Cocoa FTP and SFTP client. Linux : OpenSSH is available in every good linux distro, and every *BSD, and Mac OS X. SSH Advanced Tips \u00b6 Bash completion : The bash-completion package eases the ssh command usage by providing completion for hostnames and more (assuming you set the directive HashKnownHost to no in your ~/etc/ssh_config ) Forwarding a local port : You can forward a local port to a host behind a firewall. This is useful if you run a server on one of the cluster nodes (let's say listening on port 2222) and you want to access it via the local port 1111 on your machine. Then you'll run: (your_workstation)$> ssh iris-cluster -L 1111:iris-014:2222 Forwarding a remote port : You can forward a remote port back to a host protected by your firewall. Tunnelling for others : By using the -g parameter, you allow connections from other hosts than localhost to use your SSH tunnels. Be warned that anybody within your network may access the tunnelized host this way, which may be a security issue. Using OpenSSH SOCKS proxy feature (with Firefox for instance) : the OpenSSH ssh client also embeds a SOCKS proxy. You may activate it by using the -D parameter and a value for a port ( e.g. 3128), then configuring your application (Firefox for instance) to use localhost:port (i.e. \"localhost:3128\") as a SOCKS proxy. The FoxyProxy module is typically useful for that. One very nice feature of FoxyProxy is that you can use the host resolution on the remote server. This permits you to access your local machine within the university for instance with the same name you would use within the UL network. To summarize, that's better than the VPN proxy ;) Once you setup a SSH SOCKS proxy, you can also use tsocks , a Shell wrapper to simplify the use of the tsocks(8) library to transparently allow an application (not aware of SOCKS) to transparently use a SOCKS proxy. For instance, assuming you create a VNC server on a given remote server as follows: (remote_server)$> vncserver -geometry 1366x768 New 'X' desktop is remote_server:1 Starting applications specified in /home/username/.vnc/xstartup Log file is /home/username/.vnc/remote_server:1.log Then you can make the VNC client on your workstation use this tunnel to access the VNS server as follows: (your_workstation)$> tsocks vncviewer :1 Escape character : use ~. to disconnect, even if your remote command hangs.","title":"Installation notes"},{"location":"connect/linux/#installation-notes","text":"Normally, SSH is installed natively on your machine and the ssh command should be accessible from the command line (or a Terminal) through the ssh command: (your_workstation)$> ssh -V OpenSSH_8.4p1, OpenSSL 1.1.1h 22 Sep 2020 If that's not the case, consider installing the package openssh-client (Debian-like systems) or ssh (Redhat-like systems). Your local SSH configuration is located in the ~/.ssh/ directory and consists of: ~/.ssh/id_rsa.pub : your SSH public key. This one is the only one SAFE to distribute. ~/.ssh/id_rsa : the associated private key. 
NEVER EVER TRANSMIT THIS FILE (eventually) the configuration of the SSH client ~/.ssh/config ~/.ssh/known_hosts : Contains a list of host keys for all hosts you have logged into that are not already in the system-wide list of known host keys. This permits to detect man-in-the-middle attacks.","title":"Installation notes"},{"location":"connect/linux/#ssh-key-management","text":"To generate an SSH keys, just use the ssh-keygen command, typically as follows: (your_workstation)$> ssh-keygen -t rsa -b 4096 Generating public/private rsa key pair. Enter file in which to save the key (/home/user/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: fe:e8:26:df:38:49:3a:99:d7:85:4e:c3:85:c8:24:5b username@yourworkstation The key's randomart image is: +---[RSA 4096]----+ | | | . E | | * . . | | . o . . | | S. o | | .. = . | | =.= o | | * ==o | | B=.o | +-----------------+ Warning To ensure the security of the platform and your data stored on it, you must protect your SSH keys with a passphrase! Additionally, your private key and passphrase should never be transmitted to anybody. After the execution of ssh-keygen command, the keys are generated and stored in the following files: SSH RSA Private key: ~/.ssh/id_rsa . Again, NEVER EVER TRANSMIT THIS FILE SSH RSA Public key: ~/.ssh/id_rsa.pub . This file is the ONLY one SAFE to distribute Ensure the access rights are correct on the generated keys using the ' ls -l ' command. The private key should be readable only by you: (your_workstation)$> ls -l ~/.ssh/id_* -rw------- 1 git git 751 Mar 1 20:16 /home/username/.ssh/id_rsa -rw-r--r-- 1 git git 603 Mar 1 20:16 /home/username/.ssh/id_rsa.pub","title":"SSH Key Management"},{"location":"connect/linux/#configuration","text":"In order to be able to login to the clusters, you will have to add this public key (i.e. id_rsa.pub ) into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail). The port on which the SSH servers are listening is not the default one ( i.e. 22) but 8022 . 
Consequently, if you want to connect to the Iris cluster, open a terminal and run (substituting yourlogin with the login name you received from us): (your_workstation)$> ssh -p 8022 yourlogin@access-iris.uni.lu For the Aion cluster, the access server host name is access-aion.uni.lu : (your_workstation)$> ssh -p 8022 yourlogin@access-aion.uni.lu Alternatively, you may want to save the configuration of this connection (and create an alias for it) by editing the file ~/.ssh/config (create it if it does not already exist) and adding the following entries: Host iris-cluster Hostname access-iris.uni.lu Host aion-cluster Hostname access-aion.uni.lu Host *-cluster User yourlogin Port 8022 ForwardAgent no Now you'll be able to issue the following (simpler) command to connect to the cluster and obtain the welcome banner: (your_workstation)$> ssh iris-cluster ================================================================================== Welcome to access2.iris-cluster.uni.lux ================================================================================== _ ____ / \\ ___ ___ ___ ___ ___|___ \\ / _ \\ / __/ __/ _ \\/ __/ __| __) | / ___ \\ (_| (_| __/\\__ \\__ \\/ __/ /_/ \\_\\___\\___\\___||___/___/_____| _____ _ ____ _ _ __ / /_ _|_ __(_)___ / ___| |_ _ ___| |_ ___ _ _\\ \\ | | | || '__| / __| | | | | | | / __| __/ _ \\ '__| | | | | || | | \\__ \\ | |___| | |_| \\__ \\ || __/ | | | | ||___|_| |_|___/ \\____|_|\\__,_|___/\\__\\___|_| | | \\_\\ /_/ ================================================================================== === Computing Nodes ========================================= #RAM/n === #Cores == iris-[001-108] 108 Dell C6320 (2 Xeon E5-2680v4@2.4GHz [14c/120W]) 128GB 3024 iris-[109-168] 60 Dell C6420 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 128GB 1680 iris-[169-186] 18 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 504 +72 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 16GB +368640 iris-[187-190] 4 Dell R840 (4 Xeon Platin.8180M@2.5GHz [28c/205W]) 3TB 448 iris-[191-196] 6 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 168 +24 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 32GB +122880 ================================================================================== *** TOTAL: 196 nodes, 5824 cores + 491520 CUDA cores + 61440 Tensor cores *** Fast interconnect using InfiniBand EDR 100 Gb/s technology Shared Storage (raw capacity): 2180 TB (GPFS) + 1300 TB (Lustre) = 3480 TB Support (in this order!) Platform notifications - User DOC ........ https://hpc.uni.lu/docs - Twitter: @ULHPC - FAQ ............. https://hpc.uni.lu/faq - Mailing-list .... hpc-users@uni.lu - Bug reports .NEW. https://hpc.uni.lu/support (Service Now) - Admins .......... hpc-team@uni.lu (OPEN TICKETS) ================================================================================== /!\\ NEVER COMPILE OR RUN YOUR PROGRAMS FROM THIS FRONTEND ! First reserve your nodes (using srun/sbatch(1)) Linux access2.iris-cluster.uni.lux 3.10.0-957.21.3.el7.x86_64 x86_64 15:51:56 up 6 days, 2:32, 39 users, load average: 0.59, 0.68, 0.54 [yourlogin@access2 ~]$","title":"Configuration"},{"location":"connect/linux/#activate-the-ssh-agent","text":"To be able to use your SSH key in a public-key authentication scheme, it must be loaded by an SSH agent . Mac OS X (>= 10.5) , this will be handled automatically; you will be asked to fill in the passphrase on the first connection. Linux , this will be handled automatically; you will be asked to fill the passphrase on the first connection. 
However if you get a message similar to the following: (your_workstation)$> ssh -vv iris-cluster [...] Agent admitted failure to sign using the key. Permission denied (publickey). This means that you have to manually load your key in the SSH agent by running: $> ssh-add ~/.ssh/id_rsa","title":"Activate the SSH agent"},{"location":"connect/linux/#ssh-resources","text":"Mac OS X : Cyberduck is a free Cocoa FTP and SFTP client. Linux : OpenSSH is available in every good linux distro, and every *BSD, and Mac OS X.","title":"SSH Resources"},{"location":"connect/linux/#ssh-advanced-tips","text":"Bash completion : The bash-completion package eases the ssh command usage by providing completion for hostnames and more (assuming you set the directive HashKnownHost to no in your ~/etc/ssh_config ) Forwarding a local port : You can forward a local port to a host behind a firewall. This is useful if you run a server on one of the cluster nodes (let's say listening on port 2222) and you want to access it via the local port 1111 on your machine. Then you'll run: (your_workstation)$> ssh iris-cluster -L 1111:iris-014:2222 Forwarding a remote port : You can forward a remote port back to a host protected by your firewall. Tunnelling for others : By using the -g parameter, you allow connections from other hosts than localhost to use your SSH tunnels. Be warned that anybody within your network may access the tunnelized host this way, which may be a security issue. Using OpenSSH SOCKS proxy feature (with Firefox for instance) : the OpenSSH ssh client also embeds a SOCKS proxy. You may activate it by using the -D parameter and a value for a port ( e.g. 3128), then configuring your application (Firefox for instance) to use localhost:port (i.e. \"localhost:3128\") as a SOCKS proxy. The FoxyProxy module is typically useful for that. One very nice feature of FoxyProxy is that you can use the host resolution on the remote server. This permits you to access your local machine within the university for instance with the same name you would use within the UL network. To summarize, that's better than the VPN proxy ;) Once you setup a SSH SOCKS proxy, you can also use tsocks , a Shell wrapper to simplify the use of the tsocks(8) library to transparently allow an application (not aware of SOCKS) to transparently use a SOCKS proxy. For instance, assuming you create a VNC server on a given remote server as follows: (remote_server)$> vncserver -geometry 1366x768 New 'X' desktop is remote_server:1 Starting applications specified in /home/username/.vnc/xstartup Log file is /home/username/.vnc/remote_server:1.log Then you can make the VNC client on your workstation use this tunnel to access the VNS server as follows: (your_workstation)$> tsocks vncviewer :1 Escape character : use ~. to disconnect, even if your remote command hangs.","title":"SSH Advanced Tips"},{"location":"connect/ood/","text":"ULHPC Open On Demand (OOD) Portal \u00b6 Open OnDemand (OOD) is a Web portal compatible with Windows, Linux and MacOS. You should login with your ULHPC credential using the URL communicated to you by the UL HPC team. OOD provides a convenient web access to the HPC resources and integrates a file management system a job management system (job composer, monitoring your submitted jobs, ...) an interactive command-line shell access interactive apps with graphical desktop environments ULHPC OOD Portal limitations The ULHPC OOD portal is NOT accessible outside the UniLu network. 
If you want to use it, you will need to set up a VPN to access the UniLu network Note : The portal is still under active development: missing features and bugs can be reported to the ULHPC team via the support portal Live tests and demos are proposed during the ULHPC Tutorial: Preliminaries / OOD . Below are illustrations of OOD capabilities on the ULHPC facility. File management \u00b6 Job composer and Job List \u00b6 Shell access \u00b6 Interactive sessions \u00b6 Graphical Desktop Environment \u00b6","title":"Open On Demand Portal"},{"location":"connect/ood/#ulhpc-open-on-demand-ood-portal","text":"Open OnDemand (OOD) is a Web portal compatible with Windows, Linux and MacOS. You should log in with your ULHPC credentials using the URL communicated to you by the UL HPC team. OOD provides convenient web access to the HPC resources and integrates a file management system a job management system (job composer, monitoring your submitted jobs, ...) an interactive command-line shell access interactive apps with graphical desktop environments ULHPC OOD Portal limitations The ULHPC OOD portal is NOT accessible outside the UniLu network. If you want to use it, you will need to set up a VPN to access the UniLu network Note : The portal is still under active development: missing features and bugs can be reported to the ULHPC team via the support portal Live tests and demos are proposed during the ULHPC Tutorial: Preliminaries / OOD . Below are illustrations of OOD capabilities on the ULHPC facility.","title":"ULHPC Open On Demand (OOD) Portal"},{"location":"connect/ood/#file-management","text":"","title":"File management"},{"location":"connect/ood/#job-composer-and-job-list","text":"","title":"Job composer and Job List"},{"location":"connect/ood/#shell-access","text":"","title":"Shell access"},{"location":"connect/ood/#interactive-sessions","text":"","title":"Interactive sessions"},{"location":"connect/ood/#graphical-desktop-environment","text":"","title":"Graphical Desktop Environment"},{"location":"connect/ssh/","text":"SSH \u00b6 All ULHPC servers are reached using the Secure Shell (SSH) communication and encryption protocol (version 2). Developed by SSH Communications Security Ltd. , Secure Shell is an encrypted network protocol used to log into another computer over an unsecured network, to execute commands in a remote machine, and to move files from one machine to another in a secure way. On UNIX/LINUX/BSD type systems, SSH is also the name of a suite of software applications for connecting via the SSH protocol. The SSH applications can execute commands on a remote machine and transfer files from one machine to another. All communications are automatically and transparently encrypted, including passwords. Most versions of SSH provide login ( ssh , slogin ), a remote copy operation ( scp ), and many also provide a secure ftp client ( sftp ). Additionally, SSH allows secure X Window connections. To use SSH, you have to generate a pair of keys, one public and the other private . The public key authentication is the most secure and flexible approach to ensure a multi-purpose transparent connection to a remote server. This approach is enforced on the ULHPC platforms and assumes that the public key is known by the system in order to perform an authentication based on a challenge/response protocol instead of the classical password-based protocol.
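If you are curious, you can observe this challenge/response exchange by running the SSH client in verbose mode (a minimal illustration, assuming your public key is already registered as described below): ssh -v -p 8022 yourlogin@access-iris.uni.lu The debug output shows your public key being offered and the authentication completing with the publickey method instead of a password.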
The way SSH handles the keys and the configuration files is illustrated in the following figure: Installation \u00b6 OpenSSH is natively supported on Linux / Mac OS / Unix / WSL (see below) On Windows, you are thus encouraged to install Windows Subsystem for Linux (WSL) and setup an Ubuntu subsystem from Microsoft Store . You probably want to also install Windows Terminal and MobaXterm Better performance of your Linux subsystem can be obtained by migrating to WSL 2 Follow the ULHPC Tutorial: Setup Pre-Requisites / Windows for detailed instructions. SSH Key Generation \u00b6 To generate an RSA SSH key pair of 4096-bit length , just use the ssh-keygen command as follows : ssh-keygen -t rsa -b 4096 -a 100 After the execution of this command, the generated keys are stored in the following files: SSH RSA Private key: ~/.ssh/id_rsa . NEVER EVER TRANSMIT THIS FILE SSH RSA Public key: ~/.ssh/id_rsa.pub . This file is the ONLY one SAFE to distribute To passphrase or not to passphrase To ensure the security of your SSH key-pair on your laptop, you MUST protect your SSH keys with a passphrase! Note however that while possible, this passphrase is purely private and has a priori nothing to do with your University or your ULHPC credentials. Nevertheless, a strong passphrase follows the same recommendations as for strong passwords (for instance: see password requirements and guidelines ). Finally, just like encryption keys, passphrases need to be kept safe and protected from unauthorised access. A Password Manager can help you to store all your passwords safely. The University is currently not offering a university wide password manager but there are many free and paid ones you can use, for example: KeePassX , PWSafe , Dashlane , 1Password or LastPass . You may also want to generate ED25519 Key Pairs (which is the most recommended public-key algorithm available today) -- see explanation ssh-keygen -t ed25519 -a 100 Your key pairs will be located under ~/.ssh/ and follow the format below -- the .pub extension indicates the public key part and is the ONLY one SAFE to distribute : $ ls -l ~/.ssh/id_* -rw------- username groupname ~/.ssh/id_rsa -rw-r--r-- username groupname ~/.ssh/id_rsa.pub # Public RSA key -rw------- username groupname ~/.ssh/id_ed25519 -rw-r--r-- username groupname ~/.ssh/id_ed25519.pub # Public ED25519 key Ensure the access rights are correct on the generated keys using the ' ls -l ' command. In particular, the private key should be readable only by you: For more details, follow the ULHPC Tutorials: Preliminaries / SSH . (deprecated - Windows only): SSH key management with MobaKeyGen tool On Windows with MobaXterm , a tool exists and can be used to generate an SSH key pair. While not recommended (we encourage you to run WSL), here are the instructions to follow to generate these keys: Open the application Start > Program Files > MobaXterm . Change the default home directory for a persistent home directory instead of the default Temp directory. Go onto Settings > Configuration > General > Persistent home directory . choose a location for your home directory. your local SSH configuration will be located under HOME/.ssh/ Go onto Tools > Network > MobaKeyGen (SSH key generator) . Choose RSA as the type of key to generate and change \"Number of bits in a generated key\" to 4096. Click on the Generate button. Move your mouse to generate some randomness. Select a strong passphrase in the Key passphrase field for your key.
Save the public and private keys as respectively id_rsa.pub and id_rsa.ppk . Please keep a copy of the public key, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail). (deprecated - Windows only): SSH key management with PuTTY While no longer recommended, you may still want to use Putty and the associated tools, more precisely: PuTTY , the free SSH client Pageant , an SSH authentication agent for PuTTY tools PuTTYgen , an RSA key generation utility PSCP , an SCP (file transfer) client, i.e. command-line secure file copy WinSCP , SCP/SFTP (file transfer) client with easy-to-use graphical interface The different steps involved in the installation process are illustrated below ( REMEMBER to tick the option \"Associate .PPK files (PuTTY Private Key) with Pageant and PuTTYGen\" ): Now you can use the PuTTYgen utility to generate an RSA key pair. The main steps for the generation of the keys are illustrated below (yet with 4096 bits instead of 2048): Save the public and private keys as respectively id_rsa.pub and id_rsa.ppk . Please keep a copy of the public key, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail). Password-less logins and transfers \u00b6 Password based authentication is disabled on all ULHPC servers. You can only use public-key authentication . This assumes that you upload your public SSH keys *.pub to your user entry on the ULHPC Identity Management Portal . Consult the associated documentation to discover how to do it. Once done, you can connect by SSH to the ULHPC clusters. Note that the port on which the SSH servers are listening is not the default SSH one ( i.e. 22) but 8022 . Consequently, if you want to connect to the Iris cluster, open a terminal and run (substituting yourlogin with the login name you received from us): Iris # ADAPT 'yourlogin' accordingly ssh -p 8022 yourlogin@access-iris.uni.lu Aion # ADAPT 'yourlogin' accordingly ssh -p 8022 yourlogin@access-aion.uni.lu Of course, we advise you to setup your SSH configuration to avoid typing this detailed command. This is explained in the next section. SSH Configuration \u00b6 On Linux / Mac OS / Unix / WSL, your SSH configuration is defined in ~/.ssh/config . As recommended in the ULHPC Tutorials: Preliminaries / SSH , you probably want to create the following configuration to easiest further access and data transfers: # ~/.ssh/config -- SSH Configuration # Common options Host * Compression yes ConnectTimeout 15 # ULHPC Clusters Host iris-cluster Hostname access-iris.uni.lu Host aion-cluster Hostname access-aion.uni.lu # /!\\ ADAPT 'yourlogin' accordingly Host *-cluster User yourlogin Port 8022 ForwardAgent no You should now be able to connect as follows Iris ssh iris-cluster Aion ssh aion-cluster (Windows only) Remote session configuration with MobaXterm This part of the documentation comes from MobaXterm documentation page MobaXterm allows you to launch remote sessions. You just have to click on the \"Sessions\" button to start a new session. Select SSH session on the second screen. Enter the following parameters: Remote host: access-iris.uni.lu (repeat with access-aion.uni.lu ) Check the Specify username box Username: yourlogin Adapt to match the one that was sent to you in the Welcome e-mail once your HPC account was created Port: 8022 Go in Advanced SSH settings and check the Use private key box. 
Select your previously generated key id_rsa.ppk . You can now click on Connect and enjoy. (deprecated - Windows only) - Remote session configuration with PuTTY If you want to connect to one of the ULHPC cluster, open Putty and enter the following settings: In Category:Session : Host Name: access-iris.uni.lu (or access-aion.uni.lu if you want to access Aion) Port: 8022 Connection Type: SSH (leave as default) In Category:Connection:Data : Auto-login username: yourlogin Adapt to match the one that was sent to you in the Welcome e-mail once your HPC account was created In Category:SSH:Auth : Upload your private key: Options controlling SSH authentication Click on Open button. If this is the first time connecting to the server from this computer a Putty Security Alert will appear. Accept the connection by clicking Yes . You should now be logged into the selected ULHPC login node . Now you probably want want to save the configuration of this connection : Go onto the Session category. Enter the settings you want to save. Enter a name in the Saved session field (for example Iris for access to Iris cluster). Click on the Save button. Next time you want to connect to the cluster, click on Load button and Open to open a new connection. SSH Agent \u00b6 On your laptop \u00b6 To be able to use your SSH key in a public-key authentication scheme, it must be loaded by an SSH agent . Mac OS X (>= 10.5) , this will be handled automatically; you will be asked to fill in the passphrase on the first connection. Linux , this will be handled automatically; you will be asked to fill the passphrase on the first connection. However if you get a message similar to the following: ( laptop ) $> ssh -vv iris-cluster [ ... ] Agent admitted failure to sign using the key. Permission denied ( publickey ) . This means that you have to manually load your key in the SSH agent by running: ( laptop ) $> ssh-add ~/.ssh/id_rsa Enter passphrase for ~/.ssh/id_rsa: # <-- enter your passphrase here Identity added: ~/.ssh/id_rsa ( @ ) ( laptop ) $> ssh-add ~/.ssh/id_ed25519 Enter passphrase for ~/.ssh/id_ed25519: # <-- enter your passphrase here Identity added: ~/.ssh/id_ed25519 ( @ ) On Ubuntu/WSL , if you experience issues when using ssh-add , you should install the keychain package and use it as follows (eventually add it to your ~/.profile ): # Installation ( laptop ) $> sudo apt install keychain # Save your passphrase /usr/bin/keychain --nogui ~/.ssh/id_ed25519 # (eventually) repeat with ~/.ssh/id_rsa # Load the agent in your shell source ~/.keychain/ $( hostname ) -sh (Windows only) SSH Agent within MobaXterm Go in Settings > SSH Tab In SSH agents section, check Use internal SSH agent \"MobAgent\" Click on the + button on the right Select your private key file. If you have several keys, you can add them by doing steps above again. Click on \"Show keys currently loaded in MobAgent\". An advertisement window may appear asking if you want to run MobAgent. Click on \"Yes\". Check that your key(s) appears in the window. Close the window. Click on OK . Restart MobaXterm. (deprecated - Windows only) - SSH Agent with PuTTY Pageant To be able to use your PuTTY key in a public-key authentication scheme, it must be loaded by an SSH agent . You should run Pageant for that. To load your SSH key in Pageant: Right-click on the pageant icon in the system tray, click on the Add key menu item select the private key file you saved while running puttygen.exe i.e. `` click on the Open button: a new dialog will pop up and ask for your passphrase. 
Once your passphrase is entered, your key will be loaded in pageant, enabling you to connect with Putty. On ULHPC clusters \u00b6 For security reasons, SSH agent forwarding is prohibited and explicitly disabled (see the ForwardAgent no directive set by default in the above configuration ). You may need to manually load an agent once connected on the ULHPC facility, for instance if you are tired of typing the passphrase of an SSH key generated on the cluster to access a remote (private) service. You need to proceed as follows: $ eval \" $( ssh-agent ) \" # export the SSH_AUTH_SOCK and SSH_AGENT_PID variables $ ssh-add ~/.ssh/id_rsa # [...] Enter passphrase for [ ... ] Identity added: ~/.ssh/id_rsa ( @ ) You can then enjoy it. Be aware however that this exposes your private key. So you MUST properly kill your agent when you don't need it any more, using $ eval \" $( ssh-agent -k ) \" Agent pid killed Key fingerprints \u00b6 ULHPC may occasionally update the host keys on the major systems. Check here to confirm the current fingerprints. Iris With regards access-iris.uni.lu : 256 SHA256:tkhRD9IVo04NPw4OV/s2LSKEwe54LAEphm7yx8nq1pE /etc/ssh/ssh_host_ed25519_key.pub (ED25519) 2048 SHA256:WDWb2hh5uPU6RgaSotxzUe567F3scioJWy+9iftVmhI /etc/ssh/ssh_host_rsa_key.pub (RSA) Aion With regards access-aion.uni.lu : 256 SHA256:jwbW8pkfCzXrh1Xhf9n0UI+7hd/YGi4FlyOE92yxxe0 [access-aion.uni.lu]:8022 (ED25519) 3072 SHA256:L9n2gT6aV9KGy0Xdh1ks2DciE9wFz7MDRBPGWPFwFK4 [access-aion.uni.lu]:8022 (RSA) Get SSH key fingerprint The ssh fingerprints can be obtained via: ssh-keygen -lf <(ssh-keyscan -t rsa,ed25519 $(hostname) 2>/dev/null) Putty key fingerprint format Depending on the ssh client you use to connect to ULHPC systems, you may see different key fingerprints. For example, Putty uses a different format of fingerprints as follows: access-iris.uni.lu ssh-ed25519 255 4096 07:6a:5f:11:df:d4:3f:d4:97:98:12:69:3a:63:70:2f You may see the following warning when connecting to the ULHPC clusters with Putty, but it is safe to ignore. PuTTY Security Alert The server's host key is not cached in the registry. You have no guarantee that the server is the computer you think it is. The server's ssh-ed25519 key fingerprint is: ssh-ed25519 255 4096 07:6a:5f:11:df:d4:3f:d4:97:98:12:69:3a:63:70:2f If you trust this host, hit Yes to add the key to PuTTY's cache and carry on connecting. If you want to carry on connecting just once, without adding the key to the cache, hit No. If you do not trust this host, hit Cancel to abandon the connection. Host Keys \u00b6 These are the entries in ~/.ssh/known_hosts . Iris The known host SSH entry for the Iris cluster should be as follows: [access-iris.uni.lu]:8022 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOP1eF8uJ37h5jFQQShn/NHRGD/d8KsMMUTHkoPRANLn Aion The known host SSH entry for the Aion cluster should be as follows: [access-aion.uni.lu]:8022 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFmcYJ7T6A1wOvIQaohgwVCrKLqIrzpQZAZrlEKx8Vsy Troubleshooting \u00b6 See the corresponding section . Advanced SSH Tips and Tricks \u00b6 CLI Completion \u00b6 The bash-completion package eases the ssh command usage by providing completion for hostnames and more (assuming you set the directive HashKnownHosts to no in your ~/.ssh/config ) SOCKS 5 Proxy plugin \u00b6 Many Data Analytics frameworks involve a web interface (at the level of the master and/or the workers) you probably want to access in a relatively transparent way.
For that, a convenient way is to rely on a SOCKS proxy, which is basically an SSH tunnel in which specific applications forward their traffic down the tunnel to the server, and then on the server end, the proxy forwards the traffic out to the general Internet. Unlike a VPN, a SOCKS proxy has to be configured on an app by app basis on the client machine, but can be set up without any specialty client agents. The general principle is depicted below. Setting Up the Tunnel \u00b6 To initiate such a SOCKS proxy using SSH (listening on localhost:1080 for instance), you simply need to use the -D 1080 command line option when connecting to a remote server: Iris ssh -D 1080 -C iris-cluster Aion ssh -D 1080 -C aion-cluster -D : Tells SSH that we want a SOCKS tunnel on the specified port number (you can choose a number between 1025-65536) -C : Compresses the data before sending it FoxyProxy [Firefox] Extension \u00b6 Now that you have an SSH tunnel, it's time to configure your web browser (recommended: Firefox) to use that tunnel. In particular, install the Foxy Proxy extension for Firefox and configure it to use your SOCKS proxy: Right click on the fox icon, Select Options Add a new proxy button Name: ULHPC proxy Informations > Manual configuration Host IP: 127.0.0.1 Port: 1080 Check the Proxy SOCKS Option Click on OK Close Open a new tab Click on the Fox Choose the ULHPC proxy disable it when you no longer need it. You can now access any web interface deployed on any service reachable from the SSH jump host i.e. the ULHPC login node. Using tsock \u00b6 Once you set up an SSH SOCKS proxy, you can also use tsocks , a Shell wrapper to simplify the use of the tsocks(8) library to allow an application (not aware of SOCKS) to transparently use a SOCKS proxy. For instance, assuming you create a VNC server on a given remote server as follows: (remote_server) $ > vncserver -geometry 1366x768 New 'X' desktop is remote_server:1 Starting applications specified in /home/username/.vnc/xstartup Log file is /home/username/.vnc/remote_server:1.log Then you can make the VNC client on your workstation use this tunnel to access the VNC server as follows: (laptop) $ > tsocks vncviewer :1 tsock Escape character Use ~. to disconnect, even if your remote command hangs. SSH Port Forwarding \u00b6 Forwarding a local port \u00b6 You can forward a local port to a host behind a firewall. This is useful if you run a server on one of the cluster nodes (let's say listening on port 2222) and you want to access it via the local port 1111 on your machine. Then you'll run: # Here targeting iris cluster ( laptop ) $ ssh iris-cluster -L 1111 :iris-014:2222 Forwarding a remote port \u00b6 You can forward a remote port back to a host protected by your firewall. This is useful when you want the HPC node to access some local service. For instance, if your local machine runs a service listening on some local port, say 2222, and you want the HPC node to be able to reach it on some port, say 1111, then you'll run: # Here targeting the iris cluster ( local machine ) $ ssh iris-cluster -R 1111 : $( hostname -i ) :2222 Tunnelling for others \u00b6 By using the -g parameter, you allow connections from other hosts than localhost to use your SSH tunnels. Be warned that anybody within your network may access the tunnelized host this way, which may be a security issue. SSH jumps \u00b6 Compute nodes are not directly accessible from the outside network.
To log into a cluster node you will need to jump through a login node. Remember, you need a job running in a node before you can ssh into it. Assume that you have some job running on aion-0014 for instance. Then, connect to aion-0014 with: ssh -J ${ USER } @access-aion.uni.lu:8022 ${ USER } @aion-0014 The domain resolution in the login node will determine the IP of the aion-0014 . You can always use the IP address of the node directly if you know it. Passwordless SSH jumps \u00b6 The ssh agent is not configured in the login nodes for security reasons. As a result, compute nodes will request your password. To configure a passwordless jump to a compute node, you will need to install the same key in the SSH configuration of both your local machine and the login node. To avoid exposing the existing keys of your personal machine, create and share a new, dedicated key. Create a key on your local machine, ssh-keygen -a 127 -t ed25519 -f ~/.ssh/ulhpc_id_ed25519 and then copy both the private and public keys to your HPC account, scp ~/.ssh/ulhpc_id_ed25519* aion-cluster:~/.ssh/ where the command assumes that you have setup your SSH configuration file . Finally, add the key to the list of authorized keys: ssh-copy-id -i ~/.ssh/ulhpc_id_ed25519 aion-cluster Then you can connect without a password to any compute node at which you have a job running with the command: ssh -i ~/.ssh/ulhpc_id_ed25519 -J ${ USER } @access-aion.uni.lu:8022 ${ USER } @ In the option you can use the node IP address or the node name. Port forwarding over SSH jumps \u00b6 You can combine the jump command with other options, such as port forwarding , for instance to access from your local machine a web server running on a compute node. Assume for instance that you have a server running on iris-014 that listens on the IP 127.0.0.1 and port 2222 , and that you would like to forward the remote port 2222 to port 1111 of your local machine. Then, call the port forwarding command with a jump through the login node: ssh -J iris-cluster -L 1111 :127.0.0.1:2222 @iris-014 This command can be combined with passwordless access to the cluster node. Extras Tools around SSH \u00b6 Assh - Advanced SSH config is a transparent wrapper that makes ~/.ssh/config easier to manage support for templates , aliases , defaults , inheritance etc. gateways : transparent ssh connection chaining more flexible command-line. Ex : Connect to hosta using hostb as a gateway $ ssh hosta/hostb drastically simplify your SSH config Linux / Mac OS only ClusterShell : clush , nodeset (or cluset), light, unified, robust command execution framework well-suited to ease daily administrative tasks of Linux clusters. using tools like clush and nodeset efficient, parallel, scalable command execution engine in Python provides a unified node groups syntax and external group access see nodeset and the NodeSet class DSH - Distributed / Dancer's Shell sshuttle , \" where transparent proxy meets VPN meets ssh \"","title":"SSH"},{"location":"connect/ssh/#ssh","text":"All ULHPC servers are reached using the Secure Shell (SSH) communication and encryption protocol (version 2). Developed by SSH Communications Security Ltd. , Secure Shell is an encrypted network protocol used to log into another computer over an unsecured network, to execute commands in a remote machine, and to move files from one machine to another in a secure way. On UNIX/LINUX/BSD type systems, SSH is also the name of a suite of software applications for connecting via the SSH protocol.
The SSH applications can execute commands on a remote machine and transfer files from one machine to another. All communications are automatically and transparently encrypted, including passwords. Most versions of SSH provide login ( ssh , slogin ), a remote copy operation ( scp ), and many also provide a secure ftp client ( sftp ). Additionally, SSH allows secure X Window connections. To use SSH, you have to generate a pair of keys, one public and the other private . The public key authentication is the most secure and flexible approach to ensure a multi-purpose transparent connection to a remote server. This approach is enforced on the ULHPC platforms and assumes that the public key is known by the system in order to perform an authentication based on a challenge/response protocol instead of the classical password-based protocol. The way SSH handles the keys and the configuration files is illustrated in the following figure:","title":"SSH"},{"location":"connect/ssh/#installation","text":"OpenSSH is natively supported on Linux / Mac OS / Unix / WSL (see below) On Windows, you are thus encouraged to install Windows Subsystem for Linux (WSL) and setup an Ubuntu subsystem from Microsoft Store . You probably want to also install Windows Terminal and MobaXterm Better performance of your Linux subsystem can be obtained by migrating to WSL 2 Follow the ULHPC Tutorial: Setup Pre-Requisites / Windows for detailed instructions.","title":"Installation"},{"location":"connect/ssh/#ssh-key-generation","text":"To generate an RSA SSH keys of 4096-bit length , just use the ssh-keygen command as follows : ssh-keygen -t rsa -b 4096 -a 100 After the execution of this command, the generated keys are stored in the following files: SSH RSA Private key: ~/.ssh/id_rsa . NEVER EVER TRANSMIT THIS FILE SSH RSA Public key: ~/.ssh/id_rsa.pub . This file is the ONLY one SAFE to distribute To passphrase or not to passphrase To ensure the security of your SSH key-pair on your laptop, you MUST protect your SSH keys with a passphrase! Note however that while possible, this passphrase is purely private and has a priori nothing to do with your University or your ULHPC credentials. Nevertheless, a strong passphrase follows the same recommendations as for strong passwords (for instance: see password requirements and guidelines . Finally, just like encryption keys, passphrases need to be kept safe and protected from unauthorised access. A Password Manager can help you to store all your passwords safely. The University is currently not offering a university wide password manger but there are many free and paid ones you can use, for example: KeePassX , PWSafe , Dashlane , 1Password or LastPass . You may want to generate also ED25519 Key Pairs (which is the most recommended public-key algorithm available today) -- see explaination ssh-keygen -t ed25519 -a 100 Your key pairs will be located under ~/.ssh/ and follow the following format -- the .pub extension indicated the public key part and is the ONLY one SAFE to distribute : $ ls -l ~/.ssh/id_* -rw------- username groupname ~/.ssh/id_rsa -rw-r--r-- username groupname ~/.ssh/id_rsa.pub # Public RSA key -rw------- username groupname ~/.ssh/id_ed25519 -rw-r--r-- username groupname ~/.ssh/id_ed25519.pub # Public ED25519 key Ensure the access rights are correct on the generated keys using the ' ls -l ' command. In particular, the private key should be readable only by you: For more details, follow the ULHPC Tutorials: Preliminaries / SSH . 
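If these access rights got altered, a minimal way to restore them is shown below (assuming the default key names; adapt the list to the keys you actually generated): chmod 700 ~/.ssh chmod 600 ~/.ssh/id_rsa ~/.ssh/id_ed25519 chmod 644 ~/.ssh/id_rsa.pub ~/.ssh/id_ed25519.pub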
(deprecated - Windows only): SSH key management with MobaKeyGen tool On Windows with MobaXterm , a tool exists and can be used to generate an SSH key pair. While not recommended (we encourage you to run WSL), here are the instructions to follow to generate these keys: Open the application Start > Program Files > MobaXterm . Change the default home directory for a persistent home directory instead of the default Temp directory. Go onto Settings > Configuration > General > Persistent home directory . choose a location for your home directory. your local SSH configuration will be located under HOME/.ssh/ Go onto Tools > Network > MobaKeyGen (SSH key generator) . Choose RSA as the type of key to generate and change \"Number of bits in a generated key\" to 4096. Click on the Generate button. Move your mouse to generate some randomness. Select a strong passphrase in the Key passphrase field for your key. Save the public and private keys as respectively id_rsa.pub and id_rsa.ppk . Please keep a copy of the public key, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail). (deprecated - Windows only): SSH key management with PuTTY While no longer recommended, you may still want to use Putty and the associated tools, more precisely: PuTTY , the free SSH client Pageant , an SSH authentication agent for PuTTY tools PuTTYgen , an RSA key generation utility PSCP , an SCP (file transfer) client, i.e. command-line secure file copy WinSCP , SCP/SFTP (file transfer) client with easy-to-use graphical interface The different steps involved in the installation process are illustrated below ( REMEMBER to tick the option \"Associate .PPK files (PuTTY Private Key) with Pageant and PuTTYGen\" ): Now you can use the PuTTYgen utility to generate an RSA key pair. The main steps for the generation of the keys are illustrated below (yet with 4096 bits instead of 2048): Save the public and private keys as respectively id_rsa.pub and id_rsa.ppk . Please keep a copy of the public key, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail).","title":"SSH Key Generation"},{"location":"connect/ssh/#password-less-logins-and-transfers","text":"Password based authentication is disabled on all ULHPC servers. You can only use public-key authentication . This assumes that you upload your public SSH keys *.pub to your user entry on the ULHPC Identity Management Portal . Consult the associated documentation to discover how to do it. Once done, you can connect by SSH to the ULHPC clusters. Note that the port on which the SSH servers are listening is not the default SSH one ( i.e. 22) but 8022 . Consequently, if you want to connect to the Iris cluster, open a terminal and run (substituting yourlogin with the login name you received from us): Iris # ADAPT 'yourlogin' accordingly ssh -p 8022 yourlogin@access-iris.uni.lu Aion # ADAPT 'yourlogin' accordingly ssh -p 8022 yourlogin@access-aion.uni.lu Of course, we advise you to setup your SSH configuration to avoid typing this detailed command. This is explained in the next section.","title":"Password-less logins and transfers"},{"location":"connect/ssh/#ssh-configuration","text":"On Linux / Mac OS / Unix / WSL, your SSH configuration is defined in ~/.ssh/config . 
As recommended in the ULHPC Tutorials: Preliminaries / SSH , you probably want to create the following configuration to easiest further access and data transfers: # ~/.ssh/config -- SSH Configuration # Common options Host * Compression yes ConnectTimeout 15 # ULHPC Clusters Host iris-cluster Hostname access-iris.uni.lu Host aion-cluster Hostname access-aion.uni.lu # /!\\ ADAPT 'yourlogin' accordingly Host *-cluster User yourlogin Port 8022 ForwardAgent no You should now be able to connect as follows Iris ssh iris-cluster Aion ssh aion-cluster (Windows only) Remote session configuration with MobaXterm This part of the documentation comes from MobaXterm documentation page MobaXterm allows you to launch remote sessions. You just have to click on the \"Sessions\" button to start a new session. Select SSH session on the second screen. Enter the following parameters: Remote host: access-iris.uni.lu (repeat with access-aion.uni.lu ) Check the Specify username box Username: yourlogin Adapt to match the one that was sent to you in the Welcome e-mail once your HPC account was created Port: 8022 Go in Advanced SSH settings and check the Use private key box. Select your previously generated key id_rsa.ppk . You can now click on Connect and enjoy. (deprecated - Windows only) - Remote session configuration with PuTTY If you want to connect to one of the ULHPC cluster, open Putty and enter the following settings: In Category:Session : Host Name: access-iris.uni.lu (or access-aion.uni.lu if you want to access Aion) Port: 8022 Connection Type: SSH (leave as default) In Category:Connection:Data : Auto-login username: yourlogin Adapt to match the one that was sent to you in the Welcome e-mail once your HPC account was created In Category:SSH:Auth : Upload your private key: Options controlling SSH authentication Click on Open button. If this is the first time connecting to the server from this computer a Putty Security Alert will appear. Accept the connection by clicking Yes . You should now be logged into the selected ULHPC login node . Now you probably want want to save the configuration of this connection : Go onto the Session category. Enter the settings you want to save. Enter a name in the Saved session field (for example Iris for access to Iris cluster). Click on the Save button. Next time you want to connect to the cluster, click on Load button and Open to open a new connection.","title":"SSH Configuration"},{"location":"connect/ssh/#ssh-agent","text":"","title":"SSH Agent"},{"location":"connect/ssh/#on-your-laptop","text":"To be able to use your SSH key in a public-key authentication scheme, it must be loaded by an SSH agent . Mac OS X (>= 10.5) , this will be handled automatically; you will be asked to fill in the passphrase on the first connection. Linux , this will be handled automatically; you will be asked to fill the passphrase on the first connection. However if you get a message similar to the following: ( laptop ) $> ssh -vv iris-cluster [ ... ] Agent admitted failure to sign using the key. Permission denied ( publickey ) . 
This means that you have to manually load your key in the SSH agent by running: ( laptop ) $> ssh-add ~/.ssh/id_rsa Enter passphrase for ~/.ssh/id_rsa: # <-- enter your passphrase here Identity added: ~/.ssh/id_rsa ( @ ) ( laptop ) $> ssh-add ~/.ssh/id_ed25519 Enter passphrase for ~/.ssh/id_ed25519: # <-- enter your passphrase here Identity added: ~/.ssh/id_ed25519 ( @ ) On Ubuntu/WSL , if you experience issues when using ssh-add , you should install the keychain package and use it as follows (eventually add it to your ~/.profile ): # Installation ( laptop ) $> sudo apt install keychain # Save your passphrase /usr/bin/keychain --nogui ~/.ssh/id_ed25519 # (eventually) repeat with ~/.ssh/id_rsa # Load the agent in your shell source ~/.keychain/ $( hostname ) -sh (Windows only) SSH Agent within MobaXterm Go in Settings > SSH Tab In SSH agents section, check Use internal SSH agent \"MobAgent\" Click on the + button on the right Select your private key file. If you have several keys, you can add them by doing steps above again. Click on \"Show keys currently loaded in MobAgent\". An advertisement window may appear asking if you want to run MobAgent. Click on \"Yes\". Check that your key(s) appears in the window. Close the window. Click on OK . Restart MobaXterm. (deprecated - Windows only) - SSH Agent with PuTTY Pageant To be able to use your PuTTY key in a public-key authentication scheme, it must be loaded by an SSH agent . You should run Pageant for that. To load your SSH key in Pageant: Right-click on the pageant icon in the system tray, click on the Add key menu item select the private key file you saved while running puttygen.exe i.e. `` click on the Open button: a new dialog will pop up and ask for your passphrase. Once your passphrase is entered, your key will be loaded in pageant, enabling you to connect with Putty.","title":"On your laptop"},{"location":"connect/ssh/#on-ulhpc-clusters","text":"For security reason, SSH agent forwarding is prohibited and explicitly disabled (see ForwardAgent no configuration by default in the above configuration , you may need to manually load an agent once connected on the ULHPC facility, for instance if you are tired of typing the passphrase of a SSH key generated on the cluster to access a remote (private) service. You need to proceed as follows: $ eval \" $( ssh-agent ) \" # export the SSH_AUTH_SOCK and SSH_AGENT_PID variables $ ssh-add ~/.ssh/id_rsa # [...] Enter passphrase for [ ... ] Identity added: ~/.ssh/id_rsa ( @ ) You can then enjoy it. Be aware however that this exposes your private key. So you MUST properly kill your agent when you don't need it any mode, using $ eval \" $( ssh-agent -k ) \" Agent pid killed","title":"On ULHPC clusters"},{"location":"connect/ssh/#key-fingerprints","text":"ULHPC may occasionally update the host keys on the major systems. Check here to confirm the current fingerprints. 
Iris With regards access-iris.uni.lu : 256 SHA256:tkhRD9IVo04NPw4OV/s2LSKEwe54LAEphm7yx8nq1pE /etc/ssh/ssh_host_ed25519_key.pub (ED25519) 2048 SHA256:WDWb2hh5uPU6RgaSotxzUe567F3scioJWy+9iftVmhI /etc/ssh/ssh_host_rsa_key.pub (RSA) Aion With regards access-aion.uni.lu : 256 SHA256:jwbW8pkfCzXrh1Xhf9n0UI+7hd/YGi4FlyOE92yxxe0 [access-aion.uni.lu]:8022 (ED25519) 3072 SHA256:L9n2gT6aV9KGy0Xdh1ks2DciE9wFz7MDRBPGWPFwFK4 [access-aion.uni.lu]:8022 (RSA) Get SSH key fingerprint The ssh fingerprints can be obtained via: ssh-keygen -lf <(ssh-keyscan -t rsa,ed25519 $(hostname) 2>/dev/null) Putty key fingerprint format Depending on the ssh client you use to connect to ULHPC systems, you may see different key fingerprints. For example, Putty uses different format of fingerprints as follows: access-iris.uni.lu ssh-ed25519 255 4096 07:6a:5f:11:df:d4:3f:d4:97:98:12:69:3a:63:70:2f You may see the following warning when connecting to Cori with Putty, but it is safe to ingore. PuTTY Security Alert The server's host key is not cached in the registry. You have no guarantee that the server is the computer you think it is. The server's ssh-ed25519 key fingerprint is: ssh-ed25519 255 4096 07:6a:5f:11:df:d4:3f:d4:97:98:12:69:3a:63:70:2f If you trust this host, hit Yes to add the key to PuTTY's cache and carry on connecting. If you want to carry on connecting just once, without adding the key to the cache, hit No. If you do not trust this host, hit Cancel to abandon the connection.","title":"Key fingerprints"},{"location":"connect/ssh/#host-keys","text":"These are the entries in ~/.ssh/known_hosts . Iris The known host SSH entry for the Iris cluster should be as follows: [access-iris.uni.lu]:8022 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOP1eF8uJ37h5jFQQShn/NHRGD/d8KsMMUTHkoPRANLn Aion The known host SSH entry for the Aion cluster should be as follows: [access-aion.uni.lu]:8022 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFmcYJ7T6A1wOvIQaohgwVCrKLqIrzpQZAZrlEKx8Vsy","title":"Host Keys"},{"location":"connect/ssh/#troubleshooting","text":"See the corresponding section .","title":"Troubleshooting"},{"location":"connect/ssh/#advanced-ssh-tips-and-tricks","text":"","title":"Advanced SSH Tips and Tricks"},{"location":"connect/ssh/#cli-completion","text":"The bash-completion package eases the ssh command usage by providing completion for hostnames and more (assuming you set the directive HashKnownHost to no in your ~/etc/ssh_config )","title":"CLI Completion"},{"location":"connect/ssh/#socks-5-proxy-plugin","text":"Many Data Analytics framework involves a web interface (at the level of the master and/or the workers) you probably want to access in a relative transparent way. For that, a convenient way is to rely on a SOCKS proxy, which is basically an SSH tunnel in which specific applications forward their traffic down the tunnel to the server, and then on the server end, the proxy forwards the traffic out to the general Internet. Unlike a VPN, a SOCKS proxy has to be configured on an app by app basis on the client machine, but can be set up without any specialty client agents. 
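Once such a tunnel is up (see the setup below), you can test it from another terminal with any SOCKS-aware client, for instance curl -- the target node and port here are purely illustrative: ( laptop ) $ curl --socks5-hostname localhost:1080 http://iris-014:8888/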
The general principle is depicted below.","title":"SOCKS 5 Proxy plugin"},{"location":"connect/ssh/#setting-up-the-tunnel","text":"To initiate such a SOCKS proxy using SSH (listening on localhost:1080 for instance), you simply need to use the -D 1080 command line option when connecting to a remote server: Iris ssh -D 1080 -C iris-cluster Aion ssh -D 1080 -C aion-cluster -D : Tells SSH that we want a SOCKS tunnel on the specified port number (you can choose a number between 1025-65536) -C : Compresses the data before sending it","title":"Setting Up the Tunnel"},{"location":"connect/ssh/#foxyproxy-firefox-extension","text":"Now that you have an SSH tunnel, it's time to configure your web browser (recommended: Firefox) to use that tunnel. In particular, install the Foxy Proxy extension for Firefox and configure it to use your SOCKS proxy: Right click on the fox icon, Select Options Add a new proxy button Name: ULHPC proxy Informations > Manual configuration Host IP: 127.0.0.1 Port: 1080 Check the Proxy SOCKS Option Click on OK Close Open a new tab Click on the Fox Choose the ULHPC proxy disable it when you no longer need it. You can now access any web interface deployed on any service reachable from the SSH jump host i.e. the ULHPC login node.","title":"FoxyProxy [Firefox] Extension"},{"location":"connect/ssh/#using-tsock","text":"Once you setup a SSH SOCKS proxy, you can also use tsocks , a Shell wrapper to simplify the use of the tsocks(8) library to transparently allow an application (not aware of SOCKS) to transparently use a SOCKS proxy. For instance, assuming you create a VNC server on a given remote server as follows: (remote_server) $ > vncserver -geometry 1366x768 New 'X' desktop is remote_server:1 Starting applications specified in /home/username/.vnc/xstartup Log file is /home/username/.vnc/remote_server:1.log Then you can make the VNC client on your workstation use this tunnel to access the VNS server as follows: (laptop) $ > tsocks vncviewer :1 tsock Escape character Use ~. to disconnect, even if your remote command hangs.","title":"Using tsock"},{"location":"connect/ssh/#ssh-port-forwarding","text":"","title":"SSH Port Forwarding"},{"location":"connect/ssh/#forwarding-a-local-port","text":"You can forward a local port to a host behind a firewall. This is useful if you run a server on one of the cluster nodes (let's say listening on port 2222) and you want to access it via the local port 1111 on your machine. Then you'll run: # Here targeting iris cluster ( laptop ) $ ssh iris-cluster -L 1111 :iris-014:2222","title":"Forwarding a local port"},{"location":"connect/ssh/#forwarding-a-remote-port","text":"You can forward a remote port back to a host protected by your firewall. This is useful when you want the HPC node to access some local service. For instance is your local machine runs a service that is listening at some local port, say 2222, and you have some service in the HPC node that listens to some local port, say 1111, then the you'll run: # Here targeting the iris cluster ( local machine ) $ ssh iris-cluster -R 1111 : $( hostname -i ) :2222","title":"Forwarding a remote port"},{"location":"connect/ssh/#tunnelling-for-others","text":"By using the -g parameter, you allow connections from other hosts than localhost to use your SSH tunnels. 
Be warned that anybody within your network may access the tunnelized host this way, which may be a security issue.","title":"Tunnelling for others"},{"location":"connect/ssh/#ssh-jumps","text":"Compute nodes are not directly accessible from the outside network. To login into a cluster node you will need to jump through a login node. Remember, you need a job running in a node before you can ssh into it. Assume that you have some job running on aion-0014 for instance. Then, connect to aion-0014 with: ssh -J ${ USER } @access-aion.uni.lu:8022 ${ USER } @aion-0014 The domain resolution in the login node will determine the IP of the aion-0014 . You can always use the IP address if the node directly if you know it.","title":"SSH jumps"},{"location":"connect/ssh/#passwordless-ssh-jumps","text":"The ssh agent is not configured in the login nodes for security reasons. As a result, compute nodes will request your password. To configure a passwordless jump to a compute node, you will need to install the same key in your ssh configuration of your local machine and the login node. To avoid exposing your keys at your personal machine, create and share a new key. Create a key in your local machine, ssh-keygen -a 127 -t ed25519 -f ~/.ssh/ulhpc_id_ed25519 and then copy both the private and public keys in your HPC account, scp ~/.ssh/ulhpc_id_ed25519* aion-cluster:~/.ssh/ where the command assumes that you have setup your SSH configuration file . Finally, add the key to the list of authorized keys: ssh-copy-id -i ~/.ssh/ulhpc_id_ed25519 aion-cluster Then you can connect without a password to any compute node at which you have a job running with the command: ssh -i ~/.ssh/ulhpc_id_ed25519 -J ${ USER } @access-aion.uni.lu:8022 ${ USER } @ In the option you can use the node IP address or the node name.","title":"Passwordless SSH jumps"},{"location":"connect/ssh/#port-forwarding-over-ssh-jumps","text":"You can combine the jump command with other options, such as port forwarding , for instance to access from you local machine a web server running in a compute node. Assume for instance that you have a server running in iris-014 and listens at the IP 127.0.0.1 and port 2222 , and that you would like to forward the remote port 2222 to the 1111 port of you local machine. The, call the port forwarding command with a jump though the login node: ssh -J iris-cluster -L 1111 :127.0.0.1:2222 @iris-014 This command can be combined with passwordless access to the cluster node.","title":"Port forwarding over SSH jumps"},{"location":"connect/ssh/#extras-tools-around-ssh","text":"Assh - Advanced SSH config is a transparent wrapper that make ~/.ssh/config easier to manage support for templates , aliases , defaults , inheritance etc. gateways : transparent ssh connection chaining more flexible command-line. Ex : Connect to hosta using hostb as a gateway $ ssh hosta/hostb drastically simplify your SSH config Linux / Mac OS only ClusterShell : clush , nodeset (or cluset), light, unified, robust command execution framework well-suited to ease daily administrative tasks of Linux clusters. 
using tools like clush and nodeset efficient, parallel, scalable command execution engine in Python provides a unified node groups syntax and external group access see nodeset and the NodeSet class DSH - Distributed / Dancer's Shell sshuttle , \" where transparent proxy meets VPN meets ssh \"","title":"Extras Tools around SSH"},{"location":"connect/troubleshooting/","text":"There are several possibilities and usually the error message can give you some hints. Your account has expired \u00b6 Please open a ticket on ServiceNow (HPC \u2192 User access & accounts \u2192 Report issue with cluster access) or send us an email to hpc-team@uni.lu with the current end date of your contract and we will extend your account accordingly. \"Access Denied\" or \"Permission denied (publickey)\" \u00b6 Basically, you are NOT able to connect to the access servers until your SSH public key is configured. There can be several reasons that explain the denied connection message: Make sure you are using the proper ULHPC user name (and not your local username or University/Eduroam login). Check your mail entitled \" [HPC@Uni.lu] Welcome - Account information \" to get your ULHPC login Log into IPA and double check your SSH public key settings. Ensure you have run your SSH agent If you have a new computer or for some other reason you have generated a new ssh key , please update your ssh keys on the IPA user portal. See IPA for more details You are using (deprecated) DSA/RSA keys . As per the OpenSSH website : \"OpenSSH 7.0 and greater similarly disable the ssh-dss (DSA) public key algorithm. It too is weak and we recommend against its use\". Solution: generate a new RSA keypair (4096 bit or more) and re-upload it on the IPA web portal (use the URL communicated to you by the UL HPC team in your \u201cwelcome\u201d mail). For more information on keys, see this website . Your public key is corrupted, please verify and re-upload it on the IPA web portal. We have taken the cluster down for maintenance and we forgot to activate the banner message mentioning this. Please check the calendar, the latest Twitter messages (box on the right of this page) and the messages sent on the hpc-users mailing list. If the above steps did not solve your issue, please open a ticket on ServiceNow (HPC \u2192 User access & accounts \u2192 Report issue with cluster access) or send us an email to hpc-team@uni.lu . Host identification changed \u00b6 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that a host key has just been changed. ... Ensure that your ~/.ssh/known_hosts file contains the correct entries for the ULHPC clusters and confirm the fingerprints using the posted fingerprints Open ~/.ssh/known_hosts Remove any lines referring to Iris and Aion and save the file Paste the specified host key entries (for all clusters) OR retry connecting to the host and accept the new host key after verifying that you have the correct \"fingerprint\" from the reference list . Be careful with permission changes to your $HOME \u00b6 If you change your home directory to be writeable by the group, ssh will not let you connect anymore. It requires drwxr-xr-x or 755 (or less) on your $HOME and ~/.ssh , and -rw-r--r-- or 644 (or less) on ~/.ssh/authorized_keys .
File and folder permissions can be verified at any time using stat $path , e.g.: $> stat $HOME $> stat $HOME/.ssh $> stat $HOME/.ssh/authorized_keys Check out the description of the notation of file permissions in both symbolic and numeric mode. On your local machine, you also need to to have read/write permissions to ~/.ssh/config for your user only. This can be ensured with the following command: chmod 600 ~/.ssh/config Open a ticket \u00b6 If you cannot solve your problem, do not hesitate to open a ticket on the Service Now portal .","title":"Troubleshooting"},{"location":"connect/troubleshooting/#your-account-has-expired","text":"Please open a ticket on ServiceNow (HPC \u2192 User access & accounts \u2192 Report issue with cluster access) or send us an email to hpc-team@uni.lu with the current end date of your contract and we will extend your account accordingly.","title":"Your account has expired"},{"location":"connect/troubleshooting/#access-denied-or-permission-denied-publickey","text":"Basically, you are NOT able to connect to the access servers until your SSH public key is configured. There can be several reason that explain the denied connection message: Make sure you are using the proper ULHPC user name (and not your local username or University/Eduroam login). Check your mail entitled \" [HPC@Uni.lu] Welcome - Account information \" to get your ULHPC login Log into IPA and double check your SSH public key settings. Ensure you have run your SSH agent If you have a new computer or for some other reason you have generated new ssh key , please update your ssh keys on the IPA user portal. See IPA for more details You are using (deprecated) DSA/RSA keys . As per the OpenSSH website : \"OpenSSH 7.0 and greater similarly disable the ssh-dss (DSA) public key algorithm. It too is weak and we recommend against its use\". Solution: generate a new RSA keypair (3092 bit or more) and re-upload it on the IPA web portal (use the URL communicated to you by the UL HPC team in your \u201cwelcome\u201d mail). For more information on keys, see this website . Your public key is corrupted, please verify and re-upload it on the IPA web portal. We have taken the cluster down for maintenance and we forgot to activate the banner message mentioning this. Please check the calendar, the latest Twitter messages (box on the right of this page) and the messages sent on the hpc-users mailing list. If the above steps did not permit to solve your issue, please open a ticket on ServiceNow (HPC \u2192 User access & accounts \u2192 Report issue with cluster access) or send us an email to hpc-team@uni.lu .","title":"\"Access Denied\" or \"Permission denied (publickey)\""},{"location":"connect/troubleshooting/#host-identification-changed","text":"@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that a host key has just been changed. ... 
Ensure that your ~/.ssh/known_hosts file contains the correct entries for the ULHPC clusters and confirm the fingerprints using the posted fingerprints Open ~/.ssh/known_hosts Remove any lines referring Iris and Aion and save the file Paste the specified host key entries (for all clusters) OR retry connecting to the host and accept the new host key after verify that you have the correct \"fingerprint\" from the reference list .","title":"Host identification changed"},{"location":"connect/troubleshooting/#be-careful-with-permission-changes-to-your-home","text":"If you change your home directory to be writeable by the group, ssh will not let you connect anymore. It requires drwxr-xr-x or 755 (or less) on your $HOME and ~/.ssh , and -rw-r--r-- or 644 (or less) on ~/.ssh/authorized_keys . File and folder permissions can be verified at any time using stat $path , e.g.: $> stat $HOME $> stat $HOME/.ssh $> stat $HOME/.ssh/authorized_keys Check out the description of the notation of file permissions in both symbolic and numeric mode. On your local machine, you also need to to have read/write permissions to ~/.ssh/config for your user only. This can be ensured with the following command: chmod 600 ~/.ssh/config","title":"Be careful with permission changes to your $HOME"},{"location":"connect/troubleshooting/#open-a-ticket","text":"If you cannot solve your problem, do not hesitate to open a ticket on the Service Now portal .","title":"Open a ticket"},{"location":"connect/windows/","text":"In this page, we cover two different SSH client software: MobaXterm and Putty. Choose your preferred tool. MobaXterm \u00b6 Installation notes \u00b6 The following steps will help you to configure MobaXterm to access the UL HPC clusters. You can also check out the MobaXterm demo which shows an overview of its features. First, download and install MobaXterm. Open the application Start > Program Files > MobaXterm . Change the default home directory for a persistent home directory instead of the default Temp directory. Go onto Settings > Configuration > General > Persistent home directory . Choose a location for your home directory. Your local SSH configuration is located in the HOME/.ssh/ directory and consists of: HOME/.ssh/id_rsa.pub : your SSH public key. This one is the only one SAFE to distribute. HOME/.ssh/id_rsa : the associated private key. NEVER EVER TRANSMIT THIS FILE (eventually) the configuration of the SSH client HOME/.ssh/config HOME/.ssh/known_hosts : Contains a list of host keys for all hosts you have logged into that are not already in the system-wide list of known host keys. This permits to detect man-in-the-middle attacks. SSH Key Management \u00b6 Choose the method you prefer: either the graphical interface MobaKeyGen or command line generation of the ssh key. With MobaKeyGen tool \u00b6 Go onto Tools > Network > MobaKeyGen (SSH key generator) . Choose RSA as the type of key to generate and change \"Number of bits in a generated key\" to 4096. Click on the Generate button. Move your mouse to generate some randomness. Warning To ensure the security of the platform and your data stored on it, you must protect your SSH keys with a passphrase! Additionally, your private key and passphrase should never be transmitted to anybody. Select a strong passphrase in the Key passphrase field for your key. Save the public and private keys as respectively id_rsa.pub and id_rsa.ppk . 
Please keep a copy of the public key, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail). With local terminal \u00b6 Click on Start local terminal . To generate an SSH keys, just use the ssh-keygen command, typically as follows: $> ssh-keygen -t rsa -b 4096 Generating public/private rsa key pair. Enter file in which to save the key (/home/user/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: fe:e8:26:df:38:49:3a:99:d7:85:4e:c3:85:c8:24:5b username@yourworkstation The key's randomart image is: +---[RSA 4096]----+ | | | . E | | * . . | | . o . . | | S. o | | .. = . | | =.= o | | * ==o | | B=.o | +-----------------+ Warning To ensure the security of the platform and your data stored on it, you must protect your SSH keys with a passphrase! After the execution of ssh-keygen command, the keys are generated and stored in the following files: SSH RSA Private key: HOME/.ssh/id_rsa . Again, NEVER EVER TRANSMIT THIS FILE SSH RSA Public key: HOME/.ssh/id_rsa.pub . This file is the ONLY one SAFE to distribute Configuration \u00b6 This part of the documentation comes from MobaXterm documentation page MobaXterm allows you to launch remote sessions. You just have to click on the \"Sessions\" button to start a new session. Select SSH session on the second screen. Enter the following parameters: Remote host: access-iris.uni.lu or access-aion.uni.lu Check the Specify username box Username: yourlogin as was sent to you in the Welcome e-mail once your HPC account was created Port: 8022 Go in Advanced SSH settings and check the Use private key box. Select your previously generated key id_rsa.ppk . Click on Connect . The following text appears. ================================================================================== Welcome to access2.iris-cluster.uni.lux ================================================================================== _ ____ / \\ ___ ___ ___ ___ ___|___ \\ / _ \\ / __/ __/ _ \\/ __/ __| __) | / ___ \\ (_| (_| __/\\__ \\__ \\/ __/ /_/ \\_\\___\\___\\___||___/___/_____| _____ _ ____ _ _ __ / /_ _|_ __(_)___ / ___| |_ _ ___| |_ ___ _ _\\ \\ | | | || '__| / __| | | | | | | / __| __/ _ \\ '__| | | | | || | | \\__ \\ | |___| | |_| \\__ \\ || __/ | | | | ||___|_| |_|___/ \\____|_|\\__,_|___/\\__\\___|_| | | \\_\\ /_/ ================================================================================== === Computing Nodes ========================================= #RAM/n === #Cores == iris-[001-108] 108 Dell C6320 (2 Xeon E5-2680v4@2.4GHz [14c/120W]) 128GB 3024 iris-[109-168] 60 Dell C6420 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 128GB 1680 iris-[169-186] 18 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 504 +72 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 16GB +368640 iris-[187-190] 4 Dell R840 (4 Xeon Platin.8180M@2.5GHz [28c/205W]) 3TB 448 iris-[191-196] 6 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 168 +24 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 32GB +122880 ================================================================================== *** TOTAL: 196 nodes, 5824 cores + 491520 CUDA cores + 61440 Tensor cores *** Fast interconnect using InfiniBand EDR 100 Gb/s technology Shared Storage (raw capacity): 2180 TB (GPFS) + 1300 TB (Lustre) = 3480 TB Support (in this order!) 
Platform notifications - User DOC ........ https://hpc.uni.lu/docs - Twitter: @ULHPC - FAQ ............. https://hpc.uni.lu/faq - Mailing-list .... hpc-users@uni.lu - Bug reports .NEW. https://hpc.uni.lu/support (Service Now) - Admins .......... hpc-team@uni.lu (OPEN TICKETS) ================================================================================== /!\\ NEVER COMPILE OR RUN YOUR PROGRAMS FROM THIS FRONTEND ! First reserve your nodes (using srun/sbatch(1)) Linux access2.iris-cluster.uni.lux 3.10.0-957.21.3.el7.x86_64 x86_64 15:51:56 up 6 days, 2:32, 39 users, load average: 0.59, 0.68, 0.54 [yourlogin@access2 ~]$ Putty \u00b6 Installation notes \u00b6 You need to install Putty and the associated tools, more precisely: PuTTY , the free SSH client Pageant , an SSH authentication agent for PuTTY tools PuTTYgen , an RSA key generation utility PSCP , an SCP (file transfer) client, i.e. command-line secure file copy WinSCP , SCP/SFTP (file transfer) client with easy-to-use graphical interface The simplest method is probably to download and run the latest Putty installer (does not include WinSCP). The different steps involved in the installation process are illustrated below ( REMEMBER to tick the option \"Associate .PPK files (PuTTY Private Key) with Pageant and PuTTYGen\" ): Now you should have all the Putty programs available in Start / All Programs / Putty . SSH Key Management \u00b6 Here you can use the PuTTYgen utility, an RSA key generation utility. The main steps for the generation of the keys are illustrated below: Configuration \u00b6 In order to be able to login to the clusters, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail). The port on which the SSH servers are listening is not the default one ( i.e. 22) but 8022 . Consequently, if you want to connect to the Iris cluster, open Putty and enter the following settings: In Category:Session : Host Name: access-iris.uni.lu or access-aion.uni.lu Port: 8022 Connection Type: SSH (leave as default) In Category:Connection:Data : Auto-login username: yourlogin In Category:SSH:Auth : Upload your private key: Options controlling SSH authentication Click on Open button. If this is the first time connecting to the server from this computer a Putty Security Alert will appear. Accept the connection by clicking Yes . Enter your login (username of your HPC account). You are now logged into Iris access server with SSH. Alternatively, you may want to save the configuration of this connection. Go onto the Session category. Enter the settings you want to save. Enter a name in the Saved session field (for example Iris for access to Iris cluster). Click on the Save button. Next time you want to connect to the cluster, click on Load button and Open to open a new connexion. 
Now you'll be able to obtain the welcome banner: ================================================================================== Welcome to access2.iris-cluster.uni.lux ================================================================================== _ ____ / \\ ___ ___ ___ ___ ___|___ \\ / _ \\ / __/ __/ _ \\/ __/ __| __) | / ___ \\ (_| (_| __/\\__ \\__ \\/ __/ /_/ \\_\\___\\___\\___||___/___/_____| _____ _ ____ _ _ __ / /_ _|_ __(_)___ / ___| |_ _ ___| |_ ___ _ _\\ \\ | | | || '__| / __| | | | | | | / __| __/ _ \\ '__| | | | | || | | \\__ \\ | |___| | |_| \\__ \\ || __/ | | | | ||___|_| |_|___/ \\____|_|\\__,_|___/\\__\\___|_| | | \\_\\ /_/ ================================================================================== === Computing Nodes ========================================= #RAM/n === #Cores == iris-[001-108] 108 Dell C6320 (2 Xeon E5-2680v4@2.4GHz [14c/120W]) 128GB 3024 iris-[109-168] 60 Dell C6420 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 128GB 1680 iris-[169-186] 18 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 504 +72 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 16GB +368640 iris-[187-190] 4 Dell R840 (4 Xeon Platin.8180M@2.5GHz [28c/205W]) 3TB 448 iris-[191-196] 6 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 168 +24 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 32GB +122880 ================================================================================== *** TOTAL: 196 nodes, 5824 cores + 491520 CUDA cores + 61440 Tensor cores *** Fast interconnect using InfiniBand EDR 100 Gb/s technology Shared Storage (raw capacity): 2180 TB (GPFS) + 1300 TB (Lustre) = 3480 TB Support (in this order!) Platform notifications - User DOC ........ https://hpc.uni.lu/docs - Twitter: @ULHPC - FAQ ............. https://hpc.uni.lu/faq - Mailing-list .... hpc-users@uni.lu - Bug reports .NEW. https://hpc.uni.lu/support (Service Now) - Admins .......... hpc-team@uni.lu (OPEN TICKETS) ================================================================================== /!\\ NEVER COMPILE OR RUN YOUR PROGRAMS FROM THIS FRONTEND ! First reserve your nodes (using srun/sbatch(1)) Activate the SSH agent \u00b6 To be able to use your SSH key in a public-key authentication scheme, it must be loaded by an SSH agent . You should run Pageant . To load your SSH key in Pageant, just right-click on the pageant icon in the system tray, click on the Add key menu item and select the private key file you saved while running puttygen.exe and click on the Open button: a new dialog will pop up and ask for your passphrase. Once your passphrase is entered, your key will be loaded in pageant, enabling you to connect with Putty. Open Putty.exe (connection type: SSH ) In _Category:Session_: Host Name: access-iris.uni.lu or access-aion.uni.lu Port: 8022 Saved session: Iris In Category:Connection:Data : Auto-login username: yourlogin Go back to Category:Session and click on Save Click on Open SSH Resources \u00b6 OpenSSH/ Cygwin : OpenSSH is available with Cygwin. You may then find the same features in your SSH client even if you run Windows. Furthermore, Cygwin also embeds many other GNU Un*x like tools, and even a FREE X server for windows. Putty : Free windowish SSH client ssh.com Free for non commercial use windows client","title":"Windows"},{"location":"connect/windows/#mobaxterm","text":"","title":"MobaXterm"},{"location":"connect/windows/#installation-notes","text":"The following steps will help you to configure MobaXterm to access the UL HPC clusters. 
You can also check out the MobaXterm demo which shows an overview of its features. First, download and install MobaXterm. Open the application Start > Program Files > MobaXterm . Change the default home directory for a persistent home directory instead of the default Temp directory. Go onto Settings > Configuration > General > Persistent home directory . Choose a location for your home directory. Your local SSH configuration is located in the HOME/.ssh/ directory and consists of: HOME/.ssh/id_rsa.pub : your SSH public key. This one is the only one SAFE to distribute. HOME/.ssh/id_rsa : the associated private key. NEVER EVER TRANSMIT THIS FILE (eventually) the configuration of the SSH client HOME/.ssh/config HOME/.ssh/known_hosts : Contains a list of host keys for all hosts you have logged into that are not already in the system-wide list of known host keys. This permits to detect man-in-the-middle attacks.","title":"Installation notes"},{"location":"connect/windows/#ssh-key-management","text":"Choose the method you prefer: either the graphical interface MobaKeyGen or command line generation of the ssh key.","title":"SSH Key Management"},{"location":"connect/windows/#with-mobakeygen-tool","text":"Go onto Tools > Network > MobaKeyGen (SSH key generator) . Choose RSA as the type of key to generate and change \"Number of bits in a generated key\" to 4096. Click on the Generate button. Move your mouse to generate some randomness. Warning To ensure the security of the platform and your data stored on it, you must protect your SSH keys with a passphrase! Additionally, your private key and passphrase should never be transmitted to anybody. Select a strong passphrase in the Key passphrase field for your key. Save the public and private keys as respectively id_rsa.pub and id_rsa.ppk . Please keep a copy of the public key, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail).","title":"With MobaKeyGen tool"},{"location":"connect/windows/#with-local-terminal","text":"Click on Start local terminal . To generate an SSH keys, just use the ssh-keygen command, typically as follows: $> ssh-keygen -t rsa -b 4096 Generating public/private rsa key pair. Enter file in which to save the key (/home/user/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: fe:e8:26:df:38:49:3a:99:d7:85:4e:c3:85:c8:24:5b username@yourworkstation The key's randomart image is: +---[RSA 4096]----+ | | | . E | | * . . | | . o . . | | S. o | | .. = . | | =.= o | | * ==o | | B=.o | +-----------------+ Warning To ensure the security of the platform and your data stored on it, you must protect your SSH keys with a passphrase! After the execution of ssh-keygen command, the keys are generated and stored in the following files: SSH RSA Private key: HOME/.ssh/id_rsa . Again, NEVER EVER TRANSMIT THIS FILE SSH RSA Public key: HOME/.ssh/id_rsa.pub . This file is the ONLY one SAFE to distribute","title":"With local terminal"},{"location":"connect/windows/#configuration","text":"This part of the documentation comes from MobaXterm documentation page MobaXterm allows you to launch remote sessions. You just have to click on the \"Sessions\" button to start a new session. Select SSH session on the second screen. 
Enter the following parameters: Remote host: access-iris.uni.lu or access-aion.uni.lu Check the Specify username box Username: yourlogin as was sent to you in the Welcome e-mail once your HPC account was created Port: 8022 Go in Advanced SSH settings and check the Use private key box. Select your previously generated key id_rsa.ppk . Click on Connect . The following text appears. ================================================================================== Welcome to access2.iris-cluster.uni.lux ================================================================================== _ ____ / \\ ___ ___ ___ ___ ___|___ \\ / _ \\ / __/ __/ _ \\/ __/ __| __) | / ___ \\ (_| (_| __/\\__ \\__ \\/ __/ /_/ \\_\\___\\___\\___||___/___/_____| _____ _ ____ _ _ __ / /_ _|_ __(_)___ / ___| |_ _ ___| |_ ___ _ _\\ \\ | | | || '__| / __| | | | | | | / __| __/ _ \\ '__| | | | | || | | \\__ \\ | |___| | |_| \\__ \\ || __/ | | | | ||___|_| |_|___/ \\____|_|\\__,_|___/\\__\\___|_| | | \\_\\ /_/ ================================================================================== === Computing Nodes ========================================= #RAM/n === #Cores == iris-[001-108] 108 Dell C6320 (2 Xeon E5-2680v4@2.4GHz [14c/120W]) 128GB 3024 iris-[109-168] 60 Dell C6420 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 128GB 1680 iris-[169-186] 18 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 504 +72 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 16GB +368640 iris-[187-190] 4 Dell R840 (4 Xeon Platin.8180M@2.5GHz [28c/205W]) 3TB 448 iris-[191-196] 6 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 168 +24 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 32GB +122880 ================================================================================== *** TOTAL: 196 nodes, 5824 cores + 491520 CUDA cores + 61440 Tensor cores *** Fast interconnect using InfiniBand EDR 100 Gb/s technology Shared Storage (raw capacity): 2180 TB (GPFS) + 1300 TB (Lustre) = 3480 TB Support (in this order!) Platform notifications - User DOC ........ https://hpc.uni.lu/docs - Twitter: @ULHPC - FAQ ............. https://hpc.uni.lu/faq - Mailing-list .... hpc-users@uni.lu - Bug reports .NEW. https://hpc.uni.lu/support (Service Now) - Admins .......... hpc-team@uni.lu (OPEN TICKETS) ================================================================================== /!\\ NEVER COMPILE OR RUN YOUR PROGRAMS FROM THIS FRONTEND ! First reserve your nodes (using srun/sbatch(1)) Linux access2.iris-cluster.uni.lux 3.10.0-957.21.3.el7.x86_64 x86_64 15:51:56 up 6 days, 2:32, 39 users, load average: 0.59, 0.68, 0.54 [yourlogin@access2 ~]$","title":"Configuration"},{"location":"connect/windows/#putty","text":"","title":"Putty"},{"location":"connect/windows/#installation-notes_1","text":"You need to install Putty and the associated tools, more precisely: PuTTY , the free SSH client Pageant , an SSH authentication agent for PuTTY tools PuTTYgen , an RSA key generation utility PSCP , an SCP (file transfer) client, i.e. command-line secure file copy WinSCP , SCP/SFTP (file transfer) client with easy-to-use graphical interface The simplest method is probably to download and run the latest Putty installer (does not include WinSCP). 
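As an aside, once installed, PSCP can be used from a Windows command prompt to copy files to the clusters; a minimal sketch (the filenames are placeholders, remember the non-default port 8022 and adapt yourlogin):

```
pscp -P 8022 -i id_rsa.ppk myfile.txt yourlogin@access-iris.uni.lu:
```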
The different steps involved in the installation process are illustrated below ( REMEMBER to tick the option \"Associate .PPK files (PuTTY Private Key) with Pageant and PuTTYGen\" ): Now you should have all the Putty programs available in Start / All Programs / Putty .","title":"Installation notes"},{"location":"connect/windows/#ssh-key-management_1","text":"Here you can use the PuTTYgen utility, an RSA key generation utility. The main steps for the generation of the keys are illustrated below:","title":"SSH Key Management"},{"location":"connect/windows/#configuration_1","text":"In order to be able to login to the clusters, you will have to add this public key into your account, using the IPA user portal (use the URL communicated to you by the UL HPC team in your \"welcome\" mail). The port on which the SSH servers are listening is not the default one ( i.e. 22) but 8022 . Consequently, if you want to connect to the Iris cluster, open Putty and enter the following settings: In Category:Session : Host Name: access-iris.uni.lu or access-aion.uni.lu Port: 8022 Connection Type: SSH (leave as default) In Category:Connection:Data : Auto-login username: yourlogin In Category:SSH:Auth : Upload your private key: Options controlling SSH authentication Click on Open button. If this is the first time connecting to the server from this computer a Putty Security Alert will appear. Accept the connection by clicking Yes . Enter your login (username of your HPC account). You are now logged into Iris access server with SSH. Alternatively, you may want to save the configuration of this connection. Go onto the Session category. Enter the settings you want to save. Enter a name in the Saved session field (for example Iris for access to Iris cluster). Click on the Save button. Next time you want to connect to the cluster, click on Load button and Open to open a new connexion. Now you'll be able to obtain the welcome banner: ================================================================================== Welcome to access2.iris-cluster.uni.lux ================================================================================== _ ____ / \\ ___ ___ ___ ___ ___|___ \\ / _ \\ / __/ __/ _ \\/ __/ __| __) | / ___ \\ (_| (_| __/\\__ \\__ \\/ __/ /_/ \\_\\___\\___\\___||___/___/_____| _____ _ ____ _ _ __ / /_ _|_ __(_)___ / ___| |_ _ ___| |_ ___ _ _\\ \\ | | | || '__| / __| | | | | | | / __| __/ _ \\ '__| | | | | || | | \\__ \\ | |___| | |_| \\__ \\ || __/ | | | | ||___|_| |_|___/ \\____|_|\\__,_|___/\\__\\___|_| | | \\_\\ /_/ ================================================================================== === Computing Nodes ========================================= #RAM/n === #Cores == iris-[001-108] 108 Dell C6320 (2 Xeon E5-2680v4@2.4GHz [14c/120W]) 128GB 3024 iris-[109-168] 60 Dell C6420 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 128GB 1680 iris-[169-186] 18 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 504 +72 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 16GB +368640 iris-[187-190] 4 Dell R840 (4 Xeon Platin.8180M@2.5GHz [28c/205W]) 3TB 448 iris-[191-196] 6 Dell C4140 (2 Xeon Gold 6132@2.6GHz [14c/140W]) 768GB 168 +24 GPU (4 Tesla V100 [5120c CUDA + 640c Tensor]) 32GB +122880 ================================================================================== *** TOTAL: 196 nodes, 5824 cores + 491520 CUDA cores + 61440 Tensor cores *** Fast interconnect using InfiniBand EDR 100 Gb/s technology Shared Storage (raw capacity): 2180 TB (GPFS) + 1300 TB (Lustre) = 3480 TB Support (in this order!) 
Platform notifications - User DOC ........ https://hpc.uni.lu/docs - Twitter: @ULHPC - FAQ ............. https://hpc.uni.lu/faq - Mailing-list .... hpc-users@uni.lu - Bug reports .NEW. https://hpc.uni.lu/support (Service Now) - Admins .......... hpc-team@uni.lu (OPEN TICKETS) ================================================================================== /!\\ NEVER COMPILE OR RUN YOUR PROGRAMS FROM THIS FRONTEND ! First reserve your nodes (using srun/sbatch(1))","title":"Configuration"},{"location":"connect/windows/#activate-the-ssh-agent","text":"To be able to use your SSH key in a public-key authentication scheme, it must be loaded by an SSH agent . You should run Pageant . To load your SSH key in Pageant, just right-click on the pageant icon in the system tray, click on the Add key menu item and select the private key file you saved while running puttygen.exe and click on the Open button: a new dialog will pop up and ask for your passphrase. Once your passphrase is entered, your key will be loaded in pageant, enabling you to connect with Putty. Open Putty.exe (connection type: SSH ) In Category:Session : Host Name: access-iris.uni.lu or access-aion.uni.lu Port: 8022 Saved session: Iris In Category:Connection:Data : Auto-login username: yourlogin Go back to Category:Session and click on Save Click on Open","title":"Activate the SSH agent"},{"location":"connect/windows/#ssh-resources","text":"OpenSSH/ Cygwin : OpenSSH is available with Cygwin. You may then find the same features in your SSH client even if you run Windows. Furthermore, Cygwin also embeds many other GNU Un*x like tools, and even a FREE X server for Windows. Putty : Free Windows SSH client ssh.com : Free for non-commercial use Windows client","title":"SSH Resources"},{"location":"containers/","text":"Containers \u00b6 Many applications and libraries can also be used through container systems, with the updated Singularity tool providing many new features of which we can especially highlight support for Open Containers Initiative - OCI containers (including Docker OCI), and support for secure containers - building and running encrypted containers with RSA keys and passphrases. Singularity \u00b6 The ULHPC offers the possibility to run Singularity containers . Singularity is an open source container platform designed to be simple, fast, and secure. Singularity is optimized for EPC and HPC workloads, allowing untrusted users to run untrusted containers in a trusted way. Loading Singularity \u00b6 To use Singularity, you need to load the corresponding Lmod module. >$ module load tools/Singularity Warning Modules are not allowed on the access servers. To test Singularity interactively, remember to ask for an interactive job first. salloc -p interactive --pty bash Pulling container images \u00b6 Like Docker , Singularity provides a way to pull images from hubs such as DockerHub and Singularity Hub . >$ singularity pull docker://ubuntu:latest You should see the following output: Output INFO: Converting OCI blobs to SIF format INFO: Starting build... Getting image source signatures Copying blob d72e567cc804 done Copying blob 0f3630e5ff08 done Copying blob b6a83d81d1f4 done Copying config bbea2a0436 done Writing manifest to image destination Storing signatures ... INFO: Creating SIF file...
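If you prefer a custom image filename over the default ubuntu_latest.sif, the pull target can be given explicitly; a sketch (the tag is only an example):

```bash
# Pull into an explicitly named SIF image instead of the default ubuntu_latest.sif
singularity pull ubuntu_20.04.sif docker://ubuntu:20.04
```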
You may now test the container by executing some inner commands: >$ singularity exec ubuntu_latest.sif cat /etc/os-release Output NAME=\"Ubuntu\" VERSION=\"20.04.1 LTS (Focal Fossa)\" ID=ubuntu ID_LIKE=debian PRETTY_NAME=\"Ubuntu 20.04.1 LTS\" VERSION_ID=\"20.04\" HOME_URL=\"https://www.ubuntu.com/\" SUPPORT_URL=\"https://help.ubuntu.com/\" BUG_REPORT_URL=\"https://bugs.launchpad.net/ubuntu/\" PRIVACY_POLICY_URL=\"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\" VERSION_CODENAME=focal UBUNTU_CODENAME=focal Building container images \u00b6 Building container images requires root privileges. Therefore, users have to build images on their local machine before transferring them to the platform. Please refer to the Data transfer section for this purpose. Note Singularity 3 introduces the ability to build your containers in the cloud, so you can easily and securely create containers for your applications without special privileges or setup on your local system. The Remote Builder can securely build a container for you from a definition file entered here or via the Singularity CLI (see https://cloud.sylabs.io/builder for more details). GPU-enabled Singularity containers \u00b6 This section relies on the excellent documentation from CSCS . In the following example, a container with CUDA features is built, transferred and tested on the ULHPC platform. This example will pull a CUDA container from DockerHub and set up the CUDA examples . For this purpose, a singularity definition file, i.e., cuda_samples.def , needs to be created with the following content: Bootstrap: docker From: nvidia/cuda:10.1-devel %post apt-get update apt-get install -y git git clone https://github.com/NVIDIA/cuda-samples.git /usr/local/cuda_samples cd /usr/local/cuda_samples git fetch origin --tags git checkout 10.1.1 make %runscript /usr/local/cuda_samples/Samples/deviceQuery/deviceQuery On a local machine having singularity installed, we can build the container image, i.e., cuda_samples.sif , from the definition file using the following singularity command: sudo singularity build cuda_samples.sif cuda_samples.def Warning You should have root privileges on this machine. Without this condition, you will not be able to build the image from the definition file. Once the container is built and transferred to your dedicated storage on the ULHPC platform, the container can be executed with the following command: # Inside an interactive job on a gpu-enabled node singularity run --nv cuda_samples.sif Warning In order to run a CUDA-enabled container, the --nv option has to be passed to singularity run. With this option, singularity is going to set up the container environment to use the NVIDIA GPU and the basic CUDA libraries.
Output CUDA Device Query (Runtime API) version (CUDART static linking) Detected 1 CUDA Capable device(s) Device 0: \"Tesla V100-SXM2-16GB\" CUDA Driver Version / Runtime Version 10.2 / 10.1 CUDA Capability Major/Minor version number: 7.0 Total amount of global memory: 16160 MBytes (16945512448 bytes) (80) Multiprocessors, ( 64) CUDA Cores/MP: 5120 CUDA Cores GPU Max Clock rate: 1530 MHz (1.53 GHz) Memory Clock rate: 877 Mhz Memory Bus Width: 4096-bit L2 Cache Size: 6291456 bytes Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384) Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers Total amount of constant memory: 65536 bytes Total amount of shared memory per block: 49152 bytes Total number of registers available per block: 65536 Warp size: 32 Maximum number of threads per multiprocessor: 2048 Maximum number of threads per block: 1024 Max dimension size of a thread block (x,y,z): (1024, 1024, 64) Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535) Maximum memory pitch: 2147483647 bytes Texture alignment: 512 bytes Concurrent copy and kernel execution: Yes with 5 copy engine(s) Run time limit on kernels: No Integrated GPU sharing Host Memory: No Support host page-locked memory mapping: Yes Alignment requirement for Surfaces: Yes Device has ECC support: Enabled Device supports Unified Addressing (UVA): Yes Device supports Compute Preemption: Yes Supports Cooperative Kernel Launch: Yes Supports MultiDevice Co-op Kernel Launch: Yes Device PCI Domain ID / Bus ID / location ID: 0 / 30 / 0 Compute Mode: < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) > deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.1, NumDevs = 1 Result = PASS MPI and Singularity containers \u00b6 This section relies on the very excellent documentation from CSCS . The following singularity definition file mpi_osu.def can be used to build a container with the osu benchmarks using mpi: bootstrap: docker from: debian:jessie %post # Install software apt-get update apt-get install -y file g++ gcc gfortran make gdb strace realpath wget curl --no-install-recommends # Install mpich curl -kO https://www.mpich.org/static/downloads/3.1.4/mpich-3.1.4.tar.gz tar -zxvf mpich-3.1.4.tar.gz cd mpich-3.1.4 ./configure --disable-fortran --enable-fast = all,O3 --prefix = /usr make -j $( nproc ) make install ldconfig # Build osu benchmarks wget -q http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.3.2.tar.gz tar xf osu-micro-benchmarks-5.3.2.tar.gz cd osu-micro-benchmarks-5.3.2 ./configure --prefix = /usr/local CC = $( which mpicc ) CFLAGS = -O3 make make install cd .. 
rm -rf osu-micro-benchmarks-5.3.2 rm osu-micro-benchmarks-5.3.2.tar.gz %runscript /usr/local/libexec/osu-micro-benchmarks/mpi/pt2pt/osu_bw sudo singularity build mpi_osu.sif mpi_osu.def Once the container image is ready, you can use it for example inside the following slurm launcher to start a best-effort job: #!/bin/bash -l #SBATCH -J ParallelJob #SBATCH -N 2 #SBATCH --ntasks-per-node=1 #SBATCH --time=05:00 #SBATCH -p batch #SBATCH --qos=qos-besteffort module load tools/Singularity srun -n $SLURM_NTASKS singularity run mpi_osu.sif The content of the output file: Output # OSU MPI Bandwidth Test v5.3.2 # Size Bandwidth (MB/s) 1 0.35 2 0.78 4 1.70 8 3.66 16 7.68 32 16.38 64 32.86 128 66.61 256 80.12 512 97.68 1024 151.57 2048 274.60 4096 408.71 8192 456.51 16384 565.84 32768 582.62 65536 587.17 131072 630.64 262144 656.45 524288 682.37 1048576 712.19 2097152 714.55","title":"About"},{"location":"containers/#containers","text":"Many applications and libraries can also be used through container systems, with the updated Singularity tool providing many new features of which we can especially highlight support for Open Containers Initiative - OCI containers (including Docker OCI), and support for secure containers - building and running encrypted containers with RSA keys and passphrases.","title":"Containers"},{"location":"containers/#singularity","text":"The ULHPC offers the possibilty to run Singularity containers . Singularity is an open source container platform designed to be simple, fast, and secure. Singularity is optimized for EPC and HPC workloads, allowing untrusted users to run untrusted containers in a trusted way.","title":"Singularity"},{"location":"containers/#loading-singularity","text":"To use Singularity, you need to load the corresponding Lmod module. >$ module load tools/Singularity Warning Modules are not allowed on the access servers. To test interactively Singularity, rememerber to ask for an interactive job first. salloc -p interactive --pty bash","title":"Loading Singularity"},{"location":"containers/#pulling-container-images","text":"Like Docker , Singularity provide a way to pull images from a Hubs such as DockerHub and Singuarity Hub . >$ singularity pull docker://ubuntu:latest You should see the following output: Output INFO: Converting OCI blobs to SIF format INFO: Starting build... Getting image source signatures Copying blob d72e567cc804 done Copying blob 0f3630e5ff08 done Copying blob b6a83d81d1f4 done Copying config bbea2a0436 done Writing manifest to image destination Storing signatures ... INFO: Creating SIF file... You may now test the container by executing some inner commands: >$ singularity exec ubuntu_latest.sif cat /etc/os-release Output NAME=\"Ubuntu\" VERSION=\"20.04.1 LTS (Focal Fossa)\" ID=ubuntu ID_LIKE=debian PRETTY_NAME=\"Ubuntu 20.04.1 LTS\" VERSION_ID=\"20.04\" HOME_URL=\" https://www.ubuntu.com/" ; SUPPORT_URL=\" https://help.ubuntu.com/" ; BUG_REPORT_URL=\" https://bugs.launchpad.net/ubuntu/" ; PRIVACY_POLICY_URL=\" https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" ; VERSION_CODENAME=focal UBUNTU_CODENAME=focal","title":"Pulling container images"},{"location":"containers/#building-container-images","text":"Building container images requires to have root privileges. Therefore, users have to build images on their local machine before transfering them to the platform. Please refer to the Data transfer section for this purpose. 
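For instance, a locally built image can be copied to the cluster with rsync over SSH; a minimal sketch (the destination path is only an example, adapt yourlogin and the cluster):

```bash
# Copy a locally built SIF image to your cluster storage (non-default SSH port 8022)
rsync -avP -e "ssh -p 8022" cuda_samples.sif yourlogin@access-iris.uni.lu:/home/users/yourlogin/
```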
Note Singularity 3 introduces the ability to build your containers in the cloud, so you can easily and securely create containers for your applications without speci al privileges or setup on your local system. The Remote Builder can securely build a container for you from a definition file entered here or via the Singularity CLI (see https://cloud.sylabs.io/builder for more details).","title":"Building container images"},{"location":"containers/#gpu-enabled-singularity-containers","text":"This section relies on the very excellent documentation from CSCS . In the following example, a container with CUDA features is build, transfered and tested on the ULHPC platform. This example will pull a CUDA container from DockrHub and setup CUDA examples . For this purpose, a singularity definition file, i.e., cuda_samples.def needs to be created with the following content: Bootstrap: docker From: nvidia/cuda:10.1-devel %post apt-get update apt-get install -y git git clone https://github.com/NVIDIA/cuda-samples.git /usr/local/cuda_samples cd /usr/local/cuda_samples git fetch origin --tags git checkout 10 .1.1 make %runscript /usr/local/cuda_samples/Samples/deviceQuery/deviceQuery On a local machine having singularity installed, we can build the container image, i.e., cuda_samples.sif using the definition file using the follwing singularity command: sudo singularity build cuda_samples.sif cuda_samples.def Warning You should have root privileges on this machine. Without this condition, you will not be able to built the definition file. Once the container is built and transfered to your dedicated storage on the ULHPC plaform, the container can be executed with the following command: # Inside an interactive job on a gpu-enabled node singularity run --nv cuda_samples.sif Warning In order to run a CUDA-enabled container, the --nv option has to be passed to singularity run. According to this option, singularity is going to setup the container environment to use the NVIDIA GPU and the basic CUDA libraries. 
Output CUDA Device Query (Runtime API) version (CUDART static linking) Detected 1 CUDA Capable device(s) Device 0: \"Tesla V100-SXM2-16GB\" CUDA Driver Version / Runtime Version 10.2 / 10.1 CUDA Capability Major/Minor version number: 7.0 Total amount of global memory: 16160 MBytes (16945512448 bytes) (80) Multiprocessors, ( 64) CUDA Cores/MP: 5120 CUDA Cores GPU Max Clock rate: 1530 MHz (1.53 GHz) Memory Clock rate: 877 Mhz Memory Bus Width: 4096-bit L2 Cache Size: 6291456 bytes Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384) Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers Total amount of constant memory: 65536 bytes Total amount of shared memory per block: 49152 bytes Total number of registers available per block: 65536 Warp size: 32 Maximum number of threads per multiprocessor: 2048 Maximum number of threads per block: 1024 Max dimension size of a thread block (x,y,z): (1024, 1024, 64) Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535) Maximum memory pitch: 2147483647 bytes Texture alignment: 512 bytes Concurrent copy and kernel execution: Yes with 5 copy engine(s) Run time limit on kernels: No Integrated GPU sharing Host Memory: No Support host page-locked memory mapping: Yes Alignment requirement for Surfaces: Yes Device has ECC support: Enabled Device supports Unified Addressing (UVA): Yes Device supports Compute Preemption: Yes Supports Cooperative Kernel Launch: Yes Supports MultiDevice Co-op Kernel Launch: Yes Device PCI Domain ID / Bus ID / location ID: 0 / 30 / 0 Compute Mode: < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) > deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.1, NumDevs = 1 Result = PASS","title":"GPU-enabled Singularity containers"},{"location":"containers/#mpi-and-singularity-containers","text":"This section relies on the very excellent documentation from CSCS . The following singularity definition file mpi_osu.def can be used to build a container with the osu benchmarks using mpi: bootstrap: docker from: debian:jessie %post # Install software apt-get update apt-get install -y file g++ gcc gfortran make gdb strace realpath wget curl --no-install-recommends # Install mpich curl -kO https://www.mpich.org/static/downloads/3.1.4/mpich-3.1.4.tar.gz tar -zxvf mpich-3.1.4.tar.gz cd mpich-3.1.4 ./configure --disable-fortran --enable-fast = all,O3 --prefix = /usr make -j $( nproc ) make install ldconfig # Build osu benchmarks wget -q http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.3.2.tar.gz tar xf osu-micro-benchmarks-5.3.2.tar.gz cd osu-micro-benchmarks-5.3.2 ./configure --prefix = /usr/local CC = $( which mpicc ) CFLAGS = -O3 make make install cd .. 
rm -rf osu-micro-benchmarks-5.3.2 rm osu-micro-benchmarks-5.3.2.tar.gz %runscript /usr/local/libexec/osu-micro-benchmarks/mpi/pt2pt/osu_bw sudo singularity build mpi_osu.sif mpi_osu.def Once the container image is ready, you can use it for example inside the following slurm launcher to start a best-effort job: #!/bin/bash -l #SBATCH -J ParallelJob #SBATCH -N 2 #SBATCH --ntasks-per-node=1 #SBATCH --time=05:00 #SBATCH -p batch #SBATCH --qos=qos-besteffort module load tools/Singularity srun -n $SLURM_NTASKS singularity run mpi_osu.sif The content of the output file: Output # OSU MPI Bandwidth Test v5.3.2 # Size Bandwidth (MB/s) 1 0.35 2 0.78 4 1.70 8 3.66 16 7.68 32 16.38 64 32.86 128 66.61 256 80.12 512 97.68 1024 151.57 2048 274.60 4096 408.71 8192 456.51 16384 565.84 32768 582.62 65536 587.17 131072 630.64 262144 656.45 524288 682.37 1048576 712.19 2097152 714.55","title":"MPI and Singularity containers"},{"location":"contributing/","text":"You are more than welcome to contribute to the development of this project. You are however expected to follow the model of Github Flow for your contributions. What is a [good] Git Workflow? A Git Workflow is a recipe or recommendation for how to use Git to accomplish work in a consistent and productive manner. Indeed, Git offers a lot of flexibility in how changes can be managed, yet there is no standardized process on how to interact with Git. The following questions are expected to be addressed by a successful workflow: Q1 : Does this workflow scale with team size? Q2 : Is it possible to prevent/limit mistakes and errors ? Q3 : Is it easy to undo mistakes and errors with this workflow? Q4 : Does this workflow permits to easily test new feature/functionnalities before production release ? Q5 : Does this workflow allow for Continuous Integration (even if not yet planned at the beginning) Q6 : Does this workflow permit to master the production release Q7 : Does this workflow impose any new unnecessary cognitive overhead to the team? Q8 : The workflow is easy to use/setup and maintain In particular, the default \" workflow \" centralizedgitl (where everybody just commit to the single master branch), while being the only one satisfying Q7, proved to be easily error-prone and can break production system relying on the underlying repository. For this reason, other more or less complex workflows have emerged -- all feature-branch-based , that supports teams and projects where production deployments are made regularly: Git-flow , the historical successful workflow featuring two main branches with an infinite lifetime ( production and {master | devel} ) all operations are facilitated by the git-flow CLI extension maintaining both branches can be bothersome - make up the only one permitting to really control production release Github Flow , a lightweight version with a single branch ( master ) pull-request based - requires interaction with Gitlab/Github web interface ( git request-pull might help) The ULHPC team enforces an hydrid workflow detailed below , HOWEVER you can safely contribute to this documentation by following the Github Flow explained now. Default Git workflow for contributions \u00b6 We expect contributors to follow the Github Flow concept. This flow is ideal for organizations that need simplicity, and roll out frequently. If you are already using Git, you are probably using a version of the Github flow. Every unit of work, whether it be a bugfix or feature, is done through a branch that is created from master. 
After the work has been completed in the branch, it is reviewed and tested before being merged into master and pushed out to production. In details: As preliminaries (to be done only once), Fork the ULHPC/ulhpc-docs repository under /ulhpc-docs A fork is a copy of a repository placed under your Github namespace. Forking a repository allows you to freely experiment with changes without affecting the original project. In the top-right corner of the ULHPC/ulhpc-docs repository, click \"Fork\" button. Under Settings, change the repository name from docs to ulhpc-docs Once done, you can clone your copy (forked) repository: select the SSH url under the \"Code\" button: # (Recommended) Place your repo in a clean (and self-explicit) directory layout # /!\\ ADAPT 'YOUR-USERNAME' with your Github username $> mkdir -p ~/git/github.com/YOUR-USERNAME $> cd ~/git/github.com/YOUR-USERNAME # /!\\ ADAPT 'YOUR-USERNAME' with your Github username git clone git@github.com:YOUR-USERNAME/ulhpc-docs.git $> cd ulhpc-docs $> make setup Configure your working forked copy to sync with the original ULHPC/ulhpc-docs repository through a dedicated upstream remote # Check current remote: only 'origin' should be listed $> git remote -v origin git@github.com:YOUR-USERNAME/ulhpc-docs.git ( fetch ) origin git@github.com:YOUR-USERNAME/ulhpc-docs.git ( push ) # Add upstream $> make setup-upstream # OR, manually: $> git remote add upstream https://github.com/ULHPC/ulhpc-docs.git # Check the new remote $> git remote -v origin git@github.com:YOUR-USERNAME/ulhpc-docs.git ( fetch ) origin git@github.com:YOUR-USERNAME/ulhpc-docs.git ( push ) upstream https://github.com/ULHPC/ulhpc-docs.git ( fetch ) upstream https://github.com/ULHPC/ulhpc-docs.git ( push ) At this level, you probably want to follow the setup instructions to configure your ulhpc-docs python virtualenv and deploy locally the documentation with make doc access the local documentation with your favorite browser by visiting the URL http://localhost:8000 Then, to bring your contributions: Pull the latest changes from the upstream remote using: make sync-upstream Create your own feature branch with appropriate name : # IF you have installed git-flow: {brew | apt | yum |...} install gitflow git-flow # /!\\ ADAPT with appropriate name: this will create and checkout to branch feature/ git-flow feature start # OR git checkout -b feature/ Commit your changes once satisfied with them git add [ ... ] git commit -s -m 'Added some feature' Push to the feature branch and publish it # IF you have installed git-flow # /!\\ ADAPT accordingly git-flow feature publish # OR git push -u origin feature/ Create a new Pull Request to submit your changes to the ULHPC team. Commit first! # check what would be put in the pull request git request-pull master ./ # Open Pull Request from web interface # Github: Open 'new pull request' # Base = feature/, compare = master Pull request will be reviewed, eventually with comments/suggestion for modifications -- see official doc you may need to apply new commits to resolve the comments -- remember to mention the pull request in the commit message with the prefix ' [PR#] ' (Ex: [PR#5] ) in your commit message cd /path/to/ulhpc-docs git checkout feature/ git pull # [...] git add [ ... ] # /!\\ ADAPT Pull Request ID accordingly git commit -s -m '[PR#] ...' After your pull request has been reviewed and merged , you can safely delete the feature branch. 
# Adapt accordingly git checkout feature/ # Eventually, if needed make sync-upstream git-flow feature finish # feature branch 'feature/' will be merged into 'devel' # # feature branch 'feature/' will be locally deleted # # you will checkout back to the 'master' branch git push origin --delete feature/ # /!\\ WARNING: Ensure you delete the CORRECT remote branch git push # sync master branch ULHPC Git Workflow \u00b6 Throughout all its projects, the ULHPC team has enforced a stricter workflow for Git repository summarized in the below figure: The main concepts inherited from both advanced workflows ( Git-flow and Github Flow ) are listed below: The central repository holds two main branches with an infinite lifetime: production : the production-ready branch, used for the deployed version of the documentation. devel | master | main ( master in this case): the main (master) branch where the latest developments intervene (name depends on repository purpose). This is the default branch you get when you clone the repository. You should always setup your local copy of the repository with make setup ensure also you have installed the gitflow extension ensure you are properly made the initial configuration of git -- see also sample .gitconfig In compliment to the Github Flow described above, several additional operations are facilitated by the root Makefile : Initial setup of the repository with make setup Release of a new version of this repository with make start_bump_{patch,minor,major} and make release this action is managed by the ULHPC team according to the semantic versioning scheme implemented within this this project.","title":"Overview"},{"location":"contributing/#default-git-workflow-for-contributions","text":"We expect contributors to follow the Github Flow concept. This flow is ideal for organizations that need simplicity, and roll out frequently. If you are already using Git, you are probably using a version of the Github flow. Every unit of work, whether it be a bugfix or feature, is done through a branch that is created from master. After the work has been completed in the branch, it is reviewed and tested before being merged into master and pushed out to production. In details: As preliminaries (to be done only once), Fork the ULHPC/ulhpc-docs repository under /ulhpc-docs A fork is a copy of a repository placed under your Github namespace. Forking a repository allows you to freely experiment with changes without affecting the original project. In the top-right corner of the ULHPC/ulhpc-docs repository, click \"Fork\" button. 
Under Settings, change the repository name from docs to ulhpc-docs Once done, you can clone your copy (forked) repository: select the SSH url under the \"Code\" button: # (Recommended) Place your repo in a clean (and self-explicit) directory layout # /!\\ ADAPT 'YOUR-USERNAME' with your Github username $> mkdir -p ~/git/github.com/YOUR-USERNAME $> cd ~/git/github.com/YOUR-USERNAME # /!\\ ADAPT 'YOUR-USERNAME' with your Github username git clone git@github.com:YOUR-USERNAME/ulhpc-docs.git $> cd ulhpc-docs $> make setup Configure your working forked copy to sync with the original ULHPC/ulhpc-docs repository through a dedicated upstream remote # Check current remote: only 'origin' should be listed $> git remote -v origin git@github.com:YOUR-USERNAME/ulhpc-docs.git ( fetch ) origin git@github.com:YOUR-USERNAME/ulhpc-docs.git ( push ) # Add upstream $> make setup-upstream # OR, manually: $> git remote add upstream https://github.com/ULHPC/ulhpc-docs.git # Check the new remote $> git remote -v origin git@github.com:YOUR-USERNAME/ulhpc-docs.git ( fetch ) origin git@github.com:YOUR-USERNAME/ulhpc-docs.git ( push ) upstream https://github.com/ULHPC/ulhpc-docs.git ( fetch ) upstream https://github.com/ULHPC/ulhpc-docs.git ( push ) At this level, you probably want to follow the setup instructions to configure your ulhpc-docs python virtualenv and deploy locally the documentation with make doc access the local documentation with your favorite browser by visiting the URL http://localhost:8000 Then, to bring your contributions: Pull the latest changes from the upstream remote using: make sync-upstream Create your own feature branch with appropriate name : # IF you have installed git-flow: {brew | apt | yum |...} install gitflow git-flow # /!\\ ADAPT with appropriate name: this will create and checkout to branch feature/ git-flow feature start # OR git checkout -b feature/ Commit your changes once satisfied with them git add [ ... ] git commit -s -m 'Added some feature' Push to the feature branch and publish it # IF you have installed git-flow # /!\\ ADAPT accordingly git-flow feature publish # OR git push -u origin feature/ Create a new Pull Request to submit your changes to the ULHPC team. Commit first! # check what would be put in the pull request git request-pull master ./ # Open Pull Request from web interface # Github: Open 'new pull request' # Base = feature/, compare = master Pull request will be reviewed, eventually with comments/suggestion for modifications -- see official doc you may need to apply new commits to resolve the comments -- remember to mention the pull request in the commit message with the prefix ' [PR#] ' (Ex: [PR#5] ) in your commit message cd /path/to/ulhpc-docs git checkout feature/ git pull # [...] git add [ ... ] # /!\\ ADAPT Pull Request ID accordingly git commit -s -m '[PR#] ...' After your pull request has been reviewed and merged , you can safely delete the feature branch. 
# Adapt accordingly git checkout feature/ # Eventually, if needed make sync-upstream git-flow feature finish # feature branch 'feature/' will be merged into 'devel' # # feature branch 'feature/' will be locally deleted # # you will checkout back to the 'master' branch git push origin --delete feature/ # /!\\ WARNING: Ensure you delete the CORRECT remote branch git push # sync master branch","title":"Default Git workflow for contributions"},{"location":"contributing/#ulhpc-git-workflow","text":"Throughout all its projects, the ULHPC team has enforced a stricter workflow for Git repository summarized in the below figure: The main concepts inherited from both advanced workflows ( Git-flow and Github Flow ) are listed below: The central repository holds two main branches with an infinite lifetime: production : the production-ready branch, used for the deployed version of the documentation. devel | master | main ( master in this case): the main (master) branch where the latest developments intervene (name depends on repository purpose). This is the default branch you get when you clone the repository. You should always setup your local copy of the repository with make setup ensure also you have installed the gitflow extension ensure you are properly made the initial configuration of git -- see also sample .gitconfig In compliment to the Github Flow described above, several additional operations are facilitated by the root Makefile : Initial setup of the repository with make setup Release of a new version of this repository with make start_bump_{patch,minor,major} and make release this action is managed by the ULHPC team according to the semantic versioning scheme implemented within this this project.","title":"ULHPC Git Workflow"},{"location":"contributing/versioning/","text":"The operation consisting of releasing a new version of this repository is automated by a set of tasks within the root Makefile . In this context, a version number have the following format: ..[-b] where: < major > corresponds to the major version number < minor > corresponds to the minor version number < patch > corresponds to the patching version number (eventually) < build > states the build number i.e. the total number of commits within the devel branch. Example: `1.0.0-b28`. VERSION file The current version number is stored in the root file VERSION . /!\\ IMPORTANT: NEVER MAKE ANY MANUAL CHANGES TO THIS FILE ULHPC/docs repository release Only the ULHPC team is allowed to perform the releasing operations (and push to the production branch). By default, the main documentation website is built against the production branch. For more information on the version, run: $> make versioninfo ULHPC Team procedure for repository release If a new version number such be bumped, the following command is issued: make start_bump_ { major,minor,patch } This will start the release process for you using git-flow within the release/ branch - see also Git(hub) flow . Once the last changes are committed, the release becomes effective by running: make release It will finish the release using git-flow , create the appropriate tag in the production branch and merge all things the way they should be in the master branch.","title":"Semantic Versioning"},{"location":"data/backups/","text":"Backups \u00b6 Danger All ULHPC users should back up important files on a regular basis. Ultimately, it is your responsibility to protect yourself from data loss. The backups are only accessible by HPC staff, for disaster recovery purposes only. 
More precisions can be requested via a support request. Directories on the ULHPC clusters infrastructure \u00b6 For computation purposes, ULHPC users can use multiple storages: home, scratch and projects. Note however that the HPC Platform does not have the infrastructure to backup all of them, see details below. Directory Path Backup location Frequency Retention home directories $HOME not backed up scratch $SCRATCH not backed up projects $PROJECTWORK CDC, Belval Weekly one backup per week of the backup directory ONLY ( $PROJECT/backup/ ) Directories on the SIU Isilon infrastructure \u00b6 Projects stored on the Isilon filesystem are snapshotted weekly, the snapshots are kept for 10 days. Danger Snapshots are not a real backup . It does not protect you against a system failure, it will only permit to recover some files in case of accidental deletion Each project directory, in /mnt/isilon/projects/ contains a hidden sub-directory .snapshot : .snapshot is invisible to ls , ls -a , find and similar commands can be browsed normally after cd .snapshot files cannot be created, deleted or edited in snapshots files can only be copied out of a snapshot Services \u00b6 Name Backup location Frequency Retention hpc.uni.lu (pad, privatebin) CDC, Belval Daily last 7 daily backups, one per month for the last 6 months Restore \u00b6 If you require a restoration of lost data that cannot be accomplished via the snapshots capability, please create a new request on Service Now portal , with pathnames and timestamps of the missing data. Such restore requests may take a few days to complete. Backup Tools \u00b6 In practice, the ULHPC backup infrastructure is fully puppetized and make use of several tools facilitating the operations: backupninja , which allows you to coordinate system backup by dropping a few simple configuration files into /etc/backup.d/ a forked version of bontmia , which stands for \"Backup Over Network To Multiple Incremental Archives\" BorgBackup , a deduplicating backup program supporting compression and authenticated encryption. several internal scripts to pilot LVM snapshots/backup/restore operations","title":"Backups"},{"location":"data/backups/#backups","text":"Danger All ULHPC users should back up important files on a regular basis. Ultimately, it is your responsibility to protect yourself from data loss. The backups are only accessible by HPC staff, for disaster recovery purposes only. More precisions can be requested via a support request.","title":"Backups"},{"location":"data/backups/#directories-on-the-ulhpc-clusters-infrastructure","text":"For computation purposes, ULHPC users can use multiple storages: home, scratch and projects. Note however that the HPC Platform does not have the infrastructure to backup all of them, see details below. Directory Path Backup location Frequency Retention home directories $HOME not backed up scratch $SCRATCH not backed up projects $PROJECTWORK CDC, Belval Weekly one backup per week of the backup directory ONLY ( $PROJECT/backup/ )","title":"Directories on the ULHPC clusters infrastructure"},{"location":"data/backups/#directories-on-the-siu-isilon-infrastructure","text":"Projects stored on the Isilon filesystem are snapshotted weekly, the snapshots are kept for 10 days. Danger Snapshots are not a real backup . 
It does not protect you against a system failure, it will only permit to recover some files in case of accidental deletion Each project directory, in /mnt/isilon/projects/ contains a hidden sub-directory .snapshot : .snapshot is invisible to ls , ls -a , find and similar commands can be browsed normally after cd .snapshot files cannot be created, deleted or edited in snapshots files can only be copied out of a snapshot","title":"Directories on the SIU Isilon infrastructure"},{"location":"data/backups/#services","text":"Name Backup location Frequency Retention hpc.uni.lu (pad, privatebin) CDC, Belval Daily last 7 daily backups, one per month for the last 6 months","title":"Services"},{"location":"data/backups/#restore","text":"If you require a restoration of lost data that cannot be accomplished via the snapshots capability, please create a new request on Service Now portal , with pathnames and timestamps of the missing data. Such restore requests may take a few days to complete.","title":"Restore"},{"location":"data/backups/#backup-tools","text":"In practice, the ULHPC backup infrastructure is fully puppetized and make use of several tools facilitating the operations: backupninja , which allows you to coordinate system backup by dropping a few simple configuration files into /etc/backup.d/ a forked version of bontmia , which stands for \"Backup Over Network To Multiple Incremental Archives\" BorgBackup , a deduplicating backup program supporting compression and authenticated encryption. several internal scripts to pilot LVM snapshots/backup/restore operations","title":"Backup Tools"},{"location":"data/encryption/","text":"Sensitive Data Protection \u00b6 The advent of the EU General Data Protection Regulation ( GDPR ) permitted to highlight the need to protect sensitive information from leakage. GPG \u00b6 A basic approach relies on GPG to encrypt single files -- see this tutorial for more details # File encryption $ gpg --encrypt [ -r ] # => produces .gpg $ rm -f # /!\\ WARNING: encryption DOES NOT delete the input (clear-text) file $ gpg --armor --detach-sign # Generate signature file .asc # Decryption $ gpg --verify .asc # (eventually but STRONGLY encouraged) verify signature file $ gpg --decrypt .gpg # Decrypt PGP encrypted file One drawback is that files need to be completely decrypted for processing Tutorial: Using GnuPG aka Gnu Privacy Guard aka GPG File Encryption Frameworks (EncFS, GoCryptFS...) \u00b6 In contrast to disk-encryption software that operate on whole disks (TrueCrypt, dm-crypt etc), file encryption operates on individual files that can be backed up or synchronised easily, especially within a Git repository. Comparison matrix gocryptfs , aspiring successor of EncFS written in Go EncFS , mature with known security issues eCryptFS , integrated into the Linux kernel Cryptomator , strong cross-platform support through Java and WebDAV securefs , a cross-platform project implemented in C++. CryFS , result of a master thesis at the KIT University that uses chunked storage to obfuscate file sizes. 
Assuming you are working from /path/to/my/project , your workflow (mentionned below for EncFS, but it can be adpated to all the other tools) operated on encrypted vaults and would be as follows: ( eventually ) if operating within a working copy of a git repository, you should ignore the mounting directory (ex: vault/* ) in the root .gitignore of the repository this ensures neither you nor a collaborator will commit any unencrypted version of a file by mistake you commit only the EncFS / GocryptFS / eCryptFS / Cryptomator / securefs / CryFS raw directory (ex: .crypt/ ) in your repository. Thus only encrypted form or your files are commited You create the EncFS / GocryptFS / eCryptFS / Cryptomator / securefs / CryFS encrypted vault You prepare macros/scripts/Makefile/Rakefile tasks to lock/unlock the vault on demand Here are for instance a few example of these operations in live to create a encrypted vault: EncFS $ cd /path/to/my/project $ rawdir = .crypt # /!\\ ADAPT accordingly $ mountdir = vault # /!\\ ADAPT accordingly # # (eventually) Ignore the mount dir $ echo $mountdir >> .gitignore ### EncFS: Creation of an EncFS vault (only once) $ encfs --standard $rawdir $mountdir GoCryptFS you SHOULD be on a computing node to use GoCryptFS . $ cd /path/to/my/project $ rawdir = .crypt # /!\\ ADAPT accordingly $ mountdir = vault # /!\\ ADAPT accordingly # # (eventually) Ignore the mount dir $ echo $mountdir >> .gitignore ### GoCryptFS: load the module - you SHOULD be on a computing node $ module load tools/gocryptfs # Creation of a GoCryptFS vault (only once) $> gocryptfs -init $rawdir Then you can mount/unmount the vault as follows: Tool OS Opening/Unlocking the vault Closing/locking the vault EncFS Linux encfs -o nonempty --idle=60 $rawdir $mountdir fusermount -u $mountdir EncFS Mac OS encfs --idle=60 $rawdir $mountdir umount $mountdir GocryptFS gocryptfs $rawdir $mountdir as above The fact that GoCryptFS is available as a module brings the advantage that it can be mounted in a view folder ( vault/ ) where you can read and write the unencrypted files, which is Automatically unmounted upon job termination. File Encryption using SSH [RSA] Key Pairs \u00b6 Man pages: openssl rsa , openssl rsautl and openssl enc Tutorial: Encryption with RSA Key Pairs Tutorial: How to encrypt a big file using OpenSSL and someone's public key OpenSSL Command-Line HOWTO , in particular the section 'How do I simply encrypt a file?' If you encrypt/decrypt files or messages on more than a one-off occasion, you should really use GnuPGP as that is a much better suited tool for this kind of operations. But if you already have someone's public SSH key, it can be convenient to use it, and it is safe. Warning The below instructions are NOT compliant with the new OpenSSH format which is used for storing encrypted (or unencrypted) RSA, EcDSA and Ed25519 keys (among others) when you use the -o option of ssh-keygen . You can recognize these keys by the fact that the private SSH key ~/.ssh/id_rsa starts with - ----BEGIN OPENSSH PRIVATE KEY----- Encrypt a file using a public SSH key \u00b6 (eventually) SSH RSA public key conversion to PEM PKCS8 OpenSSL encryption/decryption operations performed using the RSA algorithm relies on keys following the PEM format 1 (ideally in the PKCS#8 format). It is possible to convert OpenSSH public keys (private ones are already compliant) to the PEM PKCS8 format (a more secure format). For that one can either use the ssh-keygen or the openssl commands, the first one being recomm ended. 
# Convert the public key of your collaborator to the PEM PKCS8 format (a more secure format) $ ssh-keygen -f id_dst_rsa.pub -e -m pkcs8 > id_dst_rsa.pkcs8.pub # OR use OpenSSL for that... $ openssl rsa -in id_dst_rsa -pubout -outform PKCS8 > id_dst_rsa.pkcs8.pub Note that you don't actually need to save the PKCS#8 version of his public key file -- the below command will make this conversion on demand. Generate a 256 bit (32 byte) random symmetric key There is a limit to the maximum length of a message i.e. size of a file that can be encrypted using asymmetric RSA public key encryption keys (which is what SSH ke ys are). For this reason, you should better rely on a 256 bit key to use for symmetric AES encryption and then encrypt/decrypt that symmetric AES key with the asymmetric RSA k eys This is how encrypted connections usually work, by the way. Generate the unique symmetric key key.bin of 32 bytes ( i.e. 256 bit) as follows: openssl rand -base64 32 -out key.bin You should only use this key once . If you send something else to the recipient at another time, you should regenerate another key. Encrypt the (potentially big) file with the symmetric key openssl enc -aes-256-cbc -salt -in bigdata.dat -out bigdata.dat.enc -pass file:./key.bin Indicative performance of OpenSSL Encryption time You can quickly generate random files of 1 or 10 GiB size as follows: # Random generation of a 1GiB file $ dd if = /dev/urandom of = bigfile_1GiB.dat bs = 64M count = 16 iflag = fullblock # Random generation of a 1GiB file $ dd if = /dev/urandom of = bigfile_10GiB.dat bs = 64M count = 160 iflag = fullblock An indicated encryption time taken for above generated random file on a local laptop (Mac OS X, local filesystem over SSD) is proposed in the below table, using openssl enc -aes-256-cbc -salt -in bigfile_GiB.dat -out bigfile_GiB.dat.enc -pass file:./key.bin File size Encryption time bigfile_1GiB.dat 1 GiB 0m5.395s bigfile_10GiB.dat 10 GiB 2m50.214s Encrypt the symmetric key, using your collaborator public SSH key in PKCS8 format: $ openssl rsautl -encrypt -pubin -inkey < ( ssh-keygen -e -m PKCS8 -f id_dst_rsa.pub ) -in key.bin -out key.bin.enc # OR, if you have a copy of the PKCS#8 version of his public key $ openssl rsautl -encrypt -pubin -inkey id_dst_rsa.pkcs8.pub -in key.bin -out key.bin.enc Delete the unencrypted symmetric key as you don't need it any more (and you should not use it anymore) $> rm key.bin Now you can transfer the *.enc files i.e. send the (potentially big) encrypted file .enc and the encrypted symmetric key ( i.e. key.bin.enc ) to the recipient _i.e. your collaborator. Note that you are encouraged to send the encrypted file and the encrypted key separately. Although it's not absolutely necessary, it's good practice to separate the two. If you're allowed to, transfer them by SSH to an agreed remote server. It is even safe to upload the files to a public file sharing service and tell the recipient to download them from there. 
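Putting the sender-side steps above together, a minimal end-to-end sketch (the file names and the recipient's public key id_dst_rsa.pub are illustrative):
# /!\\ ADAPT the file names and the recipient's public SSH key accordingly
$ openssl rand -base64 32 -out key.bin         # one-time symmetric key
$ openssl enc -aes-256-cbc -salt -in bigdata.dat -out bigdata.dat.enc -pass file:./key.bin
$ openssl rsautl -encrypt -pubin -inkey <(ssh-keygen -e -m PKCS8 -f id_dst_rsa.pub) -in key.bin -out key.bin.enc
$ rm key.bin                                   # never keep nor reuse the clear-text key
# Finally transfer bigdata.dat.enc and key.bin.enc to the recipient, ideally separately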
Decrypt a file encrypted with a public SSH key \u00b6 First decrypt the symmetric key using the SSH private counterpart: # Decrypt the key -- /!\\ ADAPT the path to the private SSH key $ openssl rsautl -decrypt -inkey ~/.ssh/id_rsa -in key.bin.enc -out key.bin Enter pass phrase for ~/.ssh/id_rsa: Now the (potentially big) file can be decrypted, using the symmetric key: openssl enc -d -aes-256-cbc -in bigdata.dat.enc -out bigdata.dat -pass file:./key.bin Misc Q&D for small files \u00b6 For a 'quick and dirty' encryption/decryption of small files: # Encrypt $ openssl rsautl -encrypt -inkey < ( ssh-keygen -e -m PKCS8 -f ~/.ssh/id_rsa.pub ) -pubin -in .dat -out .dat.enc # Decrypt $ openssl rsautl -decrypt -inkey ~/.ssh/id_rsa -in .dat.enc -out .dat Data Encryption in Git Repository with git-crypt \u00b6 It is of course even more important in the context of git repositories , whether public or private, since the disposal of a working copy of the repository enable the access to the full history of commits, in particular the ones eventually done by mistake ( git commit -a ) that used to include sensitive files. That's where git-crypt comes for help. It is an open source, command line utility that empowers developers to protect specific files within a git repository. git-crypt enables transparent encryption and decryption of files in a git repository. Files which you choose to protect are encrypted when committed, and decrypted when checked out. git-crypt lets you freely share a repository containing a mix of public and private content. git-crypt gracefully degrades, so developers without the secret key can still clone and commit to a repository with encrypted files. This lets you store your secret material (such as keys or passwords) in the same repository as your code, without requiring you to lock down your entire repository. The biggest advantage of git-crypt is that private data and public data can live in the same location. Using Git-crypt to Protect Sensitive Data PetaSuite Protect \u00b6 PetaSuite is a compression suite for Next-Generation-Sequencing (NGS) data. It consists of a command-line tool and a user-mode library. The command line tool performs compression and decompression operations on files. The user-mode library allows other tools and pipelines to transparently access the NGS data in their original file formats. PetaSuite is used within LCSB and provides the following features: Encrypt and compress genomic data Encryption keys and access managed centrally Decryption and decompression on-the-fly using a library that intercepts all FS access This is a commercial software -- contact lcsb.software@uni.lu if you would like to use it Defined in RFCs 1421 through 1424 , is a container format for public/private keys or certificates used preferentially by open-source software such as OpenSSL . The name is from Privacy Enhanced Mail (PEM) (a failed method for secure email, but the container format it used lives on, and is a base64 translation of the x509 ASN.1 keys. 
\u21a9","title":"Sensitive Data Protection"},{"location":"data/encryption/#sensitive-data-protection","text":"The advent of the EU General Data Protection Regulation ( GDPR ) permitted to highlight the need to protect sensitive information from leakage.","title":"Sensitive Data Protection"},{"location":"data/encryption/#gpg","text":"A basic approach relies on GPG to encrypt single files -- see this tutorial for more details # File encryption $ gpg --encrypt [ -r ] # => produces .gpg $ rm -f # /!\\ WARNING: encryption DOES NOT delete the input (clear-text) file $ gpg --armor --detach-sign # Generate signature file .asc # Decryption $ gpg --verify .asc # (eventually but STRONGLY encouraged) verify signature file $ gpg --decrypt .gpg # Decrypt PGP encrypted file One drawback is that files need to be completely decrypted for processing Tutorial: Using GnuPG aka Gnu Privacy Guard aka GPG","title":"GPG"},{"location":"data/encryption/#file-encryption-frameworks-encfs-gocryptfs","text":"In contrast to disk-encryption software that operate on whole disks (TrueCrypt, dm-crypt etc), file encryption operates on individual files that can be backed up or synchronised easily, especially within a Git repository. Comparison matrix gocryptfs , aspiring successor of EncFS written in Go EncFS , mature with known security issues eCryptFS , integrated into the Linux kernel Cryptomator , strong cross-platform support through Java and WebDAV securefs , a cross-platform project implemented in C++. CryFS , result of a master thesis at the KIT University that uses chunked storage to obfuscate file sizes. Assuming you are working from /path/to/my/project , your workflow (mentionned below for EncFS, but it can be adpated to all the other tools) operated on encrypted vaults and would be as follows: ( eventually ) if operating within a working copy of a git repository, you should ignore the mounting directory (ex: vault/* ) in the root .gitignore of the repository this ensures neither you nor a collaborator will commit any unencrypted version of a file by mistake you commit only the EncFS / GocryptFS / eCryptFS / Cryptomator / securefs / CryFS raw directory (ex: .crypt/ ) in your repository. Thus only encrypted form or your files are commited You create the EncFS / GocryptFS / eCryptFS / Cryptomator / securefs / CryFS encrypted vault You prepare macros/scripts/Makefile/Rakefile tasks to lock/unlock the vault on demand Here are for instance a few example of these operations in live to create a encrypted vault: EncFS $ cd /path/to/my/project $ rawdir = .crypt # /!\\ ADAPT accordingly $ mountdir = vault # /!\\ ADAPT accordingly # # (eventually) Ignore the mount dir $ echo $mountdir >> .gitignore ### EncFS: Creation of an EncFS vault (only once) $ encfs --standard $rawdir $mountdir GoCryptFS you SHOULD be on a computing node to use GoCryptFS . 
$ cd /path/to/my/project $ rawdir = .crypt # /!\\ ADAPT accordingly $ mountdir = vault # /!\\ ADAPT accordingly # # (eventually) Ignore the mount dir $ echo $mountdir >> .gitignore ### GoCryptFS: load the module - you SHOULD be on a computing node $ module load tools/gocryptfs # Creation of a GoCryptFS vault (only once) $> gocryptfs -init $rawdir Then you can mount/unmount the vault as follows: Tool OS Opening/Unlocking the vault Closing/locking the vault EncFS Linux encfs -o nonempty --idle=60 $rawdir $mountdir fusermount -u $mountdir EncFS Mac OS encfs --idle=60 $rawdir $mountdir umount $mountdir GocryptFS gocryptfs $rawdir $mountdir as above The fact that GoCryptFS is available as a module brings the advantage that it can be mounted in a view folder ( vault/ ) where you can read and write the unencrypted files, which is Automatically unmounted upon job termination.","title":"File Encryption Frameworks (EncFS, GoCryptFS...)"},{"location":"data/encryption/#file-encryption-using-ssh-rsa-key-pairs","text":"Man pages: openssl rsa , openssl rsautl and openssl enc Tutorial: Encryption with RSA Key Pairs Tutorial: How to encrypt a big file using OpenSSL and someone's public key OpenSSL Command-Line HOWTO , in particular the section 'How do I simply encrypt a file?' If you encrypt/decrypt files or messages on more than a one-off occasion, you should really use GnuPGP as that is a much better suited tool for this kind of operations. But if you already have someone's public SSH key, it can be convenient to use it, and it is safe. Warning The below instructions are NOT compliant with the new OpenSSH format which is used for storing encrypted (or unencrypted) RSA, EcDSA and Ed25519 keys (among others) when you use the -o option of ssh-keygen . You can recognize these keys by the fact that the private SSH key ~/.ssh/id_rsa starts with - ----BEGIN OPENSSH PRIVATE KEY-----","title":"File Encryption using SSH [RSA] Key Pairs"},{"location":"data/encryption/#encrypt-a-file-using-a-public-ssh-key","text":"(eventually) SSH RSA public key conversion to PEM PKCS8 OpenSSL encryption/decryption operations performed using the RSA algorithm relies on keys following the PEM format 1 (ideally in the PKCS#8 format). It is possible to convert OpenSSH public keys (private ones are already compliant) to the PEM PKCS8 format (a more secure format). For that one can either use the ssh-keygen or the openssl commands, the first one being recomm ended. # Convert the public key of your collaborator to the PEM PKCS8 format (a more secure format) $ ssh-keygen -f id_dst_rsa.pub -e -m pkcs8 > id_dst_rsa.pkcs8.pub # OR use OpenSSL for that... $ openssl rsa -in id_dst_rsa -pubout -outform PKCS8 > id_dst_rsa.pkcs8.pub Note that you don't actually need to save the PKCS#8 version of his public key file -- the below command will make this conversion on demand. Generate a 256 bit (32 byte) random symmetric key There is a limit to the maximum length of a message i.e. size of a file that can be encrypted using asymmetric RSA public key encryption keys (which is what SSH ke ys are). For this reason, you should better rely on a 256 bit key to use for symmetric AES encryption and then encrypt/decrypt that symmetric AES key with the asymmetric RSA k eys This is how encrypted connections usually work, by the way. Generate the unique symmetric key key.bin of 32 bytes ( i.e. 256 bit) as follows: openssl rand -base64 32 -out key.bin You should only use this key once . 
If you send something else to the recipient at another time, you should regenerate another key. Encrypt the (potentially big) file with the symmetric key openssl enc -aes-256-cbc -salt -in bigdata.dat -out bigdata.dat.enc -pass file:./key.bin Indicative performance of OpenSSL Encryption time You can quickly generate random files of 1 or 10 GiB size as follows: # Random generation of a 1GiB file $ dd if = /dev/urandom of = bigfile_1GiB.dat bs = 64M count = 16 iflag = fullblock # Random generation of a 1GiB file $ dd if = /dev/urandom of = bigfile_10GiB.dat bs = 64M count = 160 iflag = fullblock An indicated encryption time taken for above generated random file on a local laptop (Mac OS X, local filesystem over SSD) is proposed in the below table, using openssl enc -aes-256-cbc -salt -in bigfile_GiB.dat -out bigfile_GiB.dat.enc -pass file:./key.bin File size Encryption time bigfile_1GiB.dat 1 GiB 0m5.395s bigfile_10GiB.dat 10 GiB 2m50.214s Encrypt the symmetric key, using your collaborator public SSH key in PKCS8 format: $ openssl rsautl -encrypt -pubin -inkey < ( ssh-keygen -e -m PKCS8 -f id_dst_rsa.pub ) -in key.bin -out key.bin.enc # OR, if you have a copy of the PKCS#8 version of his public key $ openssl rsautl -encrypt -pubin -inkey id_dst_rsa.pkcs8.pub -in key.bin -out key.bin.enc Delete the unencrypted symmetric key as you don't need it any more (and you should not use it anymore) $> rm key.bin Now you can transfer the *.enc files i.e. send the (potentially big) encrypted file .enc and the encrypted symmetric key ( i.e. key.bin.enc ) to the recipient _i.e. your collaborator. Note that you are encouraged to send the encrypted file and the encrypted key separately. Although it's not absolutely necessary, it's good practice to separate the two. If you're allowed to, transfer them by SSH to an agreed remote server. It is even safe to upload the files to a public file sharing service and tell the recipient to download them from there.","title":"Encrypt a file using a public SSH key"},{"location":"data/encryption/#decrypt-a-file-encrypted-with-a-public-ssh-key","text":"First decrypt the symmetric key using the SSH private counterpart: # Decrypt the key -- /!\\ ADAPT the path to the private SSH key $ openssl rsautl -decrypt -inkey ~/.ssh/id_rsa -in key.bin.enc -out key.bin Enter pass phrase for ~/.ssh/id_rsa: Now the (potentially big) file can be decrypted, using the symmetric key: openssl enc -d -aes-256-cbc -in bigdata.dat.enc -out bigdata.dat -pass file:./key.bin","title":"Decrypt a file encrypted with a public SSH key"},{"location":"data/encryption/#misc-qd-for-small-files","text":"For a 'quick and dirty' encryption/decryption of small files: # Encrypt $ openssl rsautl -encrypt -inkey < ( ssh-keygen -e -m PKCS8 -f ~/.ssh/id_rsa.pub ) -pubin -in .dat -out .dat.enc # Decrypt $ openssl rsautl -decrypt -inkey ~/.ssh/id_rsa -in .dat.enc -out .dat","title":"Misc Q&D for small files"},{"location":"data/encryption/#data-encryption-in-git-repository-with-git-crypt","text":"It is of course even more important in the context of git repositories , whether public or private, since the disposal of a working copy of the repository enable the access to the full history of commits, in particular the ones eventually done by mistake ( git commit -a ) that used to include sensitive files. That's where git-crypt comes for help. It is an open source, command line utility that empowers developers to protect specific files within a git repository. 
git-crypt enables transparent encryption and decryption of files in a git repository. Files which you choose to protect are encrypted when committed, and decrypted when checked out. git-crypt lets you freely share a repository containing a mix of public and private content. git-crypt gracefully degrades, so developers without the secret key can still clone and commit to a repository with encrypted files. This lets you store your secret material (such as keys or passwords) in the same repository as your code, without requiring you to lock down your entire repository. The biggest advantage of git-crypt is that private data and public data can live in the same location. Using Git-crypt to Protect Sensitive Data","title":"Data Encryption in Git Repository with git-crypt"},{"location":"data/encryption/#petasuite-protect","text":"PetaSuite is a compression suite for Next-Generation-Sequencing (NGS) data. It consists of a command-line tool and a user-mode library. The command line tool performs compression and decompression operations on files. The user-mode library allows other tools and pipelines to transparently access the NGS data in their original file formats. PetaSuite is used within LCSB and provides the following features: Encrypt and compress genomic data Encryption keys and access managed centrally Decryption and decompression on-the-fly using a library that intercepts all FS access This is a commercial software -- contact lcsb.software@uni.lu if you would like to use it Defined in RFCs 1421 through 1424 , is a container format for public/private keys or certificates used preferentially by open-source software such as OpenSSL . The name is from Privacy Enhanced Mail (PEM) (a failed method for secure email, but the container format it used lives on, and is a base64 translation of the x509 ASN.1 keys. \u21a9","title":"PetaSuite Protect"},{"location":"data/gdpr/","text":"UL HPC Acceptable Use Policy (AUP) (pdf) Warning Personal data is/may be visible, accessible or handled: directly on the HPC clusters through Resource and Job Management System (RJMS) tools (Slurm) and associated monitoring interfaces through service portals (like OpenOnDemand ) on code management portals such GitLab , GitHub on secondary storage systems used within the University such as Atlas, DropIT, etc. Data Use \u00b6 Use of UL HPC data storage resources (file systems, data storage tiers, backup, etc.) should be used only for work directly related to the projects for which the resources were requested and granted, and primarily to advance University\u2019s missions of education and research. Use of UL HPC data resources for personal activities is prohibited. The UL HPC Team maintains up-to-date documentation on its data storage resources and their proper use, and provides regular training and support to users. Users assume the responsibility for following the documentation, training sessions and best practice guides in order to understand the proper and considerate use of the UL HPC data storage resources. Authors/generators/owners of information or data are responsible for its correct categorization as sensitive or non-sensitive. Owners of sensitive information are responsible for its secure handling, transmission, processing, storage, and disposal on the UL HPC systems. The UL HPC Team recommends use of encryption to protect the data from unauthorized access. Data Protection inquiries, especially as regards sensitive information processing can be directed to the Data Protection Officer . 
Users are prohibited from intentionally accessing, modifying or deleting data they do not own or have not been granted explicit permission to access. Users are responsible to ensure the appropriate level of protection, backup and integrity checks on their critical data and applications. It is their responsibility to set appropriate access controls for the data they bring, process and generate on UL HPC facilities. In the event of system failure or malicious actions, UL HPC makes no guarantee against loss of data or that user or project data can be recovered nor that it cannot be accessed, changed, or deleted by another individual. Personal information agreement \u00b6 UL HPC retains the right to monitor all activities on its facilities. Users acknowledge that data regarding their activity on UL HPC facilities will be collected. The data is collected (e.g. by the Slurm workload manager) for utilization accounting and reporting purposes, and for the purpose of understanding typical patterns of user\u2019s behavior on the system in order to further improve the services provided by UL HPC. Another goal is to identify intrusions, misuse, security incidents or illegal actions in order to protect UL HPC users and facilities.. Users agree that this data may be processed to extract information contributing to the above stated purposes. Users agree that their name, surname, email address, affiliation, work place and phone numbers are processed by the UL HPC Team in order to provide HPC and associated services. Data Protection inquiries can be directed to the Data Protection Officer. Further information about Data Protection can be found at: https://wwwen.uni.lu/university/data_protection","title":"GDPR Compliance"},{"location":"data/gdpr/#data-use","text":"Use of UL HPC data storage resources (file systems, data storage tiers, backup, etc.) should be used only for work directly related to the projects for which the resources were requested and granted, and primarily to advance University\u2019s missions of education and research. Use of UL HPC data resources for personal activities is prohibited. The UL HPC Team maintains up-to-date documentation on its data storage resources and their proper use, and provides regular training and support to users. Users assume the responsibility for following the documentation, training sessions and best practice guides in order to understand the proper and considerate use of the UL HPC data storage resources. Authors/generators/owners of information or data are responsible for its correct categorization as sensitive or non-sensitive. Owners of sensitive information are responsible for its secure handling, transmission, processing, storage, and disposal on the UL HPC systems. The UL HPC Team recommends use of encryption to protect the data from unauthorized access. Data Protection inquiries, especially as regards sensitive information processing can be directed to the Data Protection Officer . Users are prohibited from intentionally accessing, modifying or deleting data they do not own or have not been granted explicit permission to access. Users are responsible to ensure the appropriate level of protection, backup and integrity checks on their critical data and applications. It is their responsibility to set appropriate access controls for the data they bring, process and generate on UL HPC facilities. 
In the event of system failure or malicious actions, UL HPC makes no guarantee against loss of data or that user or project data can be recovered nor that it cannot be accessed, changed, or deleted by another individual.","title":"Data Use"},{"location":"data/gdpr/#personal-information-agreement","text":"UL HPC retains the right to monitor all activities on its facilities. Users acknowledge that data regarding their activity on UL HPC facilities will be collected. The data is collected (e.g. by the Slurm workload manager) for utilization accounting and reporting purposes, and for the purpose of understanding typical patterns of user\u2019s behavior on the system in order to further improve the services provided by UL HPC. Another goal is to identify intrusions, misuse, security incidents or illegal actions in order to protect UL HPC users and facilities.. Users agree that this data may be processed to extract information contributing to the above stated purposes. Users agree that their name, surname, email address, affiliation, work place and phone numbers are processed by the UL HPC Team in order to provide HPC and associated services. Data Protection inquiries can be directed to the Data Protection Officer. Further information about Data Protection can be found at: https://wwwen.uni.lu/university/data_protection","title":"Personal information agreement"},{"location":"data/layout/","text":"Global Directory Structure \u00b6 ULHPC File Systems Overview \u00b6 Several File Systems co-exist on the ULHPC facility and are configured for different purposes. Each servers and computational resources has access to at least three different file systems with different levels of performance, permanence and available space summarized below Directory Env. file system backup /home/users/ $HOME GPFS/Spectrumscale no /work/projects/ - GPFS/Spectrumscale yes (partial, backup subdirectory) /scratch/users/ $SCRATCH Lustre no /mnt/isilon/projects/ - OneFS yes (live sync and snapshots) Global Home directory $HOME \u00b6 Home directories provide a convenient means for a user to have access to files such as dotfiles, source files, input files, configuration files regardless of the platform. Refer to your home directory using the environment variable $HOME whenever possible. The absolute path may change, but the value of $HOME will always be correct. Global Project directory $PROJECTHOME=/work/projects/ \u00b6 Project directories are intended for sharing data within a group of researchers, under /work/projects/ Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible. Global Scratch directory $SCRATCH \u00b6 The scratch area is a Lustre -based file system designed for high performance temporary storage of large files. It is thus intended to support large I/O for jobs that are being actively computed on the ULHPC systems. We recommend that you run your jobs, especially data intensive ones, from the ULHPC scratch file system. Refer to your scratch directory using the environment variable $SCRATCH whenever possible (which expands to /scratch/users/$(whoami) ). The scratch file system is shared via the Infiniband network of the ULHPC facility and is available from all nodes while being tuned for high performance. Project Cold-Data and Archives \u00b6 OneFS, A global low -performance Dell/EMC Isilon solution is used to host project data, and serve for backup and archival purposes. 
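As a quick check of where each of these storage areas resolves for your account, from a login or computing node (a minimal sketch; the run directory name is illustrative):
$ echo $HOME $SCRATCH $PROJECTHOME               # paths behind the environment variables
$ ls /mnt/isilon/projects                        # cold-data / archival project shares
$ mkdir -p $SCRATCH/myrun && cd $SCRATCH/myrun   # run I/O intensive jobs from scratch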
You will find them mounted under /mnt/isilon/projects .","title":"Global Directory Structure"},{"location":"data/layout/#global-directory-structure","text":"","title":"Global Directory Structure"},{"location":"data/layout/#ulhpc-file-systems-overview","text":"Several File Systems co-exist on the ULHPC facility and are configured for different purposes. Each servers and computational resources has access to at least three different file systems with different levels of performance, permanence and available space summarized below Directory Env. file system backup /home/users/ $HOME GPFS/Spectrumscale no /work/projects/ - GPFS/Spectrumscale yes (partial, backup subdirectory) /scratch/users/ $SCRATCH Lustre no /mnt/isilon/projects/ - OneFS yes (live sync and snapshots)","title":"ULHPC File Systems Overview"},{"location":"data/layout/#global-home-directory-home","text":"Home directories provide a convenient means for a user to have access to files such as dotfiles, source files, input files, configuration files regardless of the platform. Refer to your home directory using the environment variable $HOME whenever possible. The absolute path may change, but the value of $HOME will always be correct.","title":"Global Home directory $HOME"},{"location":"data/layout/#global-project-directory-projecthomeworkprojects","text":"Project directories are intended for sharing data within a group of researchers, under /work/projects/ Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible.","title":"Global Project directory $PROJECTHOME=/work/projects/"},{"location":"data/layout/#global-scratch-directory-scratch","text":"The scratch area is a Lustre -based file system designed for high performance temporary storage of large files. It is thus intended to support large I/O for jobs that are being actively computed on the ULHPC systems. We recommend that you run your jobs, especially data intensive ones, from the ULHPC scratch file system. Refer to your scratch directory using the environment variable $SCRATCH whenever possible (which expands to /scratch/users/$(whoami) ). The scratch file system is shared via the Infiniband network of the ULHPC facility and is available from all nodes while being tuned for high performance.","title":"Global Scratch directory $SCRATCH"},{"location":"data/layout/#project-cold-data-and-archives","text":"OneFS, A global low -performance Dell/EMC Isilon solution is used to host project data, and serve for backup and archival purposes. You will find them mounted under /mnt/isilon/projects .","title":"Project Cold-Data and Archives"},{"location":"data/project/","text":"Global Project directory $PROJECTHOME=/work/projects/ \u00b6 Project directories are intended for sharing data within a group of researchers, under /work/projects/ Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible. Research Project Allocations, Accounting and Reporting The Research Support and Accounting Departments of the University keep track of the list of research projects funded within the University. Starting 2021, a new procedure has been put in place to provide a detailed reporting of the HPC usage for such projects. 
As part of this process, the following actions are taken by the ULHPC team: a dedicated project account (normally the acronym of the project) is created for accounting purpose at the Slurm level (L3 account - see Account Hierarchy ); a dedicated project directory with the same name ( ) is created, allowing to share data within a group of project researchers, under $PROJECTHOME/ , i.e. , /work/projects/ You are then entitled to submit jobs associated to the project using -A such that the HPC usage is reported accurately. The ULHPC team will provide to the project PI (Principal Investigator) and the Research Support department a regular report detailing the corresponding HPC usage. In all cases, job billing under the conditions defined in the Job Accounting and Billing section may apply. New project directory \u00b6 You can request a new project directory under ServiceNow (HPC \u2192 Storage & projects \u2192 Request for a new project). Quotas and Backup Policies \u00b6 See quotas for detailed information about inode, space quotas, and file system purge policies. Your projects backup directories are backuped weekly, according to the policy detailed in the ULHPC backup policies . Access rights to project directory: Quota for clusterusers group in project directories is 0 !!! When a project is created, a group of the same name ( ) is also created and researchers allowed to collaborate on the project are made members of this group,which grant them access to the project directory. Be aware that your default group as a user is clusterusers which has ( on purpose ) a quota in project directories set to 0 . You thus need to ensure you always write data in your project directory using the group (instead of yoru default one.). This can be achieved by ensuring the setgid bit is set on all folders in the project directories: chmod g+s [...] When using rsync to transfer file toward the project directory /work/projects/ as destination, be aware that rsync will not use the correct permissions when copying files into your project directory. As indicated in the Data transfer section, you also need to: give new files the destination-default permissions with --no-p ( --no-perms ), and use the default group of the destination dir with --no-g ( --no-group ) (eventually) instruct rsync to preserve whatever executable permissions existed on the source file and aren't masked at the destination using --chmod=ug=rwX Your full rsync command becomes (adapt accordingly): rsync -avz {--update | --delete} --no-p --no-g [--chmod=ug=rwX] /work/projects//[...] For the same reason detailed above, in case you are using a build command or more generally any command meant to write data in your project directory /work/projects/ , you want to use the sg as follows: # /!\\ ADAPT accordingly sg -c \" [...]\" This is particularly important if you are building dedicated software with Easybuild for members of the project - you typically want to do it as follows: # /!\\ ADAPT accordingly sg -c \"eb [...] -r --rebuild -D\" # Dry-run - enforce using the '' group sg -c \"eb [...] 
-r --rebuild\" # Dry-run - enforce using the '' group Project directory modification \u00b6 You can request changes for your project directory (quotas extension, add/remove a group member) under ServiceNow : HPC \u2192 Storage & projects \u2192 Extend quota/Request information HPC \u2192 User access & accounts \u2192 Add/Remove user within project","title":"Project Data Management"},{"location":"data/project/#global-project-directory-projecthomeworkprojects","text":"Project directories are intended for sharing data within a group of researchers, under /work/projects/ Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible. Research Project Allocations, Accounting and Reporting The Research Support and Accounting Departments of the University keep track of the list of research projects funded within the University. Starting 2021, a new procedure has been put in place to provide a detailed reporting of the HPC usage for such projects. As part of this process, the following actions are taken by the ULHPC team: a dedicated project account (normally the acronym of the project) is created for accounting purpose at the Slurm level (L3 account - see Account Hierarchy ); a dedicated project directory with the same name ( ) is created, allowing to share data within a group of project researchers, under $PROJECTHOME/ , i.e. , /work/projects/ You are then entitled to submit jobs associated to the project using -A such that the HPC usage is reported accurately. The ULHPC team will provide to the project PI (Principal Investigator) and the Research Support department a regular report detailing the corresponding HPC usage. In all cases, job billing under the conditions defined in the Job Accounting and Billing section may apply.","title":"Global Project directory $PROJECTHOME=/work/projects/"},{"location":"data/project/#new-project-directory","text":"You can request a new project directory under ServiceNow (HPC \u2192 Storage & projects \u2192 Request for a new project).","title":"New project directory"},{"location":"data/project/#quotas-and-backup-policies","text":"See quotas for detailed information about inode, space quotas, and file system purge policies. Your projects backup directories are backuped weekly, according to the policy detailed in the ULHPC backup policies . Access rights to project directory: Quota for clusterusers group in project directories is 0 !!! When a project is created, a group of the same name ( ) is also created and researchers allowed to collaborate on the project are made members of this group,which grant them access to the project directory. Be aware that your default group as a user is clusterusers which has ( on purpose ) a quota in project directories set to 0 . You thus need to ensure you always write data in your project directory using the group (instead of yoru default one.). This can be achieved by ensuring the setgid bit is set on all folders in the project directories: chmod g+s [...] When using rsync to transfer file toward the project directory /work/projects/ as destination, be aware that rsync will not use the correct permissions when copying files into your project directory. 
As indicated in the Data transfer section, you also need to: give new files the destination-default permissions with --no-p ( --no-perms ), and use the default group of the destination dir with --no-g ( --no-group ) (eventually) instruct rsync to preserve whatever executable permissions existed on the source file and aren't masked at the destination using --chmod=ug=rwX Your full rsync command becomes (adapt accordingly): rsync -avz {--update | --delete} --no-p --no-g [--chmod=ug=rwX] /work/projects//[...] For the same reason detailed above, in case you are using a build command or more generally any command meant to write data in your project directory /work/projects/ , you want to use the sg as follows: # /!\\ ADAPT accordingly sg -c \" [...]\" This is particularly important if you are building dedicated software with Easybuild for members of the project - you typically want to do it as follows: # /!\\ ADAPT accordingly sg -c \"eb [...] -r --rebuild -D\" # Dry-run - enforce using the '' group sg -c \"eb [...] -r --rebuild\" # Dry-run - enforce using the '' group","title":"Quotas and Backup Policies"},{"location":"data/project/#project-directory-modification","text":"You can request changes for your project directory (quotas extension, add/remove a group member) under ServiceNow : HPC \u2192 Storage & projects \u2192 Extend quota/Request information HPC \u2192 User access & accounts \u2192 Add/Remove user within project","title":"Project directory modification"},{"location":"data/project_acl/","text":"Global Project quotas and backup policies See quotas for detailed information about inode, space quotas, and file system purge policies. Your projects backup directories are backuped weekly, according to the policy detailed in the ULHPC backup policies . Access rights to project directory: Quota for clusterusers group in project directories is 0 !!! When a project is created, a group of the same name ( ) is also created and researchers allowed to collaborate on the project are made members of this group,which grant them access to the project directory. Be aware that your default group as a user is clusterusers which has ( on purpose ) a quota in project directories set to 0 . You thus need to ensure you always write data in your project directory using the group (instead of yoru default one.). This can be achieved by ensuring the setgid bit is set on all folders in the project directories: chmod g+s [...] When using rsync to transfer file toward the project directory /work/projects/ as destination, be aware that rsync will not use the correct permissions when copying files into your project directory. As indicated in the Data transfer section, you also need to: give new files the destination-default permissions with --no-p ( --no-perms ), and use the default group of the destination dir with --no-g ( --no-group ) (eventually) instruct rsync to preserve whatever executable permissions existed on the source file and aren't masked at the destination using --chmod=ug=rwX Your full rsync command becomes (adapt accordingly): rsync -avz {--update | --delete} --no-p --no-g [--chmod=ug=rwX] /work/projects//[...] 
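As a concrete, hypothetical instance of the command above (the project name and paths are illustrative), copying a local results folder into a project directory while respecting the destination group and permissions:
# /!\\ ADAPT the project name and paths accordingly
$ rsync -avz --update --no-p --no-g --chmod=ug=rwX ~/results iris-cluster:/work/projects/myproject/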
For the same reason detailed above, in case you are using a build command or more generally any command meant to write data in your project directory /work/projects/ , you want to use the sg as follows: # /!\\ ADAPT accordingly sg -c \" [...]\" This is particularly important if you are building dedicated software with Easybuild for members of the project - you typically want to do it as follows: # /!\\ ADAPT accordingly sg -c \"eb [...] -r --rebuild -D\" # Dry-run - enforce using the '' group sg -c \"eb [...] -r --rebuild\" # Dry-run - enforce using the '' group","title":"Project acl"},{"location":"data/sharing/","text":"Security and Data Integrity \u00b6 Sharing data with other users must be done carefully. Permissions should be set to the minimum necessary to achieve the desired access. For instance, consider carefully whether it's really necessary before sharing write permssions on data. Be sure to have archived backups of any critical shared data. It is also important to ensure that private login secrets (such as SSH private keys or apache htaccess files) are NOT shared with other users (either intentionally or accidentally). Good practice is to keep things like this in a separare directory that is as locked down as possible. The very first protection is to maintain your Home with access rights 700 chmod 700 $HOME Sharing Data within ULHPC Facility \u00b6 Sharing with Other Members of Your Project \u00b6 We can setup a project directory with specific group read and write permissions, allowing to share data with other members of your project. Sharing with ULHPC Users Outside of Your Project \u00b6 Unix File Permissions \u00b6 You can share files and directories with ULHPC users outside of your project by adjusting the unix file permissions. We have an extensive write up of unix file permissions and how they work here . Sharing Data outside of ULHPC \u00b6 The IT service of the University can be contacted to easily and quickly share data over the web using a dedicated Data Transfer service. Open the appropriate ticket on the Service Now portal.","title":"Data Sharing"},{"location":"data/sharing/#security-and-data-integrity","text":"Sharing data with other users must be done carefully. Permissions should be set to the minimum necessary to achieve the desired access. For instance, consider carefully whether it's really necessary before sharing write permssions on data. Be sure to have archived backups of any critical shared data. It is also important to ensure that private login secrets (such as SSH private keys or apache htaccess files) are NOT shared with other users (either intentionally or accidentally). Good practice is to keep things like this in a separare directory that is as locked down as possible. 
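As a minimal illustration of such a locked-down directory for login secrets (standard unix permission bits, nothing ULHPC-specific):
# Keep private login secrets readable and writable by you only
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/id_rsa ~/.ssh/config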
The very first protection is to maintain your Home with access rights 700 chmod 700 $HOME","title":"Security and Data Integrity"},{"location":"data/sharing/#sharing-data-within-ulhpc-facility","text":"","title":"Sharing Data within ULHPC Facility"},{"location":"data/sharing/#sharing-with-other-members-of-your-project","text":"We can set up a project directory with specific group read and write permissions, allowing you to share data with other members of your project.","title":"Sharing with Other Members of Your Project"},{"location":"data/sharing/#sharing-with-ulhpc-users-outside-of-your-project","text":"","title":"Sharing with ULHPC Users Outside of Your Project"},{"location":"data/sharing/#unix-file-permissions","text":"You can share files and directories with ULHPC users outside of your project by adjusting the unix file permissions. We have an extensive write-up of unix file permissions and how they work here .","title":"Unix File Permissions"},{"location":"data/sharing/#sharing-data-outside-of-ulhpc","text":"The IT service of the University can be contacted to easily and quickly share data over the web using a dedicated Data Transfer service. Open the appropriate ticket on the Service Now portal.","title":"Sharing Data outside of ULHPC"},{"location":"data/transfer/","text":"Data Transfer to/from/within UL HPC Clusters \u00b6 Introduction \u00b6 Directories such as $HOME , $WORK or $SCRATCH are shared among the nodes of the cluster that you are using (including the login node) via shared filesystems (SpectrumScale, Lustre), meaning that: every file/directory pushed or created on the login node is available on the computing nodes every file/directory pushed or created on the computing nodes is available on the login node The two most common commands you can use for data transfers over SSH are: scp : for the full transfer of files and directories (only suitable for single files or directories of small/trivial size) rsync : a software application which synchronizes files and directories from one location to another while minimizing data transfer, as only the outdated or nonexistent elements are transferred (practically required for lengthy complex transfers, which are more likely to be interrupted in the middle). scp or rsync? While both ensure a secure transfer of the data within an encrypted tunnel, rsync should be preferred : as mentioned in the OpenSSH 8.0 release notes : \" The scp protocol is outdated , inflexible and not readily fixed . We recommend the use of more modern protocols like sftp and rsync for file transfer instead \". scp is also relatively slow when compared to rsync , as exhibited for instance in the sample Distem experiment below: You will find below notes on scp usage, but kindly prefer to use rsync . Consider scp as deprecated! Click nevertheless to get usage details scp (see scp(1) ) or secure copy is probably the easiest of all the methods. The basic syntax is as follows: scp [-P 8022] [-Cr] source_path destination_path the -P option specifies the SSH port to use (in this case 8022) the -C option activates the compression (actually, it passes the -C flag to ssh(1) to enable compression). the -r option tells scp to recursively copy entire directories (in this case, scp follows symbolic links encountered in the tree traversal). Please note that in this case, you must specify the source as a directory for this to work. 
The syntax for declaring a remote path is as follows on the cluster: yourlogin@iris-cluster:path/from/homedir Transfer from your local machine to the remote cluster login node For instance, let's assume you have a local directory ~/devel/myproject you want to transfer to the cluster, in your remote homedir. # /!\\ ADAPT yourlogin to... your ULHPC login $> scp -P 8022 -r ~/devel/myproject yourlogin@iris-cluster: This will transfer recursively your local directory ~/devel/myproject on the cluster login node (in your homedir). Note that if you configured (as advised elsewhere) the SSH connection in your ~/.ssh/config file, you can use a much simpler syntax: $> scp -r ~/devel/myproject iris-cluster: Transfer from the remote cluster front-end to your local machine Conversely, let's assume you want to retrieve the files ~/experiments/parallel_run/* $> scp -P 8022 yourlogin@iris-cluster:experiments/parallel_run/* /path/to/local/directory Again, if you configured the SSH connection in your ~/.ssh/config file, you can use a simpler syntax: $> scp iris-cluster:experiments/parallel_run/* /path/to/local/directory See the scp(1) man page or man scp for more details. Danger scp SHOULD NOT be used in the following cases: When you are copying more than a few files, as scp spawns a new process for each file and can be quite slow and resource intensive when copying a large number of files. When using the -r switch, scp does not know about symbolic links and will blindly follow them, even if it has already made a copy of the file. That can lead to scp copying an infinite amount of data and can easily fill up your hard disk (or worse, a system shared disk), so be careful. N.B. There are many alternative ways to transfer files in HPC platforms and you should check your options according to the problem at hand. Windows and OS X users may wish to transfer files from their systems to the clusters' login nodes with easy-to-use GUI applications such as: WinSCP (Windows only) FileZilla Client (Windows, OS X) Cyberduck (Windows, OS X) These applications will need to be configured to connect to the frontends with the same parameters as discussed on the SSH access page . Using rsync \u00b6 The clever alternative to scp is rsync , which has the advantage of transferring only the files which differ between the source and the destination. This feature is often referred to as fast incremental file transfer. Additionally, symbolic links can be preserved. The typical syntax of rsync (see rsync(1) ) for the cluster is similar to the one of scp : # /!\\ ADAPT and # From LOCAL directory (/path/to/local/source) toward REMOTE server rsync --rsh = 'ssh -p 8022' -avzu /path/to/local/source [ user@ ] hostname:/path/to/destination # Ex: from REMOTE server to LOCAL directory rsync --rsh = 'ssh -p 8022' -avzu [ user@ ] hostname:/path/to/source /path/to/local/destination the --rsh option specifies the connector to use (here SSH on port 8022) the -a option corresponds to the \"Archive\" mode. Most likely you should always keep this on as it preserves file permissions and does not follow symlinks. the -v option enables the verbose mode the -z option enable compression, this will compress each file as it gets sent over the pipe. This can greatly decrease time, depending on what sort of files you are copying. the -u option (or --update ) corresponds to an updating process which skips files that are newer on the receiver. At this level, you may prefer the more dangerous option --delete that deletes extraneous files from dest dirs. 
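Since --delete is destructive, it can be worth previewing its effect first with rsync's standard dry-run flag before running the real transfer (a minimal sketch; the paths are illustrative):
# Preview what would be transferred/deleted, then run for real
$ rsync -avzu --delete --dry-run ~/devel/myproject iris-cluster:
$ rsync -avzu --delete ~/devel/myproject iris-cluster: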
Just like scp , the syntax for qualifying a remote path is as follows on the cluster: yourlogin@iris-cluster:path/from/homedir Transfer from your local machine to the remote cluster \u00b6 Coming back to the previous examples, let's assume you have a local directory ~/devel/myproject you want to transfer to the cluster, in your remote homedir. In that case: # /!\\ ADAPT yourlogin to... your ULHPC login $> rsync --rsh = 'ssh -p 8022' -avzu ~/devel/myproject yourlogin@access-iris.uni.lu: This will synchronize your local directory ~/devel/myproject on the cluster front-end (in your homedir). Transfer to Iris, Aion or both? The above example target the access server of Iris. Actually, you could have targetted the access server of Aion: it doesn't matter since the storage is SHARED between both clusters. Note that if you configured (as advised above) your SSH connection in your ~/.ssh/config file with a dedicated SSH entry {iris,aion}-cluster , you can use a simpler syntax: $> rsync -avzu ~/devel/myproject iris-cluster: # OR (it doesn't matter) $> rsync -avzu ~/devel/myproject aion-cluster: Transfer from your local machine to a project directory on the remote cluster \u00b6 When transferring data to a project directory you should keep the group and group permissions imposed by the project directory and quota. Therefore you need to add the options --no-p --no-g to your rsync command: $> rsync -avP --no-p --no-g ~/devel/myproject iris-cluster:/work/projects/myproject/ Transfer from the remote cluster to your local machine \u00b6 Conversely, let's assume you want to synchronize (retrieve) the remote files ~/experiments/parallel_run/* on your local machine: # /!\\ ADAPT yourlogin to... your ULHPC login $> rsync --rsh = 'ssh -p 8022' -avzu yourlogin@access-iris.uni.lu:experiments/parallel_run /path/to/local/directory Again, if you configured the SSH connection in your ~/.ssh/config file, you can use a simpler syntax: $> rsync -avzu iris-cluster:experiments/parallel_run /path/to/local/directory # OR (it doesn't matter) $> rsync -avzu aion-cluster:experiments/parallel_run /path/to/local/directory As always, see the man page or man rsync for more details. Windows Subsystem for Linux (WSL) In WSL, the home directory in Linux virtual machines is not your home directory in Windows. If you want to access the files that you downloaded with rsync inside a Linux virtual machine, please consult the WSL documentation and the file system section in particular. Data Transfer within Project directories \u00b6 The ULHPC facility features a Global Project directory $PROJECTHOME hosted within the GPFS/SpecrumScale file-system. You have to pay a particular attention when using rsync to transfer data within your project directory as depicted below. Access rights to project directory: Quota for clusterusers group in project directories is 0 !!! When a project is created, a group of the same name ( ) is also created and researchers allowed to collaborate on the project are made members of this group,which grant them access to the project directory. Be aware that your default group as a user is clusterusers which has ( on purpose ) a quota in project directories set to 0 . You thus need to ensure you always write data in your project directory using the group (instead of yoru default one.). This can be achieved by ensuring the setgid bit is set on all folders in the project directories: chmod g+s [...] 
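If files were already created under the wrong group, here is a hypothetical sketch of (re)setting the proper group and the setgid bit across an existing folder of a project (the project and folder names are illustrative):
# /!\\ ADAPT the project name and folder accordingly
$ chgrp -R myproject /work/projects/myproject/myfolder
$ find /work/projects/myproject/myfolder -type d -exec chmod g+s {} +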
When using rsync to transfer files toward the project directory /work/projects/<name> as destination, be aware that rsync will not use the correct permissions when copying files into your project directory. As indicated in the Data transfer section, you also need to: give new files the destination-default permissions with --no-p ( --no-perms ), and use the default group of the destination dir with --no-g ( --no-group ) (optionally) instruct rsync to preserve whatever executable permissions existed on the source file and aren't masked at the destination using --chmod=ug=rwX Your full rsync command becomes (adapt accordingly): rsync -avz {--update | --delete} --no-p --no-g [--chmod=ug=rwX] <source> /work/projects/<name>/[...] For the same reason detailed above, in case you are using a build command or more generally any command meant to write data in your project directory /work/projects/<name> , you want to use the sg command as follows: # /!\\ ADAPT accordingly sg <name> -c \"<command> [...]\" This is particularly important if you are building dedicated software with Easybuild for members of the project - you typically want to do it as follows: # /!\\ ADAPT accordingly sg <name> -c \"eb [...] -r --rebuild -D\" # Dry-run - enforce using the '<name>' group sg <name> -c \"eb [...] -r --rebuild\" # Actual run - enforce using the '<name>' group Debugging quota issues Sometimes, when copying files with rsync or scp commands and you are not careful with the options of these commands, you copy files with incorrect permissions and ownership. If a directory is copied with the wrong permissions and ownership, all files created within the directory may maintain the incorrect permissions and ownership. Typical issues that you may encounter include: If a directory is copied incorrectly from a project directory to your home directory , the contents of the directory may continue counting towards the group data instead of your personal data and data usage may be misquoted by the df-ulhpc utility. Actual data usage takes into account the file group, not only its location. If a directory is copied incorrectly from a personal directory or another machine to a project directory , you may be unable to create files, since the clusterusers group has no quota inside project directories. Note that the group special permission (g+s) on directories ensures that all files created in the directory will have the group of the directory instead of the group of the process that creates the file. Typical resolution techniques involve resetting the correct file ownership and permissions: Files in project directories chown -R <user>:<name> <directory> find <directory> -type d | xargs -I % chmod g+s '%' Files in user home directories chown -R <user>:clusterusers <directory> find <directory> -type d | xargs -I % chmod g-s '%' Using MobaXterm (Windows) \u00b6 If you are under Windows and you have MobaXterm installed and configured , you probably want to use it to transfer your files to the clusters. Here are the steps to use rsync inside MobaXterm in Windows. Warning Be aware that you SHOULD enable MobaXterm SSH Agent -- see SSH Agent instructions for more details. Using a local bash, transfer your files \u00b6 Open a local \"bash\" shell. Click on Start local terminal on the welcome page of MobaXterm. Find the location of the files you want to transfer. They should be located under /drives/ . You will have to use the Linux command line to move from one directory to the other. The cd command is used to change the current directory and ls to list files.
For example, if your files are under C:\\\\Users\\janedoe\\Downloads\\ you should then go to /drives/c/Users/janedoe/Downloads/ with this command: cd /drives/c/Users/janedoe/Downloads/ Then list the files with the ls command. You should see the list of your data files. When you have retrieved the location of your files, we can begin the transfer with rsync . For example /drives/c/Users/janedoe/Downloads/ (watch out, there is no / character at the end of the path, it is important). Launch the command rsync with these parameters to transfer all the content of the Downloads directory to the /isilon/projects/market_data/ directory on the cluster (the syntax is very important, be careful) rsync -avzpP -e \"ssh -p 8022\" /drives/c/Users/janedoe/Downloads/ yourlogin@access-iris.uni.lu:/isilon/projects/market_data/ You should see the output of the transfer in progress. Wait for it to finish (it can take a long time). Interrupt and resume a transfer in progress \u00b6 If you want to interrupt the transfer to resume it later, press Ctrl-C and exit MobaXterm. To resume a transfer, go to the right location and execute the rsync command again. Only the files that have not been transferred yet will be transferred. Alternative approaches \u00b6 You can also consider alternative approaches to synchronize data with the cluster login node: rely on a versioning system such as Git ; this approach works well for source code trees; mount your remote homedir by SSHFS : on Mac OS X, you should consider installing MacFusion for this purpose, whereas on Linux, just use the command-line sshfs or, mc ; use GUI tools like FileZilla , Cyberduck , or WinSCP (or proprietary options like ExpanDrive or ForkLift 3 ). SSHFS \u00b6 SSHFS (SSH Filesystem) is a file system client that mounts directories located on a remote server onto a local directory over a normal ssh connection. Install the required packages if they are not already available in your system. Linux # Debian-like sudo apt-get install sshfs # RHEL-like sudo yum install sshfs You may need to add yourself to the fuse group. Mac OS X # Assuming HomeBrew -- see https://brew.sh brew install osxfuse sshfs You can also directly install macFUSE from: https://osxfuse.github.io/ . You must reboot for the installation of osxfuse to take effect. You can then update to the latest version. With SSHFS any user can mount their ULHPC home directory onto a local workstation through an ssh connection. The CLI format is as follows: sshfs [user@]host:[dir] mountpoint [options] Proceed as follows ( assuming you have a working SSH connection ): # Create a local directory for the mounting point, e.g. ~/ulhpc mkdir -p ~/ulhpc # Mount the remote file system sshfs iris-cluster: ~/ulhpc -o follow_symlinks,reconnect,dir_cache = no Note that leaving the [dir] argument blank mounts the user's home directory by default. The options ( -o ) used are: follow_symlinks presents symbolic links in the remote file system as regular files in the local file system, useful when the symbolic link points outside the mounted directory; reconnect allows the SSHFS client to automatically reconnect to the server if the connection is interrupted; dir_cache enables or disables the directory cache which holds the names of directory entries (can be slow for mounted remote directories with many files).
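If you only need a specific remote directory rather than your whole home directory, you can mount just that sub-directory. A minimal sketch, assuming the iris-cluster SSH entry from your ~/.ssh/config and a remote ~/experiments directory (both placeholders to adapt):
# NB: 'experiments' and '~/ulhpc-experiments' are placeholders -- adapt them
$> mkdir -p ~/ulhpc-experiments
$> sshfs iris-cluster:experiments ~/ulhpc-experiments -o follow_symlinks,reconnect
$> ls ~/ulhpc-experiments   # the remote files now appear as if they were local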
When you no longer need the mounted remote directory, you must unmount your remote file system: Linux fusermount -u ~/ulhpc Mac OS X diskutil umount ~/ulhpc Transfers between long term storage and the HPC facilities \u00b6 The university provides central data storage services for all employees and students. The data are stored securely on the university campus and are managed by the IT department . The storage servers most commonly used at the university are Atlas (atlas.uni.lux) for staff members, and Poseidon (poseidon.uni.lux) for students. For more details on the university central storage, you can have a look at Usage of Atlas and Poseidon , and Backup of your files on Atlas . Connecting to central data storage services from a personal machine The examples presented here are targeted to the university HPC machines. To connect to the university central data storage with a (Linux) personal machine from outside of the university network, you first need to start a VPN connection. The SMB shares exported for directories in the central data storage are meant to be accessed interactively. Transfer your data manually before and after your jobs are run. You can mount directories from the central data storage in the login nodes, and access the central data storage through the interface of smbclient from both the compute nodes during interactive jobs and the login nodes. Never store your password in plain text Unlike mounting with sshfs , you will always need to enter your password to access a directory in an SMB share. Avoid storing your password in any manner that makes it recoverable as plain text. For instance, do not create job scripts that contain your password in plain text just to move data to Atlas within a job. The following commands target Atlas, but commands for Poseidon are similar. Mounting an SMB share to a login node \u00b6 The UL HPC team provides the smb-storage script to mount SMB shares in login nodes. There exists an SMB share users where all staff members have a directory named after their user name ( name.surname ). To mount your directory in a shell session at a login node execute the command smb-storage mount name.surname and your directory will be mounted to the default mount location: ~/atlas.uni.lux-users-name.surname To mount a project share project_name in a shell session at a login node execute the command smb-storage mount name.surname --project project_name and the share will be mounted in the default mount location: ~/atlas.uni.lux-project_name To unmount any share, simply call the unmount subcommand with the mount point path, for instance smb-storage unmount ~/atlas.uni.lux-users-name.surname or: smb-storage unmount ~/atlas.uni.lux-project_name The smb-storage script provides optional flags to modify the default options: --help prints information about the usage and options of the script; --server specifies the server from which the SMB share is mounted (defaults to --server atlas.uni.lux if not specified, use --server poseidon.uni.lux to mount a share from Poseidon); --project <share> [<location>] mounts the given share and creates a symbolic link to the optionally provided location, or to the project root directory if a location is not provided (defaults to --project users name.surname if not specified); --mountpoint <path> selects the path where the share directory will be available (defaults to ~/<server>-<share>-<directory> if not specified); --debug prints details of the operations performed by the mount script.
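Putting the above together, a typical interactive session on a login node could look as follows (name.surname, project_name and the copied archive file are placeholders used only for illustration):
# Mount a project share, copy some results to it, then unmount before logging out
$> smb-storage mount name.surname --project project_name
$> cp results.tar.gz ~/atlas.uni.lux-project_name/
$> smb-storage unmount ~/atlas.uni.lux-project_name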
Best practices Mounted SMB shares will be available in the login node, and the mount point will appear as a dead symbolic link in compute nodes. This is by design: you can only mount SMB shares in login nodes because SMB shares are meant to be used in interactive sessions. Mounted shares will remain available as long as the login session where the share was mounted remains active. You can mount shares in a tmux session in a login node, and access the share from any other session in the login node. Details of the mounting process There exists an SMB share users where all staff members have a directory named after their user name ( name.surname ). All other projects have an SMB share named after the project name (in lowercase characters). The smb-storage script uses gio mount to mount SMB shares. Shares are mounted in a specially named mount point in /run/user/${UID}/gvfs . Then, smb-storage creates a symbolic link to the requested directory of the share at the path specified in the --mountpoint option. During unmounting, the symbolic links are deleted by the smb-storage script and then the shares mounted in /run/user/${UID}/gvfs are unmounted and their mount points are removed using gio mount --unmount . If a session with mounted SMB shares terminates without unmounting the shares , the shares in /run/user/${UID}/gvfs will be unmounted and their mount points deleted, but the symbolic links created by smb-storage must be removed manually . Accessing SMB shares with smbclient \u00b6 The smbclient program is available in both login and compute nodes. On compute nodes the only way to access SMB shares is through the client program. With the SMB client one can connect to the users share and browse their personal directory with the command: smbclient //atlas.uni.lux/users --directory='name.surname' --user=name.surname@uni.lu Project directories are accessed with the command: smbclient //atlas.uni.lux/project_name --user=name.surname@uni.lu Type help to get a list of all available commands or help (command_name) to get more information for a specific command. Some useful commands are ls to list all the files in a directory, mkdir (directory_name) to create a directory, rm (file_name) to remove a file, rmdir (directory_name) to remove a directory, scopy (source_full_path) (destination_full_path) to copy a file within the SMB shared directory, get (file_name) [destination] to move a file from Atlas to the local machine (placed in the working directory, if the destination is not specified), and put (file_name) [destination] to move a file to Atlas from the local machine (placed in the working directory, if a full path is not specified), mget (file name pattern) [destination] to download multiple files, and mput (file name pattern) [destination] to upload multiple files. The patterns used in mget / mput are either normal file names, or glob expressions (e.g. *.txt ). Connecting to an interactive SMB session means that you will have to maintain a shell session dedicated to SMB. However, it saves you from entering your password for every operation. If you would like to perform a single operation and exit, you can avoid maintaining an interactive session with the --command flag. For instance, smbclient //atlas.uni.lux/users --directory='name.surname' --user=name.surname@uni.lu --command='get \"full path/to/remote file.txt\" \"full path/to/local file.txt\"' copies a file from the SMB directory to the local machine. Notice the use of double quotes to handle file names with spaces.
Similarly, smbclient //atlas.uni.lux/users --directory='name.surname' --user=name.surname@uni.lu --command='put \"full path/to/local file.txt\" \"full path/to/remote file.txt\"' copies a file from the local machine to the SMB directory. Moving whole directories is a bit more involved, as it requires setting some state variables for the session, both for interactive and non-interactive sessions. To download a directory for instance, use smbclient //atlas.uni.lux/users --directory = 'name.surname' --user = name.surname@uni.lu --command = 'recurse ON; prompt OFF; mget \"full path/to/remote directory\" \"full path/to/local directory\"' and to upload a directory use smbclient //atlas.uni.lux/users --directory = 'name.surname' --user = name.surname@uni.lu --command = 'recurse ON; prompt OFF; mput \"full path/to/remote local\" \"full path/to/remote directory\"' respectively. The session option recurse ON enables recursion into directories, and the option prompt OFF disables prompting for confirmation before moving each file. Sources Cheat-sheet for SMB access from linux Special transfers \u00b6 Sometimes you may have the case that a lot of files need to go from point A to B over a Wide Area Network (eg. across the Atlantic). Since packet latency and other factors on the network will naturally slow down the transfers, you need to find workarounds, typically with either rsync or tar.","title":"Data Transfer"},{"location":"data/transfer/#data-transfer-tofromwithin-ul-hpc-clusters","text":"","title":"Data Transfer to/from/within UL HPC Clusters"},{"location":"data/transfer/#introduction","text":"Directories such as $HOME , $WORK or $SCRATCH are shared among the nodes of the cluster that you are using (including the login node) via shared filesystems (SpectrumScale, Lustre) meaning that: every file/directory pushed or created on the login node is available on the computing nodes every file/directory pushed or created on the computing nodes is available on the login node The two most common commands you can use for data transfers over SSH: scp : for the full transfer of files and directories (only works fine for single files or directories of small/trivial size) rsync : a software application which synchronizes files and directories from one location to another while minimizing data transfer as only the outdated or inexistent elements are transferred (practically required for lengthy complex transfers, which are more likely to be interrupted in the middle). scp or rsync? While both ensure a secure transfer of the data within an encrypted tunnel, rsync should be preferred : as mentionned in the from openSSH 8.0 release notes : \" The scp protocol is outdated , inflexible and not readily fixed . We recommend the use of more modern protocols like sftp and rsync for file transfer instead \". scp is also relatively slow when compared to rsync as exhibited for instance in the below sample Distem experience: You will find below notes on scp usage, but kindly prefer to use rsync . Consider scp as deprecated! Click nevertheless to get usage details scp (see scp(1) ) or secure copy is probably the easiest of all the methods. The basic syntax is as follows: scp [-P 8022] [-Cr] source_path destination_path the -P option specifies the SSH port to use (in this case 8022) the -C option activates the compression (actually, it passes the -C flag to ssh(1) to enable compression). the -r option states to recursively copy entire directories (in this case, scp follows symbolic links encountered in the tree traversal). 
Please note that in this case, you must specify the source file as a directory for this to work. The syntax for declaring a remote path is as follows on the cluster: yourlogin@iris-cluster:path/from/homedir Transfer from your local machine to the remote cluster login node For instance, let's assume you have a local directory ~/devel/myproject you want to transfer to the cluster, in your remote homedir. # /!\\ ADAPT yourlogin to... your ULHPC login $> scp -P 8022 -r ~/devel/myproject yourlogin@iris-cluster: This will transfer recursively your local directory ~/devel/myproject on the cluster login node (in your homedir). Note that if you configured (as advised elsewhere) the SSH connection in your ~/.ssh/config file, you can use a much simpler syntax: $> scp -r ~/devel/myproject iris-cluster: Transfer from the remote cluster front-end to your local machine Conversely, let's assume you want to retrieve the files ~/experiments/parallel_run/* $> scp -P 8022 yourlogin@iris-cluster:experiments/parallel_run/* /path/to/local/directory Again, if you configured the SSH connection in your ~/.ssh/config file, you can use a simpler syntax: $> scp iris-cluster:experiments/parallel_run/* /path/to/local/directory See the scp(1) man page or man scp for more details. Danger scp SHOULD NOT be used in the following cases: When you are copying more than a few files, as scp spawns a new process for each file and can be quite slow and resource intensive when copying a large number of files. When using the -r switch, scp does not know about symbolic links and will blindly follow them, even if it has already made a copy of the file. That can lead to scp copying an infinite amount of data and can easily fill up your hard disk (or worse, a system shared disk), so be careful. N.B. There are many alternative ways to transfer files in HPC platforms and you should check your options according to the problem at hand. Windows and OS X users may wish to transfer files from their systems to the clusters' login nodes with easy-to-use GUI applications such as: WinSCP (Windows only) FileZilla Client (Windows, OS X) Cyberduck (Windows, OS X) These applications will need to be configured to connect to the frontends with the same parameters as discussed on the SSH access page .","title":"Introduction"},{"location":"data/transfer/#using-rsync","text":"The clever alternative to scp is rsync , which has the advantage of transferring only the files which differ between the source and the destination. This feature is often referred to as fast incremental file transfer. Additionally, symbolic links can be preserved. The typical syntax of rsync (see rsync(1) ) for the cluster is similar to the one of scp : # /!\\ ADAPT and # From LOCAL directory (/path/to/local/source) toward REMOTE server rsync --rsh = 'ssh -p 8022' -avzu /path/to/local/source [ user@ ] hostname:/path/to/destination # Ex: from REMOTE server to LOCAL directory rsync --rsh = 'ssh -p 8022' -avzu [ user@ ] hostname:/path/to/source /path/to/local/destination the --rsh option specifies the connector to use (here SSH on port 8022) the -a option corresponds to the \"Archive\" mode. Most likely you should always keep this on as it preserves file permissions and does not follow symlinks. the -v option enables the verbose mode the -z option enable compression, this will compress each file as it gets sent over the pipe. This can greatly decrease time, depending on what sort of files you are copying. 
the -u option (or --update ) corresponds to an updating process which skips files that are newer on the receiver. At this level, you may prefer the more dangerous option --delete that deletes extraneous files from dest dirs. Just like scp , the syntax for qualifying a remote path is as follows on the cluster: yourlogin@iris-cluster:path/from/homedir","title":"Using rsync"},{"location":"data/transfer/#transfer-from-your-local-machine-to-the-remote-cluster","text":"Coming back to the previous examples, let's assume you have a local directory ~/devel/myproject you want to transfer to the cluster, in your remote homedir. In that case: # /!\\ ADAPT yourlogin to... your ULHPC login $> rsync --rsh = 'ssh -p 8022' -avzu ~/devel/myproject yourlogin@access-iris.uni.lu: This will synchronize your local directory ~/devel/myproject on the cluster front-end (in your homedir). Transfer to Iris, Aion or both? The above example target the access server of Iris. Actually, you could have targetted the access server of Aion: it doesn't matter since the storage is SHARED between both clusters. Note that if you configured (as advised above) your SSH connection in your ~/.ssh/config file with a dedicated SSH entry {iris,aion}-cluster , you can use a simpler syntax: $> rsync -avzu ~/devel/myproject iris-cluster: # OR (it doesn't matter) $> rsync -avzu ~/devel/myproject aion-cluster:","title":"Transfer from your local machine to the remote cluster"},{"location":"data/transfer/#transfer-from-your-local-machine-to-a-project-directory-on-the-remote-cluster","text":"When transferring data to a project directory you should keep the group and group permissions imposed by the project directory and quota. Therefore you need to add the options --no-p --no-g to your rsync command: $> rsync -avP --no-p --no-g ~/devel/myproject iris-cluster:/work/projects/myproject/","title":"Transfer from your local machine to a project directory on the remote cluster"},{"location":"data/transfer/#transfer-from-the-remote-cluster-to-your-local-machine","text":"Conversely, let's assume you want to synchronize (retrieve) the remote files ~/experiments/parallel_run/* on your local machine: # /!\\ ADAPT yourlogin to... your ULHPC login $> rsync --rsh = 'ssh -p 8022' -avzu yourlogin@access-iris.uni.lu:experiments/parallel_run /path/to/local/directory Again, if you configured the SSH connection in your ~/.ssh/config file, you can use a simpler syntax: $> rsync -avzu iris-cluster:experiments/parallel_run /path/to/local/directory # OR (it doesn't matter) $> rsync -avzu aion-cluster:experiments/parallel_run /path/to/local/directory As always, see the man page or man rsync for more details. Windows Subsystem for Linux (WSL) In WSL, the home directory in Linux virtual machines is not your home directory in Windows. If you want to access the files that you downloaded with rsync inside a Linux virtual machine, please consult the WSL documentation and the file system section in particular.","title":"Transfer from the remote cluster to your local machine"},{"location":"data/transfer/#data-transfer-within-project-directories","text":"The ULHPC facility features a Global Project directory $PROJECTHOME hosted within the GPFS/SpecrumScale file-system. You have to pay a particular attention when using rsync to transfer data within your project directory as depicted below. Access rights to project directory: Quota for clusterusers group in project directories is 0 !!! 
When a project is created, a group of the same name ( ) is also created and researchers allowed to collaborate on the project are made members of this group,which grant them access to the project directory. Be aware that your default group as a user is clusterusers which has ( on purpose ) a quota in project directories set to 0 . You thus need to ensure you always write data in your project directory using the group (instead of yoru default one.). This can be achieved by ensuring the setgid bit is set on all folders in the project directories: chmod g+s [...] When using rsync to transfer file toward the project directory /work/projects/ as destination, be aware that rsync will not use the correct permissions when copying files into your project directory. As indicated in the Data transfer section, you also need to: give new files the destination-default permissions with --no-p ( --no-perms ), and use the default group of the destination dir with --no-g ( --no-group ) (eventually) instruct rsync to preserve whatever executable permissions existed on the source file and aren't masked at the destination using --chmod=ug=rwX Your full rsync command becomes (adapt accordingly): rsync -avz {--update | --delete} --no-p --no-g [--chmod=ug=rwX] /work/projects//[...] For the same reason detailed above, in case you are using a build command or more generally any command meant to write data in your project directory /work/projects/ , you want to use the sg as follows: # /!\\ ADAPT accordingly sg -c \" [...]\" This is particularly important if you are building dedicated software with Easybuild for members of the project - you typically want to do it as follows: # /!\\ ADAPT accordingly sg -c \"eb [...] -r --rebuild -D\" # Dry-run - enforce using the '' group sg -c \"eb [...] -r --rebuild\" # Dry-run - enforce using the '' group Debugging quota issues Sometimes, when copying files with rsync or scp commands and you are not careful with the options of these commands, you copy files with incorrect permissions and ownership. If a directory is copied with the wrong permissions and ownership, all files created within the directory may maintain the incorrect permissions and ownership. Typical issues that you may encounter include: If a directory is copied incorrectly from a project directory to your home directory , the contents of the directory may continue counting towards the group data instead of your personal data and data usage may be misquoted by the df-ulphc utility. Actual data usage takes into account the file group not only its location. If a directory is copied incorrectly from a personal directory or another machine to a project directory , you may be unable to create files, since the clusterusers group has no quota inside project directories. Note the group special permission (g\u00b1s) in directories ensures that all files created in the directory will have the group of the directory instead of the process that creates the file. Typical resolutions techniques involve resetting the correct file ownership and permissions: Files in project directories chown -R : find -type d | xargs -I % chmod g+s '%' Files in user home directories chown -R :clusterusers find -type d | xargs -I % chmod g-s '%'","title":"Data Transfer within Project directories"},{"location":"data/transfer/#using-mobaxterm-windows","text":"If you are under Windows and you have MobaXterm installed and configured , you probably want to use it to transfer your files to the clusters. 
Here are the steps to use rsync inside MobaXterm in Windows. Warning Be aware that you SHOULD enable MobaXterm SSH Agent -- see SSH Agent instructions for more instructions.","title":"Using MobaXterm (Windows)"},{"location":"data/transfer/#using-a-local-bash-transfer-your-files","text":"Open a local \"bash\" shell. Click on Start local terminal on the welcome page of MobaXterm. Find the location of the files you want to transfer. They should be located under /drives/ . You will have to use the Linux command line to move from one directory to the other. The cd command is used to change the current directory and ls to list files. For example, if your files are under C:\\\\Users\\janedoe\\Downloads\\ you should then go to /drives/c/Users/janedoe/Downloads/ with this command: cd /drives/c/Users/janedoe/Downloads/ Then list the files with ls command. You should see the list of your data files. When you have retrieved the location of your files, we can begin the transfer with rsync . For example /drives/c/Users/janedoe/Downloads/ (watch out, there is no / character at the end of the path, it is important). Launch the command rsync with this parameters to transfer all the content of the Downloads directory to the /isilon/projects/market_data/ directory on the cluster (the syntax is very important, be careful) rsync -avzpP -e \"ssh -p 8022\" /drives/c/Users/janedoe/Downloads/ yourlogin@access-iris.uni.lu:/isilon/projects/market_data/ You should see the output of transfer in progress. Wait for it to finish (it can be very long).","title":"Using a local bash, transfer your files"},{"location":"data/transfer/#interrupt-and-resume-a-transfer-in-progress","text":"If you want to interrupt the transfer to resume it later, press Ctrl-C and exit MobaXterm. To resume a transfer, go in the right location and execute the rsync command again. Only the files that have not been transferred will be transferred again.","title":"Interrupt and resume a transfer in progress"},{"location":"data/transfer/#alternative-approaches","text":"You can also consider alternative approaches to synchronize data with the cluster login node: rely on a versioning system such as Git ; this approach works well for source code trees; mount your remote homedir by SSHFS : on Mac OS X, you should consider installing MacFusion for this purpose, where as on Linux, just use the command-line sshfs or, mc ; use GUI tools like FileZilla , Cyberduck , or WindSCP (or proprietary options like ExpanDrive or ForkLift 3 ).","title":"Alternative approaches"},{"location":"data/transfer/#sshfs","text":"SSHFS (SSH Filesystem) is a file system client that mounts directories located on a remote server onto a local directory over a normal ssh connection. Install the requires packages if they are not already available in your system. Linux # Debian-like sudo apt-get install sshfs # RHEL-like sudo yum install sshfs You may need to add yourself to the fuse group. Mac OS X # Assuming HomeBrew -- see https://brew.sh brew install osxfuse sshfs You can also directly install macFUSE from: https://osxfuse.github.io/ . You must reboot for the installation of osxfuse to take effect. You can then update to the latest version. With SSHFS any user can mount their ULHPC home directory onto a local workstation through an ssh connection. The CLI format is as follows: sshfs [user@]host:[dir] mountpoint [options] Proceed as follows ( assuming you have a working SSH connection ): # Create a local directory for the mounting point, e.g. 
~/ulhpc mkdir -p ~/ulhpc # Mount the remote file system sshfs iris-cluster: ~/ulhpc -o follow_symlinks,reconnect,dir_cache = no Note the leaving the [dir] argument blanck, mounts the user's home directory by default. The options ( -o ) used are: follow_symlinks presents symbolic links in the remote files system as regular files in the local file system, useful when the symbolic link points outside the mounted directory; reconnect allows the SSHFS client to automatically reconnect to server if connection is interrupted; dir_cache enables or disables the directory cache which holds the names of directory entries (can be slow for mounted remote directories with many files). When you no longer need the mounted remote directory, you must unmount your remote file system: Linux fusermount -u ~/ulhpc Mac OS X diskutil umount ~/ulhpc","title":"SSHFS"},{"location":"data/transfer/#transfers-between-long-term-storage-and-the-hpc-facilities","text":"The university provides central data storage services for all employees and students. The data are stored securely on the university campus and are managed by the IT department . The storage servers most commonly used at the university are Atlas (atlas.uni.lux) for staff members, and Poseidon (poseidon.uni.lux) for students. For more details on the university central storage, you can have a look at Usage of Atlas and Poseidon , and Backup of your files on Atlas . Connecting to central data storage services from a personal machine The examples presented here are targeted to the university HPC machines. To connect to the university central data storage with a (Linux) personal machine from outside of the university network, you need to start first a VPN connection. The SMB shares exported for directories in the central data storage are meant to be accessed interactively. Transfer your data manually before and after your jobs are run. You can mount directories from the central data storage in the login nodes, and access the central data storage through the interface of smbclient from both the compute nodes during interactive jobs and the login nodes. Never store your password in plain text Unlike mounting with sshfs , you will always need to enter your password to access a directory in an SMB share. Avoid, storing your password in any manner that it makes it recoverable from plain text. For instance, do not create job scripts that contain your password in plain text just to move data to Atlas within a job. The following commands target Atlas, but commands for Poseidon are similar.","title":"Transfers between long term storage and the HPC facilities"},{"location":"data/transfer/#mounting-an-smb-share-to-a-login-node","text":"The UL HPC team provides the smb-storage script to mount SMB shares in login nodes. There exists an SMB share users where all staff member have a directory named after their user name ( name.surname ). 
To mount your directory in an shell session at a login node execute the command smb-storage mount name.surname and your directory will be mounted to the default mount location: ~/atlas.uni.lux-users-name.surname To mount a project share project_name in a shell session at a login node execute the command smb-storage mount name.surname --project project_name and the share will be mounted in the default mount location: ~/atlas.uni.lux-project_name To unmount any share, simply call the unmount subcommand with the mount point path, for instance smb-storage unmount ~/atlas.uni.lux-users-name.surname or: smb-storage unmount ~/atlas.uni.lux-project_name The smb-storage script provides optional flags to modify the default options: --help prints information about the usage and options of he script; --server specifies the server from which the SMB share is mounted (defaults to --server atlas.uni.lux if not specified, use --server poseidon.uni.lux to mount a share from Poseidon); --project [] mounts the share and creates a symbolic link to the optionally provided location , or to the project root directory if a location is not provided (defaults to --project users name.surname if not specified); --mountpoint selects the path where the share directory will be available (defaults to ~/-- if nbot specified); --debug prints details of the operations performed by the mount script. Best practices Mounted SMB shares will be available in the login node, and he mount point will appear as a dead symbolic link in compute nodes. This is be design, you can only mount SMB shares in login nodes because SMB shares are meant to be used in interactive sections. Mounted shares will remain available as long as the login session where the share was mounted remains active. You can mount shares in a tmux session in a login node, and access the share from any other session in the login node. Details of the mounting process There exists an SMB share users where all staff member have a directory named after their user name ( name.surname ). All other projects have an SMB share named after the project name (in lowercase characters). The smb-storage scripts uses gio mount to mount SMB shares. Shares are mounted in a specially named mount point in /run/user/${UID}/gvfs . Then, smb-storage creates a symbolic link to the requested directory in project in the path specified in the --mountpoint option. During unmounting, the symbolic links are deleted by the smb-storage script and then the shares mounted in /run/user/${UID}/gvfs are unmounted and their mount points are removed using gio mount --unmount . If a session with mounted SMB shares terminates without unmounting the shares , the shares in /run/user/${UID}/gvfs will be unmounted and their mount points deleted, but the symbolic links created by smb-storage must be removed manually .","title":"Mounting an SMB share to a login node"},{"location":"data/transfer/#accessing-smb-shares-with-smbclient","text":"The smbclient program is available in both login and compute nodes. In compute nodes the only way to access SMB shares is through the client program. With the SMB client one can connect to the users share and browse their personal directory with the command: smbclient //atlas.uni.lux/users --directory='name.surname' --user=name.surname@uni.lu Project directories are accessed with the command: smbclient //atlas.uni.lux/project_name --user=name.surname@uni.lu Type help to get a list of all available commands or help (command_name) to get more information for a specific command. 
Some useful commands are ls to list all the files in a directory, mkdir (directory_name) to create a directory, rm (file_name) to remove a file, rmdir (directory_name) to remove a directory, scopy (source_full_path) (destination_full_path) to move a file within the SMN shared directory, get (file_name) [destination] to move a file from Atlas to the local machine (placed in the working directory, if the destination is not specified), and put (file_name) [destination] to move a file to Atlas from the local machine (placed in the working directory, if a full path is not specified), mget (file name pattern) [destination] to download multiple files, and mput (file name pattern) [destination] to upload multiple files. The patterns used in mget / mput are either normal file names, or globular expressions (e.g. *.txt ). Connecting into an interactive SMB session means that you will have to maintain a shell session dedicated to SMB. However, it saves you from entering your password for every operation. If you would like to perform a single operation and exit, you can avoid maintaining an interactive session with the --command flag. For instance, smbclient //atlas.uni.lux/users --directory='name.surname' --user=name.surname@uni.lu --command='get \"full path/to/remote file.txt\" \"full path/to/local file.txt\"' copies a file from the SMB directory to the local machine. Notice the use of double quotes to handle file names with spaces. Similarly, smbclient //atlas.uni.lux/users --directory='name.surname' --user=name.surname@uni.lu --command='put \"full path/to/local file.txt\" \"full path/to/remote file.txt\"' copies a file from the local machine to the SMB directory. Moving whole directories is a bit more involved, as it requires setting some state variables for the session, both for interactive and non-interactive sessions. To download a directory for instance, use smbclient //atlas.uni.lux/users --directory = 'name.surname' --user = name.surname@uni.lu --command = 'recurse ON; prompt OFF; mget \"full path/to/remote directory\" \"full path/to/local directory\"' and to upload a directory use smbclient //atlas.uni.lux/users --directory = 'name.surname' --user = name.surname@uni.lu --command = 'recurse ON; prompt OFF; mput \"full path/to/remote local\" \"full path/to/remote directory\"' respectively. The session option recurse ON enables recursion into directories, and the option prompt OFF disables prompting for confirmation before moving each file. Sources Cheat-sheet for SMB access from linux","title":"Accessing SMB shares with smbclient"},{"location":"data/transfer/#special-transfers","text":"Sometimes you may have the case that a lot of files need to go from point A to B over a Wide Area Network (eg. across the Atlantic). Since packet latency and other factors on the network will naturally slow down the transfers, you need to find workarounds, typically with either rsync or tar.","title":"Special transfers"},{"location":"data-center/","text":"ULHPC Data Center - Centre de Calcul (CDC) \u00b6 The ULHPC facilities are hosted within the University's \" Centre de Calcul \" (CDC) data center located in the Belval Campus. Power and Cooling Capacities \u00b6 Established over two floors underground (CDC-S01 and CDC-S02) of ~1000~100m 2 each, the CDC features five server rooms per level (each of them offering ~100m 2 as IT rooms surface). 
While the first level CDC-S01 is hosting administrative IT and research equipment, the second floor ( CDC-S02 ) is primarily targeting the hosting of HPC equipment (compute, storage and interconnect) . A power generation station supplies the HPC floor with up to 3 MW of electrical power, and 3 MW of cold water at a 12-18\u00b0C regime used for traditional Airflow with In-Row cooling. A separate hot water circuit (between 30 and 40\u00b0C) allows the implementation of Direct Liquid Cooling (DLC) solutions, as for the Aion supercomputer, in two dedicated server rooms. Location Cooling Usage Max Capa. CDC S-02-001 Airflow Future extension 280 kW CDC S-02-002 Airflow Future extension 280 kW CDC S-02-003 DLC Future extension - High Density/Energy efficient HPC 1050 kW CDC S-02-004 DLC High Density/Energy efficient HPC : aion 1050 kW CDC S-02-005 Airflow Storage / Traditional HPC : iris and common equipment 300 kW Data-Center Cooling technologies \u00b6 Airflow with In-Row cooling \u00b6 Most server rooms are designed for traditional airflow-based cooling and implement hot or cold aisle containment , as well as In-row cooling systems that work within a row of standard server racks, engineered to take up the smallest footprint and offer high-density cooling. Ducting and baffles ensure that the cooling air gets where it needs to go. Iris compute, storage and interconnect equipment are hosted in such a configuration. [Direct] Liquid Cooling \u00b6 Traditional solutions implemented in most data centers use air as a medium to remove the heat from the servers and computing equipment and are not well suited to cutting-edge high-density HPC environments due to the limited thermal capacity of air. The thermal conductivity of liquids is higher than that of air, so a liquid can absorb (through conduction) more heat than air. The replacement of air with a liquid cooling medium makes it possible to drastically improve the energy efficiency as well as the density of the implemented solution, especially with Direct Liquid Cooling (DLC) where the heat from the IT components is directly transferred to a liquid cooling medium through liquid-cooled plates. The Aion supercomputer based on the fan-less Atos BullSequana XH2000 DLC cell design relies on this water-cooled configuration.","title":"Centre de Calcul (CDC)"},{"location":"data-center/#ulhpc-data-center-centre-de-calcul-cdc","text":"The ULHPC facilities are hosted within the University's \" Centre de Calcul \" (CDC) data center located in the Belval Campus.","title":"ULHPC Data Center - Centre de Calcul (CDC)"},{"location":"data-center/#power-and-cooling-capacities","text":"Established over two floors underground (CDC-S01 and CDC-S02) of ~1000~100m 2 each, the CDC features five server rooms per level (each of them offering ~100m 2 as IT rooms surface).
CDC S-02-001 Airflow Future extension 280 kW CDC S-02-002 Airflow Future extension 280 kW CDC S-02-003 DLC Future extension - High Density/Energy efficient HPC 1050 kW CDC S-02-004 DLC High Density/Energy efficient HPC : aion 1050 kW CDC S-02-005 Airflow Storage / Traditional HPC : iris and common equipment 300 kW","title":"Power and Cooling Capacities"},{"location":"data-center/#data-center-cooling-technologies","text":"","title":"Data-Center Cooling technologies"},{"location":"data-center/#airflow-with-in-row-cooling","text":"Most server rooms are designed for traditional airflow-based cooling and implement hot or cold aisle containment , as well as In-row cooling systems work within a row of standard server rack engineered to take up the smallest footprint and offer high-density cooling. Ducting and baffles ensure that the cooling air gets where it needs to go. Iris compute, storage and interconnect equipment are hosted in such a configuration","title":"Airflow with In-Row cooling"},{"location":"data-center/#direct-liquid-cooling","text":"Traditional solutions implemented in most data centers use air as a medium to remove the heat from the servers and computing equipment and are not well suited to cutting-edge high-density HPC environments due to the limited thermal capacity of air. Liquids\u2019 thermal conductivity is higher than the air, thus concludes the liquid can absorb (through conductivity) more heat than the air. The replacement of air with a liquid cooling medium allows to drastically improve the energy-efficiency as well as the density of the implemented solution, especially with Direct Liquid Cooling (DLC) where the heat from the IT components is directly transferred to a liquid cooling medium through liquid-cooled plates. The Aion supercomputer based on the fan-less Atos BullSequana XH2000 DLC cell design relies on this water-cooled configuration.","title":"[Direct] Liquid Cooling"},{"location":"development/build-tools/easybuild/","text":"Building [custom] software with EasyBuild on the UL HPC platform \u00b6 EasyBuild can be used to ease, automate and script the build of software on the UL HPC platforms. Indeed, as researchers involved in many cutting-edge and hot topics, you probably have access to many theoretical resources to understand the surrounding concepts. Yet it should normally give you a wish to test the corresponding software. Traditionally, this part is rather time-consuming and frustrating, especially when the developers did not rely on a \"regular\" building framework such as CMake or the autotools ( i.e. with build instructions as configure --prefix && make && make install ). And when it comes to have a build adapted to an HPC system, you are somehow forced to make a custom build performed on the target machine to ensure you will get the best possible performance. EasyBuild is one approach to facilitate this step. EasyBuild is a tool that allows to perform automated and reproducible compilation and installation of software. A large number of scientific software are supported ( 1504 supported software packages in the last release 3.6.1) -- see also What is EasyBuild? All builds and installations are performed at user level, so you don't need the admin (i.e. root ) rights. The software are installed in your home directory (by default in $HOME/.local/easybuild/software/ ) and a module file is generated (by default in $HOME/.local/easybuild/modules/ ) to use the software. EasyBuild relies on two main concepts: Toolchains and EasyConfig files . 
A toolchain corresponds to a compiler and a set of libraries which are commonly used to build a software. The two main toolchains frequently used on the UL HPC platform are the foss (\" Free and Open Source Software \") and the intel one. foss is based on the GCC compiler and on open-source libraries (OpenMPI, OpenBLAS, etc.). intel is based on the Intel compiler and on Intel libraries (Intel MPI, Intel Math Kernel Library, etc.). An EasyConfig file is a simple text file that describes the build process of a software. For most software that uses standard procedures (like configure , make and make install ), this file is very simple. Many EasyConfig files are already provided with EasyBuild. By default, EasyConfig files and generated modules are named using the following convention: --- . However, we use a hierarchical approach where the software are classified under a category (or class) -- see the CategorizedModuleNamingScheme option for the EASYBUILD_MODULE_NAMING_SCHEME environmental variable), meaning that the layout will respect the following hierarchy: //-- Additional details are available on EasyBuild website: EasyBuild homepage EasyBuild tutorial EasyBuild documentation What is EasyBuild? Toolchains EasyConfig files List of supported software packages a. Installation \u00b6 the official instructions . What is important for the installation of EasyBuild are the following variables: EASYBUILD_PREFIX : where to install local modules and software, i.e. $HOME/.local/easybuild EASYBUILD_MODULES_TOOL : the type of modules tool you are using, i.e. LMod in this case EASYBUILD_MODULE_NAMING_SCHEME : the way the software and modules should be organized (flat view or hierarchical) -- we're advising on CategorizedModuleNamingScheme Add the following entries to your ~/.bashrc (use your favorite CLI editor like nano or vim ): # Easybuild export EASYBUILD_PREFIX = $HOME /.local/easybuild export EASYBUILD_MODULES_TOOL = Lmod export EASYBUILD_MODULE_NAMING_SCHEME = CategorizedModuleNamingScheme # Use the below variable to run: # module use $LOCAL_MODULES # module load tools/EasyBuild export LOCAL_MODULES = ${ EASYBUILD_PREFIX } /modules/all alias ma = \"module avail\" alias ml = \"module list\" function mu (){ module use $LOCAL_MODULES module load tools/EasyBuild } Then source this file to expose the environment variables: $> source ~/.bashrc $> echo $EASYBUILD_PREFIX /home/users//.local/easybuild Now let's install EasyBuild following the official procedure . Install EasyBuild in a temporary directory and use this temporary installation to build an EasyBuild module in your $EASYBUILD_PREFIX : # pick installation prefix, and install EasyBuild into it export EB_TMPDIR = /tmp/ $USER /eb_tmp python3 -m pip install --ignore-installed --prefix $EB_TMPDIR easybuild # update environment to use this temporary EasyBuild installation export PATH = $EB_TMPDIR /bin: $PATH export PYTHONPATH = $( /bin/ls -rtd -1 $EB_TMPDIR /lib*/python*/site-packages | tail -1 ) : $PYTHONPATH export EB_PYTHON = python3 # install Easybuild in your $EASYBUILD_PREFIX eb --install-latest-eb-release --prefix $EASYBUILD_PREFIX Now you can use your freshly built software. 
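As a quick sanity check, you can peek at the newly populated prefix; a sketch (the exact content may differ on your account):
$> ls $EASYBUILD_PREFIX    # typically now contains modules/ and software/ sub-directories
$> ls $LOCAL_MODULES       # the module tree that 'module use $LOCAL_MODULES' will expose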
The main EasyBuild command is eb : $> eb --version # expected ;) -bash: eb: command not found # Load the newly installed Easybuild $> echo $MODULEPATH /opt/apps/resif/data/stable/default/modules/all/ $> module use $LOCAL_MODULES $> echo $MODULEPATH /home/users//.local/easybuild/modules/all:/opt/apps/resif/data/stable/default/modules/all $> module spider Easybuild $> module load tools/EasyBuild # TAB is your friend... $> eb --version This is EasyBuild 3 .6.1 ( framework: 3 .6.1, easyblocks: 3 .6.1 ) on host iris-001. Since you are going to use quite often the above command to use locally built modules and load easybuild, an alias mu is provided and can be used from now on. Use it now . $> mu $> module avail # OR 'ma' To get help on the EasyBuild options, use the -h or -H option flags: $> eb -h $> eb -H b. Local vs. global usage \u00b6 As you probably guessed, we are going to use two places for the installed software: local builds ~/.local/easybuild (see $LOCAL_MODULES ) global builds (provided to you by the UL HPC team) in /opt/apps/resif/data/stable/default/modules/all (see default $MODULEPATH ). Default usage (with the eb command) would install your software and modules in ~/.local/easybuild . Before that, let's explore the basic usage of EasyBuild and the eb command. # Search for an Easybuild recipy with 'eb -S ' $> eb -S Spark CFGS1 = /opt/apps/resif/data/easyconfigs/ulhpc/default/easybuild/easyconfigs/s/Spark CFGS2 = /home/users//.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/s/Spark * $CFGS1 /Spark-2.1.1.eb * $CFGS1 /Spark-2.3.0-intel-2018a-Hadoop-2.7-Java-1.8.0_162-Python-3.6.4.eb * $CFGS2 /Spark-1.3.0.eb * $CFGS2 /Spark-1.4.1.eb * $CFGS2 /Spark-1.5.0.eb * $CFGS2 /Spark-1.6.0.eb * $CFGS2 /Spark-1.6.1.eb * $CFGS2 /Spark-2.0.0.eb * $CFGS2 /Spark-2.0.2.eb * $CFGS2 /Spark-2.2.0-Hadoop-2.6-Java-1.8.0_144.eb * $CFGS2 /Spark-2.2.0-Hadoop-2.6-Java-1.8.0_152.eb * $CFGS2 /Spark-2.2.0-intel-2017b-Hadoop-2.6-Java-1.8.0_152-Python-3.6.3.eb c. Build software using provided EasyConfig file \u00b6 In this part, we propose to build High Performance Linpack (HPL) using EasyBuild. HPL is supported by EasyBuild, this means that an EasyConfig file allowing to build HPL is already provided with EasyBuild. First of all, let's check if that software is not available by default: $> module spider HPL Lmod has detected the following error: Unable to find: \"HPL\" Then, search for available EasyConfig files with HPL in their name. The EasyConfig files are named with the .eb extension. 
# Search for an Easybuild recipy with 'eb -S ' $> eb -S HPL-2.2 CFGS1 = /home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/h/HPL * $CFGS1 /HPL-2.2-foss-2016.07.eb * $CFGS1 /HPL-2.2-foss-2016.09.eb * $CFGS1 /HPL-2.2-foss-2017a.eb * $CFGS1 /HPL-2.2-foss-2017b.eb * $CFGS1 /HPL-2.2-foss-2018a.eb * $CFGS1 /HPL-2.2-fosscuda-2018a.eb * $CFGS1 /HPL-2.2-giolf-2017b.eb * $CFGS1 /HPL-2.2-giolf-2018a.eb * $CFGS1 /HPL-2.2-giolfc-2017b.eb * $CFGS1 /HPL-2.2-gmpolf-2017.10.eb * $CFGS1 /HPL-2.2-goolfc-2016.08.eb * $CFGS1 /HPL-2.2-goolfc-2016.10.eb * $CFGS1 /HPL-2.2-intel-2017.00.eb * $CFGS1 /HPL-2.2-intel-2017.01.eb * $CFGS1 /HPL-2.2-intel-2017.02.eb * $CFGS1 /HPL-2.2-intel-2017.09.eb * $CFGS1 /HPL-2.2-intel-2017a.eb * $CFGS1 /HPL-2.2-intel-2017b.eb * $CFGS1 /HPL-2.2-intel-2018.00.eb * $CFGS1 /HPL-2.2-intel-2018.01.eb * $CFGS1 /HPL-2.2-intel-2018.02.eb * $CFGS1 /HPL-2.2-intel-2018a.eb * $CFGS1 /HPL-2.2-intelcuda-2016.10.eb * $CFGS1 /HPL-2.2-iomkl-2016.09-GCC-4.9.3-2.25.eb * $CFGS1 /HPL-2.2-iomkl-2016.09-GCC-5.4.0-2.26.eb * $CFGS1 /HPL-2.2-iomkl-2017.01.eb * $CFGS1 /HPL-2.2-intel-2017.02.eb * $CFGS1 /HPL-2.2-intel-2017.09.eb * $CFGS1 /HPL-2.2-intel-2017a.eb * $CFGS1 /HPL-2.2-intel-2017b.eb * $CFGS1 /HPL-2.2-intel-2018.00.eb * $CFGS1 /HPL-2.2-intel-2018.01.eb * $CFGS1 /HPL-2.2-intel-2018.02.eb * $CFGS1 /HPL-2.2-intel-2018a.eb * $CFGS1 /HPL-2.2-intelcuda-2016.10.eb * $CFGS1 /HPL-2.2-iomkl-2016.09-GCC-4.9.3-2.25.eb * $CFGS1 /HPL-2.2-iomkl-2016.09-GCC-5.4.0-2.26.eb * $CFGS1 /HPL-2.2-iomkl-2017.01.eb * $CFGS1 /HPL-2.2-iomkl-2017a.eb * $CFGS1 /HPL-2.2-iomkl-2017b.eb * $CFGS1 /HPL-2.2-iomkl-2018.02.eb * $CFGS1 /HPL-2.2-iomkl-2018a.eb * $CFGS1 /HPL-2.2-pomkl-2016.09.eb We are going to build HPL 2.2 against the intel toolchain, typically the 2017a version which is available by default on the platform. Pick the corresponding recipy (for instance HPL-2.2-intel-2017a.eb ), install it with eb .eb [-D] -r -D enables the dry-run mode to check what's going to be install -- ALWAYS try it first -r enables the robot mode to automatically install all dependencies while searching for easyconfigs in a set of pre-defined directories -- you can also prepend new directories to search for eb files (like the current directory $PWD ) using the option and syntax --robot-paths=$PWD: (do not forget the ':'). See Controlling the robot search path documentation The $CFGS/ prefix should be dropped unless you know what you're doing (and thus have previously defined the variable -- see the first output of the eb -S [...] command). 
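For instance, if you keep a modified copy of an easyconfig in your current working directory, a hypothetical invocation combining the dry-run, robot and robot-paths flags could look as follows (note the trailing ':' which preserves the default search path):
# Dry-run, searching the current directory first for *.eb files
$> eb HPL-2.2-intel-2017a.eb -Dr --robot-paths=$PWD: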
So let's install HPL version 2.2 and FIRST check which dependencies are satisfied with -Dr : $> eb HPL-2.2-intel-2017a.eb -Dr == temporary log file in case of crash /tmp/eb-CTC2hq/easybuild-gfLf1W.log Dry run: printing build status of easyconfigs and dependencies CFGS = /home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs * [ x ] $CFGS /m/M4/M4-1.4.17.eb ( module: devel/M4/1.4.17 ) * [ x ] $CFGS /b/Bison/Bison-3.0.4.eb ( module: lang/Bison/3.0.4 ) * [ x ] $CFGS /f/flex/flex-2.6.0.eb ( module: lang/flex/2.6.0 ) * [ x ] $CFGS /z/zlib/zlib-1.2.8.eb ( module: lib/zlib/1.2.8 ) * [ x ] $CFGS /b/binutils/binutils-2.27.eb ( module: tools/binutils/2.27 ) * [ x ] $CFGS /g/GCCcore/GCCcore-6.3.0.eb ( module: compiler/GCCcore/6.3.0 ) * [ x ] $CFGS /m/M4/M4-1.4.18-GCCcore-6.3.0.eb ( module: devel/M4/1.4.18-GCCcore-6.3.0 ) * [ x ] $CFGS /z/zlib/zlib-1.2.11-GCCcore-6.3.0.eb ( module: lib/zlib/1.2.11-GCCcore-6.3.0 ) * [ x ] $CFGS /h/help2man/help2man-1.47.4-GCCcore-6.3.0.eb ( module: tools/help2man/1.47.4-GCCcore-6.3.0 ) * [ x ] $CFGS /b/Bison/Bison-3.0.4-GCCcore-6.3.0.eb ( module: lang/Bison/3.0.4-GCCcore-6.3.0 ) * [ x ] $CFGS /f/flex/flex-2.6.3-GCCcore-6.3.0.eb ( module: lang/flex/2.6.3-GCCcore-6.3.0 ) * [ x ] $CFGS /b/binutils/binutils-2.27-GCCcore-6.3.0.eb ( module: tools/binutils/2.27-GCCcore-6.3.0 ) * [ x ] $CFGS /i/icc/icc-2017.1.132-GCC-6.3.0-2.27.eb ( module: compiler/icc/2017.1.132-GCC-6.3.0-2.27 ) * [ x ] $CFGS /i/ifort/ifort-2017.1.132-GCC-6.3.0-2.27.eb ( module: compiler/ifort/2017.1.132-GCC-6.3.0-2.27 ) * [ x ] $CFGS /i/iccifort/iccifort-2017.1.132-GCC-6.3.0-2.27.eb ( module: toolchain/iccifort/2017.1.132-GCC-6.3.0-2.27 ) * [ x ] $CFGS /i/impi/impi-2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27.eb ( module: mpi/impi/2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27 ) * [ x ] $CFGS /i/iimpi/iimpi-2017a.eb ( module: toolchain/iimpi/2017a ) * [ x ] $CFGS /i/imkl/imkl-2017.1.132-iimpi-2017a.eb ( module: numlib/imkl/2017.1.132-iimpi-2017a ) * [ x ] $CFGS /i/intel/intel-2017a.eb ( module: toolchain/intel/2017a ) * [ ] $CFGS /h/HPL/HPL-2.2-intel-2017a.eb ( module: tools/HPL/2.2-intel-2017a ) == Temporary log file ( s ) /tmp/eb-CTC2hq/easybuild-gfLf1W.log* have been removed. == Temporary directory /tmp/eb-CTC2hq has been removed. As can be seen, there is a single element to install and this has not been done so far (box not checked). All the dependencies are already present (box checked). Let's really install the selected software -- you may want to prefix the eb command with the time to collect the installation time: $> time eb HPL-2.2-intel-2017a.eb -r # Remove the '-D' (dry-run) flags == temporary log file in case of crash /tmp/eb-nub_oL/easybuild-J8sNzx.log == resolving dependencies ... == processing EasyBuild easyconfig /home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/h/HPL/HPL-2.2-intel-2017a.eb == building and installing tools/HPL/2.2-intel-2017a... == fetching files... == creating build dir, resetting environment... == unpacking... == patching... == preparing... == configuring... == building... == testing... == installing... == taking care of extensions... == postprocessing... == sanity checking... == cleaning up... == creating module... == permissions... == packaging... 
== COMPLETED: Installation ended successfully == Results of the build can be found in the log file ( s ) /home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a/easybuild/easybuild-HPL-2.2-20180608.094831.log == Build succeeded for 1 out of 1 == Temporary log file ( s ) /tmp/eb-nub_oL/easybuild-J8sNzx.log* have been removed. == Temporary directory /tmp/eb-nub_oL has been removed. real 0m56.472s user 0m15.268s sys 0m19.998s Check the installed software: $> module av HPL ------------------------- /home/users//.local/easybuild/modules/all ------------------------- tools/HPL/2.2-intel-2017a Use \"module spider\" to find all possible modules. Use \"module keyword key1 key2 ...\" to search for all possible modules matching any of the \"keys\". $> module spider HPL ---------------------------------------------------------------------------------------------------- tools/HPL: tools/HPL/2.2-intel-2017a ---------------------------------------------------------------------------------------------------- Description: HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark. This module can be loaded directly: module load tools/HPL/2.2-intel-2017a Help: Description =========== HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark. More information ================ - Homepage: http://www.netlib.org/benchmark/hpl/ $> module show tools/HPL --------------------------------------------------------------------------------------------------- /home/users/svarrette/.local/easybuild/modules/all/tools/HPL/2.2-intel-2017a.lua: --------------------------------------------------------------------------------------------------- help([[ Description =========== HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark. More information ================ - Homepage: http://www.netlib.org/benchmark/hpl/ ]]) whatis(\"Description: HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. 
It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark.\") whatis(\"Homepage: http://www.netlib.org/benchmark/hpl/\") conflict(\"tools/HPL\") load(\"toolchain/intel/2017a\") prepend_path(\"PATH\",\"/home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a/bin\") setenv(\"EBROOTHPL\",\"/home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a\") setenv(\"EBVERSIONHPL\",\"2.2\") setenv(\"EBDEVELHPL\",\"/home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a/easybuild/tools-HPL-2.2-intel-2017a-easybuild-devel\") Note : to see the (locally) installed software, the MODULEPATH variable should include the $HOME/.local/easybuild/modules/all/ (of $LOCAL_MODULES ) path (which is what happens when using module use -- see the mu command) You can now load the freshly installed module like any other: $> module load tools/HPL $> module list Currently Loaded Modules: 1 ) tools/EasyBuild/3.6.1 7 ) mpi/impi/2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27 2 ) compiler/GCCcore/6.3.0 8 ) toolchain/iimpi/2017a 3 ) tools/binutils/2.27-GCCcore-6.3.0 9 ) numlib/imkl/2017.1.132-iimpi-2017a 4 ) compiler/icc/2017.1.132-GCC-6.3.0-2.27 10 ) toolchain/intel/2017a 5 ) compiler/ifort/2017.1.132-GCC-6.3.0-2.27 11 ) tools/HPL/2.2-intel-2017a 6 ) toolchain/iccifort/2017.1.132-GCC-6.3.0-2.27 Tips : When you load a module generated by Easybuild, it is installed within the directory reported by the $EBROOT variable. In the above case, you will find the generated binary for HPL in ${EBROOTHPL}/bin/xhpl . You may want to test the newly built HPL benchmark (you need to reserve at least 4 cores for that to succeed): # In another terminal, connect to the cluster frontend # Have an interactive job ############### iris cluster (slurm) ############### ( access-iris ) $> si -n 4 # this time reserve for 4 (mpi) tasks $> mu $> module load tools/HPL $> cd $EBROOTHPL $> ls $> cd bin $> ls $> srun -n $SLURM_NTASKS ./xhpl Running HPL benchmarks requires more attention -- a full tutorial is dedicated to it. Yet you can see that we obtained HPL 2.2 without writing any EasyConfig file. d. Build software using a customized EasyConfig file \u00b6 There are multiple ways to amend an EasyConfig file. Check the --try-* option flags for all the possibilities. Generally you want to do that when the up-to-date version of the software you want is not available as a recipy within Easybuild. For instance, a very popular building environment CMake has recently released a new version (3.11.3), which you want to give a try. It is not available as module, so let's build it. First let's check for available easyconfigs recipy if one exist for the expected version: $> eb -S Cmake-3 [...] * $CFGS2/CMake-3.9.1.eb * $CFGS2/CMake-3.9.4-GCCcore-6.4.0.eb * $CFGS2/CMake-3.9.5-GCCcore-6.4.0.eb We are going to reuse one of the latest EasyConfig available, for instance lets copy $CFGS2/CMake-3.9.1.eb # Work in a dedicated directory $> mkdir -p ~/software/CMake $> cd ~/software/CMake $> eb -S Cmake-3 | less # collect the definition of the CFGS2 variable $> CFGS2 = /home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/c/CMake $> cp $CFGS2 /CMake-3.9.1.eb . 
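# Side note (a sketch, not part of the original walkthrough): the --try-* flags mentioned above can often
# derive such a tweaked recipe on the fly, e.g. '$> eb CMake-3.9.1.eb --try-software-version=3.11.3 -Dr';
# the checksum may still have to be updated by hand, so the explicit copy/edit below remains the safest route.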
$> mv CMake-3.9.1.eb CMake-3.11.3.eb # Adapt the version suffix to the latest release You need to perform the following changes (here: version upgrade, and adapted checksum) --- CMake-3.9.1.eb 2018-06-08 10:56:24.447699000 +0200 +++ CMake-3.11.3.eb 2018-06-08 11:07:39.716672000 +0200 @@ -1,7 +1,7 @@ easyblock = 'ConfigureMake' name = 'CMake' -version = '3.9.1' +version = '3.11.3' homepage = 'http://www.cmake.org' description = \"\"\"CMake, the cross-platform, open-source build system. @@ -11,7 +11,7 @@ source_urls = ['http://www.cmake.org/files/v%(version_major_minor)s'] sources = [SOURCELOWER_TAR_GZ] -checksums = ['d768ee83d217f91bb597b3ca2ac663da7a8603c97e1f1a5184bc01e0ad2b12bb'] +checksums = ['287135b6beb7ffc1ccd02707271080bbf14c21d80c067ae2c0040e5f3508c39a'] configopts = '-- -DCMAKE_USE_OPENSSL=1' If the checksum is not provided on the official software page , you will need to compute it yourself by downloading the sources and collecting the checksum: $> gsha256sum ~/Download/cmake-3.11.3.tar.gz 287135b6beb7ffc1ccd02707271080bbf14c21d80c067ae2c0040e5f3508c39a cmake-3.11.3.tar.gz Let's build it: $> eb ./CMake-3.11.3.eb -Dr == temporary log file in case of crash /tmp/eb-UX7APP/easybuild-gxnyIv.log Dry run: printing build status of easyconfigs and dependencies CFGS = /mnt/irisgpfs/users//software/CMake * [ ] $CFGS /CMake-3.11.3.eb ( module: devel/CMake/3.11.3 ) == Temporary log file ( s ) /tmp/eb-UX7APP/easybuild-gxnyIv.log* have been removed. == Temporary directory /tmp/eb-UX7APP has been removed. Dependencies are fine, so let's build it: $> time eb ./CMake-3.11.3.eb -r == temporary log file in case of crash /tmp/eb-JjF92B/easybuild-RjzRjb.log == resolving dependencies ... == processing EasyBuild easyconfig /mnt/irisgpfs/users//software/CMake/CMake-3.11.3.eb == building and installing devel/CMake/3.11.3... == fetching files... == creating build dir, resetting environment... == unpacking... == patching... == preparing... == configuring... == building... == testing... == installing... == taking care of extensions... == postprocessing... == sanity checking... == cleaning up... == creating module... == permissions... == packaging... == COMPLETED: Installation ended successfully == Results of the build can be found in the log file ( s ) /home/users//.local/easybuild/software/devel/CMake/3.11.3/easybuild/easybuild-CMake-3.11.3-20180608.111611.log == Build succeeded for 1 out of 1 == Temporary log file ( s ) /tmp/eb-JjF92B/easybuild-RjzRjb.log* have been removed. == Temporary directory /tmp/eb-JjF92B has been removed. real 7m40.358s user 5m56.442s sys 1m15.185s Note: you can follow the progress of the installation from a separate shell on the node. Check the result: $> module av CMake That's all ;-) Final remarks This workflow (copying an existing recipe, adapting the filename, the version and the source checksum) covers most of the test cases. Yet sometimes you need to work on a more complex dependency chain, in which case you'll need to adapt many eb files. For each such build, you then need to instruct EasyBuild to also search for easyconfigs in the current directory: $> eb <name>.eb --robot = $PWD : $EASYBUILD_ROBOT -D $> eb <name>.eb --robot = $PWD : $EASYBUILD_ROBOT (OLD) Build software using your own EasyConfig file \u00b6 Below are obsolete instructions to write a full EasyConfig file, left for archiving and informational purposes. For this example, we create an EasyConfig file to build GZip 1.4 with the GOOLF toolchain.
Open your favorite editor and create a file named gzip-1.4-goolf-1.4.10.eb with the following content: easyblock = 'ConfigureMake' name = 'gzip' version = '1.4' homepage = 'http://www.gnu.org/software/gzip/' description = \"gzip (GNU zip) is a popular data compression program as a replacement for compress\" # use the GOOLF toolchain toolchain = {'name': 'goolf', 'version': '1.4.10'} # specify that GCC compiler should be used to build gzip preconfigopts = \"CC='gcc'\" # source tarball filename sources = ['%s-%s.tar.gz'%(name,version)] # download location for source files source_urls = ['http://ftpmirror.gnu.org/gzip'] # make sure the gzip and gunzip binaries are available after installation sanity_check_paths = { 'files': [\"bin/gunzip\", \"bin/gzip\"], 'dirs': [] } # run 'gzip -h' and 'gzip --version' after installation sanity_check_commands = [True, ('gzip', '--version')] This is a simple EasyConfig. Most of the fields are self-descriptive. No build method is explicitely defined, so it uses by default the standard configure/make/make install approach. Let's build GZip with this EasyConfig file: $> time eb gzip-1.4-goolf-1.4.10.eb == temporary log file in case of crash /tmp/eb-hiyyN1/easybuild-ynLsHC.log == processing EasyBuild easyconfig /mnt/nfs/users/homedirs/mschmitt/gzip-1.4-goolf-1.4.10.eb == building and installing base/gzip/1.4-goolf-1.4.10... == fetching files... == creating build dir, resetting environment... == unpacking... == patching... == preparing... == configuring... == building... == testing... == installing... == taking care of extensions... == packaging... == postprocessing... == sanity checking... == cleaning up... == creating module... == COMPLETED: Installation ended successfully == Results of the build can be found in the log file /home/users/mschmitt/.local/easybuild/software/base/gzip/1.4-goolf-1.4.10/easybuild/easybuild-gzip-1.4-20150624.114745.log == Build succeeded for 1 out of 1 == temporary log file(s) /tmp/eb-hiyyN1/easybuild-ynLsHC.log* have been removed. == temporary directory /tmp/eb-hiyyN1 has been removed. real 1m39.982s user 0m52.743s sys 0m11.297s We can now check that our version of GZip is available via the modules: $> module avail gzip --------- /mnt/nfs/users/homedirs/mschmitt/.local/easybuild/modules/all --------- base/gzip/1.4-goolf-1.4.10 To go further into details \u00b6 Please refer to the following pointers to get additionnal features: EasyBuild homepage EasyBuild tutorial EasyBuild documentation Getting started Using EasyBuild Step-by-step guide","title":"Building [custom] software with EasyBuild on the UL HPC platform"},{"location":"development/build-tools/easybuild/#building-custom-software-with-easybuild-on-the-ul-hpc-platform","text":"EasyBuild can be used to ease, automate and script the build of software on the UL HPC platforms. Indeed, as researchers involved in many cutting-edge and hot topics, you probably have access to many theoretical resources to understand the surrounding concepts. Yet it should normally give you a wish to test the corresponding software. Traditionally, this part is rather time-consuming and frustrating, especially when the developers did not rely on a \"regular\" building framework such as CMake or the autotools ( i.e. with build instructions as configure --prefix && make && make install ). And when it comes to have a build adapted to an HPC system, you are somehow forced to make a custom build performed on the target machine to ensure you will get the best possible performance. 
EasyBuild is one approach to facilitate this step. EasyBuild is a tool that allows to perform automated and reproducible compilation and installation of software. A large number of scientific software are supported ( 1504 supported software packages in the last release 3.6.1) -- see also What is EasyBuild? All builds and installations are performed at user level, so you don't need the admin (i.e. root ) rights. The software are installed in your home directory (by default in $HOME/.local/easybuild/software/ ) and a module file is generated (by default in $HOME/.local/easybuild/modules/ ) to use the software. EasyBuild relies on two main concepts: Toolchains and EasyConfig files . A toolchain corresponds to a compiler and a set of libraries which are commonly used to build a software. The two main toolchains frequently used on the UL HPC platform are the foss (\" Free and Open Source Software \") and the intel one. foss is based on the GCC compiler and on open-source libraries (OpenMPI, OpenBLAS, etc.). intel is based on the Intel compiler and on Intel libraries (Intel MPI, Intel Math Kernel Library, etc.). An EasyConfig file is a simple text file that describes the build process of a software. For most software that uses standard procedures (like configure , make and make install ), this file is very simple. Many EasyConfig files are already provided with EasyBuild. By default, EasyConfig files and generated modules are named using the following convention: --- . However, we use a hierarchical approach where the software are classified under a category (or class) -- see the CategorizedModuleNamingScheme option for the EASYBUILD_MODULE_NAMING_SCHEME environmental variable), meaning that the layout will respect the following hierarchy: //-- Additional details are available on EasyBuild website: EasyBuild homepage EasyBuild tutorial EasyBuild documentation What is EasyBuild? Toolchains EasyConfig files List of supported software packages","title":"Building [custom] software with EasyBuild on the UL HPC platform"},{"location":"development/build-tools/easybuild/#a-installation","text":"the official instructions . What is important for the installation of EasyBuild are the following variables: EASYBUILD_PREFIX : where to install local modules and software, i.e. $HOME/.local/easybuild EASYBUILD_MODULES_TOOL : the type of modules tool you are using, i.e. LMod in this case EASYBUILD_MODULE_NAMING_SCHEME : the way the software and modules should be organized (flat view or hierarchical) -- we're advising on CategorizedModuleNamingScheme Add the following entries to your ~/.bashrc (use your favorite CLI editor like nano or vim ): # Easybuild export EASYBUILD_PREFIX = $HOME /.local/easybuild export EASYBUILD_MODULES_TOOL = Lmod export EASYBUILD_MODULE_NAMING_SCHEME = CategorizedModuleNamingScheme # Use the below variable to run: # module use $LOCAL_MODULES # module load tools/EasyBuild export LOCAL_MODULES = ${ EASYBUILD_PREFIX } /modules/all alias ma = \"module avail\" alias ml = \"module list\" function mu (){ module use $LOCAL_MODULES module load tools/EasyBuild } Then source this file to expose the environment variables: $> source ~/.bashrc $> echo $EASYBUILD_PREFIX /home/users//.local/easybuild Now let's install EasyBuild following the official procedure . 
Install EasyBuild in a temporary directory and use this temporary installation to build an EasyBuild module in your $EASYBUILD_PREFIX : # pick installation prefix, and install EasyBuild into it export EB_TMPDIR = /tmp/ $USER /eb_tmp python3 -m pip install --ignore-installed --prefix $EB_TMPDIR easybuild # update environment to use this temporary EasyBuild installation export PATH = $EB_TMPDIR /bin: $PATH export PYTHONPATH = $( /bin/ls -rtd -1 $EB_TMPDIR /lib*/python*/site-packages | tail -1 ) : $PYTHONPATH export EB_PYTHON = python3 # install Easybuild in your $EASYBUILD_PREFIX eb --install-latest-eb-release --prefix $EASYBUILD_PREFIX Now you can use your freshly built software. The main EasyBuild command is eb : $> eb --version # expected ;) -bash: eb: command not found # Load the newly installed Easybuild $> echo $MODULEPATH /opt/apps/resif/data/stable/default/modules/all/ $> module use $LOCAL_MODULES $> echo $MODULEPATH /home/users//.local/easybuild/modules/all:/opt/apps/resif/data/stable/default/modules/all $> module spider Easybuild $> module load tools/EasyBuild # TAB is your friend... $> eb --version This is EasyBuild 3 .6.1 ( framework: 3 .6.1, easyblocks: 3 .6.1 ) on host iris-001. Since you are going to use quite often the above command to use locally built modules and load easybuild, an alias mu is provided and can be used from now on. Use it now . $> mu $> module avail # OR 'ma' To get help on the EasyBuild options, use the -h or -H option flags: $> eb -h $> eb -H","title":"a. Installation"},{"location":"development/build-tools/easybuild/#b-local-vs-global-usage","text":"As you probably guessed, we are going to use two places for the installed software: local builds ~/.local/easybuild (see $LOCAL_MODULES ) global builds (provided to you by the UL HPC team) in /opt/apps/resif/data/stable/default/modules/all (see default $MODULEPATH ). Default usage (with the eb command) would install your software and modules in ~/.local/easybuild . Before that, let's explore the basic usage of EasyBuild and the eb command. # Search for an Easybuild recipy with 'eb -S ' $> eb -S Spark CFGS1 = /opt/apps/resif/data/easyconfigs/ulhpc/default/easybuild/easyconfigs/s/Spark CFGS2 = /home/users//.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/s/Spark * $CFGS1 /Spark-2.1.1.eb * $CFGS1 /Spark-2.3.0-intel-2018a-Hadoop-2.7-Java-1.8.0_162-Python-3.6.4.eb * $CFGS2 /Spark-1.3.0.eb * $CFGS2 /Spark-1.4.1.eb * $CFGS2 /Spark-1.5.0.eb * $CFGS2 /Spark-1.6.0.eb * $CFGS2 /Spark-1.6.1.eb * $CFGS2 /Spark-2.0.0.eb * $CFGS2 /Spark-2.0.2.eb * $CFGS2 /Spark-2.2.0-Hadoop-2.6-Java-1.8.0_144.eb * $CFGS2 /Spark-2.2.0-Hadoop-2.6-Java-1.8.0_152.eb * $CFGS2 /Spark-2.2.0-intel-2017b-Hadoop-2.6-Java-1.8.0_152-Python-3.6.3.eb","title":"b. Local vs. global usage"},{"location":"development/build-tools/easybuild/#c-build-software-using-provided-easyconfig-file","text":"In this part, we propose to build High Performance Linpack (HPL) using EasyBuild. HPL is supported by EasyBuild, this means that an EasyConfig file allowing to build HPL is already provided with EasyBuild. First of all, let's check if that software is not available by default: $> module spider HPL Lmod has detected the following error: Unable to find: \"HPL\" Then, search for available EasyConfig files with HPL in their name. The EasyConfig files are named with the .eb extension. 
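Before searching for a recipe, you can also double-check that HPL is available neither in the global software set nor in your local EasyBuild tree -- a minimal sketch relying on the mu helper defined earlier:
$> module avail HPL    # global modules only: no match
$> mu                  # prepend $LOCAL_MODULES and load tools/EasyBuild
$> module avail HPL    # still no match, so we indeed have to build it ourselves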
# Search for an EasyBuild recipe with 'eb -S ' $> eb -S HPL-2.2 CFGS1 = /home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/h/HPL * $CFGS1 /HPL-2.2-foss-2016.07.eb * $CFGS1 /HPL-2.2-foss-2016.09.eb * $CFGS1 /HPL-2.2-foss-2017a.eb * $CFGS1 /HPL-2.2-foss-2017b.eb * $CFGS1 /HPL-2.2-foss-2018a.eb * $CFGS1 /HPL-2.2-fosscuda-2018a.eb * $CFGS1 /HPL-2.2-giolf-2017b.eb * $CFGS1 /HPL-2.2-giolf-2018a.eb * $CFGS1 /HPL-2.2-giolfc-2017b.eb * $CFGS1 /HPL-2.2-gmpolf-2017.10.eb * $CFGS1 /HPL-2.2-goolfc-2016.08.eb * $CFGS1 /HPL-2.2-goolfc-2016.10.eb * $CFGS1 /HPL-2.2-intel-2017.00.eb * $CFGS1 /HPL-2.2-intel-2017.01.eb * $CFGS1 /HPL-2.2-intel-2017.02.eb * $CFGS1 /HPL-2.2-intel-2017.09.eb * $CFGS1 /HPL-2.2-intel-2017a.eb * $CFGS1 /HPL-2.2-intel-2017b.eb * $CFGS1 /HPL-2.2-intel-2018.00.eb * $CFGS1 /HPL-2.2-intel-2018.01.eb * $CFGS1 /HPL-2.2-intel-2018.02.eb * $CFGS1 /HPL-2.2-intel-2018a.eb * $CFGS1 /HPL-2.2-intelcuda-2016.10.eb * $CFGS1 /HPL-2.2-iomkl-2016.09-GCC-4.9.3-2.25.eb * $CFGS1 /HPL-2.2-iomkl-2016.09-GCC-5.4.0-2.26.eb * $CFGS1 /HPL-2.2-iomkl-2017.01.eb * $CFGS1 /HPL-2.2-iomkl-2017a.eb * $CFGS1 /HPL-2.2-iomkl-2017b.eb * $CFGS1 /HPL-2.2-iomkl-2018.02.eb * $CFGS1 /HPL-2.2-iomkl-2018a.eb * $CFGS1 /HPL-2.2-pomkl-2016.09.eb We are going to build HPL 2.2 against the intel toolchain, typically the 2017a version, which is available by default on the platform. Pick the corresponding recipe (for instance HPL-2.2-intel-2017a.eb ) and install it with eb <name>.eb [-D] -r -D enables the dry-run mode to check what's going to be installed -- ALWAYS try it first -r enables the robot mode to automatically install all dependencies while searching for easyconfigs in a set of pre-defined directories -- you can also prepend new directories to search for eb files (like the current directory $PWD ) using the option and syntax --robot-paths=$PWD: (do not forget the ':'). See Controlling the robot search path documentation The $CFGS/ prefix should be dropped unless you know what you're doing (and thus have previously defined the variable -- see the first output of the eb -S [...] command).
So let's install HPL version 2.2 and FIRST check which dependencies are satisfied with -Dr : $> eb HPL-2.2-intel-2017a.eb -Dr == temporary log file in case of crash /tmp/eb-CTC2hq/easybuild-gfLf1W.log Dry run: printing build status of easyconfigs and dependencies CFGS = /home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs * [ x ] $CFGS /m/M4/M4-1.4.17.eb ( module: devel/M4/1.4.17 ) * [ x ] $CFGS /b/Bison/Bison-3.0.4.eb ( module: lang/Bison/3.0.4 ) * [ x ] $CFGS /f/flex/flex-2.6.0.eb ( module: lang/flex/2.6.0 ) * [ x ] $CFGS /z/zlib/zlib-1.2.8.eb ( module: lib/zlib/1.2.8 ) * [ x ] $CFGS /b/binutils/binutils-2.27.eb ( module: tools/binutils/2.27 ) * [ x ] $CFGS /g/GCCcore/GCCcore-6.3.0.eb ( module: compiler/GCCcore/6.3.0 ) * [ x ] $CFGS /m/M4/M4-1.4.18-GCCcore-6.3.0.eb ( module: devel/M4/1.4.18-GCCcore-6.3.0 ) * [ x ] $CFGS /z/zlib/zlib-1.2.11-GCCcore-6.3.0.eb ( module: lib/zlib/1.2.11-GCCcore-6.3.0 ) * [ x ] $CFGS /h/help2man/help2man-1.47.4-GCCcore-6.3.0.eb ( module: tools/help2man/1.47.4-GCCcore-6.3.0 ) * [ x ] $CFGS /b/Bison/Bison-3.0.4-GCCcore-6.3.0.eb ( module: lang/Bison/3.0.4-GCCcore-6.3.0 ) * [ x ] $CFGS /f/flex/flex-2.6.3-GCCcore-6.3.0.eb ( module: lang/flex/2.6.3-GCCcore-6.3.0 ) * [ x ] $CFGS /b/binutils/binutils-2.27-GCCcore-6.3.0.eb ( module: tools/binutils/2.27-GCCcore-6.3.0 ) * [ x ] $CFGS /i/icc/icc-2017.1.132-GCC-6.3.0-2.27.eb ( module: compiler/icc/2017.1.132-GCC-6.3.0-2.27 ) * [ x ] $CFGS /i/ifort/ifort-2017.1.132-GCC-6.3.0-2.27.eb ( module: compiler/ifort/2017.1.132-GCC-6.3.0-2.27 ) * [ x ] $CFGS /i/iccifort/iccifort-2017.1.132-GCC-6.3.0-2.27.eb ( module: toolchain/iccifort/2017.1.132-GCC-6.3.0-2.27 ) * [ x ] $CFGS /i/impi/impi-2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27.eb ( module: mpi/impi/2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27 ) * [ x ] $CFGS /i/iimpi/iimpi-2017a.eb ( module: toolchain/iimpi/2017a ) * [ x ] $CFGS /i/imkl/imkl-2017.1.132-iimpi-2017a.eb ( module: numlib/imkl/2017.1.132-iimpi-2017a ) * [ x ] $CFGS /i/intel/intel-2017a.eb ( module: toolchain/intel/2017a ) * [ ] $CFGS /h/HPL/HPL-2.2-intel-2017a.eb ( module: tools/HPL/2.2-intel-2017a ) == Temporary log file ( s ) /tmp/eb-CTC2hq/easybuild-gfLf1W.log* have been removed. == Temporary directory /tmp/eb-CTC2hq has been removed. As can be seen, there is a single element to install and this has not been done so far (box not checked). All the dependencies are already present (box checked). Let's really install the selected software -- you may want to prefix the eb command with the time to collect the installation time: $> time eb HPL-2.2-intel-2017a.eb -r # Remove the '-D' (dry-run) flags == temporary log file in case of crash /tmp/eb-nub_oL/easybuild-J8sNzx.log == resolving dependencies ... == processing EasyBuild easyconfig /home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/h/HPL/HPL-2.2-intel-2017a.eb == building and installing tools/HPL/2.2-intel-2017a... == fetching files... == creating build dir, resetting environment... == unpacking... == patching... == preparing... == configuring... == building... == testing... == installing... == taking care of extensions... == postprocessing... == sanity checking... == cleaning up... == creating module... == permissions... == packaging... 
== COMPLETED: Installation ended successfully == Results of the build can be found in the log file ( s ) /home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a/easybuild/easybuild-HPL-2.2-20180608.094831.log == Build succeeded for 1 out of 1 == Temporary log file ( s ) /tmp/eb-nub_oL/easybuild-J8sNzx.log* have been removed. == Temporary directory /tmp/eb-nub_oL has been removed. real 0m56.472s user 0m15.268s sys 0m19.998s Check the installed software: $> module av HPL ------------------------- /home/users//.local/easybuild/modules/all ------------------------- tools/HPL/2.2-intel-2017a Use \"module spider\" to find all possible modules. Use \"module keyword key1 key2 ...\" to search for all possible modules matching any of the \"keys\". $> module spider HPL ---------------------------------------------------------------------------------------------------- tools/HPL: tools/HPL/2.2-intel-2017a ---------------------------------------------------------------------------------------------------- Description: HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark. This module can be loaded directly: module load tools/HPL/2.2-intel-2017a Help: Description =========== HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark. More information ================ - Homepage: http://www.netlib.org/benchmark/hpl/ $> module show tools/HPL --------------------------------------------------------------------------------------------------- /home/users/svarrette/.local/easybuild/modules/all/tools/HPL/2.2-intel-2017a.lua: --------------------------------------------------------------------------------------------------- help([[ Description =========== HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark. More information ================ - Homepage: http://www.netlib.org/benchmark/hpl/ ]]) whatis(\"Description: HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. 
It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark.\") whatis(\"Homepage: http://www.netlib.org/benchmark/hpl/\") conflict(\"tools/HPL\") load(\"toolchain/intel/2017a\") prepend_path(\"PATH\",\"/home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a/bin\") setenv(\"EBROOTHPL\",\"/home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a\") setenv(\"EBVERSIONHPL\",\"2.2\") setenv(\"EBDEVELHPL\",\"/home/users/svarrette/.local/easybuild/software/tools/HPL/2.2-intel-2017a/easybuild/tools-HPL-2.2-intel-2017a-easybuild-devel\") Note : to see the (locally) installed software, the MODULEPATH variable should include the $HOME/.local/easybuild/modules/all/ (of $LOCAL_MODULES ) path (which is what happens when using module use -- see the mu command) You can now load the freshly installed module like any other: $> module load tools/HPL $> module list Currently Loaded Modules: 1 ) tools/EasyBuild/3.6.1 7 ) mpi/impi/2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27 2 ) compiler/GCCcore/6.3.0 8 ) toolchain/iimpi/2017a 3 ) tools/binutils/2.27-GCCcore-6.3.0 9 ) numlib/imkl/2017.1.132-iimpi-2017a 4 ) compiler/icc/2017.1.132-GCC-6.3.0-2.27 10 ) toolchain/intel/2017a 5 ) compiler/ifort/2017.1.132-GCC-6.3.0-2.27 11 ) tools/HPL/2.2-intel-2017a 6 ) toolchain/iccifort/2017.1.132-GCC-6.3.0-2.27 Tips : When you load a module generated by Easybuild, it is installed within the directory reported by the $EBROOT variable. In the above case, you will find the generated binary for HPL in ${EBROOTHPL}/bin/xhpl . You may want to test the newly built HPL benchmark (you need to reserve at least 4 cores for that to succeed): # In another terminal, connect to the cluster frontend # Have an interactive job ############### iris cluster (slurm) ############### ( access-iris ) $> si -n 4 # this time reserve for 4 (mpi) tasks $> mu $> module load tools/HPL $> cd $EBROOTHPL $> ls $> cd bin $> ls $> srun -n $SLURM_NTASKS ./xhpl Running HPL benchmarks requires more attention -- a full tutorial is dedicated to it. Yet you can see that we obtained HPL 2.2 without writing any EasyConfig file.","title":"c. Build software using provided EasyConfig file"},{"location":"development/build-tools/easybuild/#d-build-software-using-a-customized-easyconfig-file","text":"There are multiple ways to amend an EasyConfig file. Check the --try-* option flags for all the possibilities. Generally you want to do that when the up-to-date version of the software you want is not available as a recipy within Easybuild. For instance, a very popular building environment CMake has recently released a new version (3.11.3), which you want to give a try. It is not available as module, so let's build it. First let's check for available easyconfigs recipy if one exist for the expected version: $> eb -S Cmake-3 [...] * $CFGS2/CMake-3.9.1.eb * $CFGS2/CMake-3.9.4-GCCcore-6.4.0.eb * $CFGS2/CMake-3.9.5-GCCcore-6.4.0.eb We are going to reuse one of the latest EasyConfig available, for instance lets copy $CFGS2/CMake-3.9.1.eb # Work in a dedicated directory $> mkdir -p ~/software/CMake $> cd ~/software/CMake $> eb -S Cmake-3 | less # collect the definition of the CFGS2 variable $> CFGS2 = /home/users/svarrette/.local/easybuild/software/tools/EasyBuild/3.6.1/lib/python2.7/site-packages/easybuild_easyconfigs-3.6.1-py2.7.egg/easybuild/easyconfigs/c/CMake $> cp $CFGS2 /CMake-3.9.1.eb . 
$> mv CMake-3.9.1.eb CMake-3.11.3.eb # Adapt the version suffix to the latest release You need to perform the following changes (here: version upgrade, and adapted checksum) --- CMake-3.9.1.eb 2018-06-08 10:56:24.447699000 +0200 +++ CMake-3.11.3.eb 2018-06-08 11:07:39.716672000 +0200 @@ -1,7 +1,7 @@ easyblock = 'ConfigureMake' name = 'CMake' -version = '3.9.1' +version = '3.11.3' homepage = 'http://www.cmake.org' description = \"\"\"CMake, the cross-platform, open-source build system. @@ -11,7 +11,7 @@ source_urls = ['http://www.cmake.org/files/v%(version_major_minor)s'] sources = [SOURCELOWER_TAR_GZ] -checksums = ['d768ee83d217f91bb597b3ca2ac663da7a8603c97e1f1a5184bc01e0ad2b12bb'] +checksums = ['287135b6beb7ffc1ccd02707271080bbf14c21d80c067ae2c0040e5f3508c39a'] configopts = '-- -DCMAKE_USE_OPENSSL=1' If the checksum is not provided on the official software page , you will need to compute it yourself by downloading the sources and collecting the checksum: $> gsha256sum ~/Download/cmake-3.11.3.tar.gz 287135b6beb7ffc1ccd02707271080bbf14c21d80c067ae2c0040e5f3508c39a cmake-3.11.3.tar.gz Let's build it: $> eb ./CMake-3.11.3.eb -Dr == temporary log file in case of crash /tmp/eb-UX7APP/easybuild-gxnyIv.log Dry run: printing build status of easyconfigs and dependencies CFGS = /mnt/irisgpfs/users//software/CMake * [ ] $CFGS /CMake-3.11.3.eb ( module: devel/CMake/3.11.3 ) == Temporary log file ( s ) /tmp/eb-UX7APP/easybuild-gxnyIv.log* have been removed. == Temporary directory /tmp/eb-UX7APP has been removed. Dependencies are fine, so let's build it: $> time eb ./CMake-3.11.3.eb -r == temporary log file in case of crash /tmp/eb-JjF92B/easybuild-RjzRjb.log == resolving dependencies ... == processing EasyBuild easyconfig /mnt/irisgpfs/users//software/CMake/CMake-3.11.3.eb == building and installing devel/CMake/3.11.3... == fetching files... == creating build dir, resetting environment... == unpacking... == patching... == preparing... == configuring... == building... == testing... == installing... == taking care of extensions... == postprocessing... == sanity checking... == cleaning up... == creating module... == permissions... == packaging... == COMPLETED: Installation ended successfully == Results of the build can be found in the log file ( s ) /home/users//.local/easybuild/software/devel/CMake/3.11.3/easybuild/easybuild-CMake-3.11.3-20180608.111611.log == Build succeeded for 1 out of 1 == Temporary log file ( s ) /tmp/eb-JjF92B/easybuild-RjzRjb.log* have been removed. == Temporary directory /tmp/eb-JjF92B has been removed. real 7m40.358s user 5m56.442s sys 1m15.185s Note: you can follow the progress of the installation from a separate shell on the node. Check the result: $> module av CMake That's all ;-) Final remarks This workflow (copying an existing recipe, adapting the filename, the version and the source checksum) covers most of the test cases. Yet sometimes you need to work on a more complex dependency chain, in which case you'll need to adapt many eb files. For each such build, you then need to instruct EasyBuild to also search for easyconfigs in the current directory: $> eb <name>.eb --robot = $PWD : $EASYBUILD_ROBOT -D $> eb <name>.eb --robot = $PWD : $EASYBUILD_ROBOT","title":"d. Build software using a customized EasyConfig file"},{"location":"development/build-tools/easybuild/#old-build-software-using-your-own-easyconfig-file","text":"Below are obsolete instructions to write a full EasyConfig file, left for archiving and informational purposes.
For this example, we create an EasyConfig file to build GZip 1.4 with the GOOLF toolchain. Open your favorite editor and create a file named gzip-1.4-goolf-1.4.10.eb with the following content: easyblock = 'ConfigureMake' name = 'gzip' version = '1.4' homepage = 'http://www.gnu.org/software/gzip/' description = \"gzip (GNU zip) is a popular data compression program as a replacement for compress\" # use the GOOLF toolchain toolchain = {'name': 'goolf', 'version': '1.4.10'} # specify that GCC compiler should be used to build gzip preconfigopts = \"CC='gcc'\" # source tarball filename sources = ['%s-%s.tar.gz'%(name,version)] # download location for source files source_urls = ['http://ftpmirror.gnu.org/gzip'] # make sure the gzip and gunzip binaries are available after installation sanity_check_paths = { 'files': [\"bin/gunzip\", \"bin/gzip\"], 'dirs': [] } # run 'gzip -h' and 'gzip --version' after installation sanity_check_commands = [True, ('gzip', '--version')] This is a simple EasyConfig. Most of the fields are self-descriptive. No build method is explicitely defined, so it uses by default the standard configure/make/make install approach. Let's build GZip with this EasyConfig file: $> time eb gzip-1.4-goolf-1.4.10.eb == temporary log file in case of crash /tmp/eb-hiyyN1/easybuild-ynLsHC.log == processing EasyBuild easyconfig /mnt/nfs/users/homedirs/mschmitt/gzip-1.4-goolf-1.4.10.eb == building and installing base/gzip/1.4-goolf-1.4.10... == fetching files... == creating build dir, resetting environment... == unpacking... == patching... == preparing... == configuring... == building... == testing... == installing... == taking care of extensions... == packaging... == postprocessing... == sanity checking... == cleaning up... == creating module... == COMPLETED: Installation ended successfully == Results of the build can be found in the log file /home/users/mschmitt/.local/easybuild/software/base/gzip/1.4-goolf-1.4.10/easybuild/easybuild-gzip-1.4-20150624.114745.log == Build succeeded for 1 out of 1 == temporary log file(s) /tmp/eb-hiyyN1/easybuild-ynLsHC.log* have been removed. == temporary directory /tmp/eb-hiyyN1 has been removed. real 1m39.982s user 0m52.743s sys 0m11.297s We can now check that our version of GZip is available via the modules: $> module avail gzip --------- /mnt/nfs/users/homedirs/mschmitt/.local/easybuild/modules/all --------- base/gzip/1.4-goolf-1.4.10","title":"(OLD) Build software using your own EasyConfig file"},{"location":"development/build-tools/easybuild/#to-go-further-into-details","text":"Please refer to the following pointers to get additionnal features: EasyBuild homepage EasyBuild tutorial EasyBuild documentation Getting started Using EasyBuild Step-by-step guide","title":"To go further into details"},{"location":"development/build-tools/spack/","text":"","title":"Spack"},{"location":"development/performance-debugging-tools/advisor/","text":"Intel Advisor \u00b6 Intel Advisor provides two workflows to help ensure that Fortran, C, and C++ applications can make the most of modern Intel processors. Advisor contains three key capabilities: Vectorization Advisor identifies loops that will benefit most from vectorization, specifies what is blocking effective vectorization, finds the benefit of alternative data reorganizations, and increases the confidence that vectorization is safe. Threading Advisor is used for threading design and prototyping and to analyze, design, tune, and check threading design options without disrupting normal code development. 
Advisor Roofline enables visualization of actual performance against hardware-imposed performance ceilings (rooflines) such as memory bandwidth and compute capacity - which provide an ideal roadmap of potential optimization steps. The links to each capability above provide detailed information regarding how to use each feature in Advisor. For more information on Intel Advisor, visit this page . Environmental models for Advisor on UL-HPC\u00b6 module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load perf/Advisor/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 Interactive mode \u00b6 # Compilation $ icc -qopenmp example.c # Code execution $ export OMP_NUM_THREADS = 16 $ advixe-cl -collect survey -project-dir my_result -- ./a.out # Report collection $ advixe-cl -report survey -project-dir my_result # To see the result in GUI $ advixe-gui my_result $ advixe-cl will list the available analysis types, and $ advixe-cl -help report will list the available reports in Advisor. Batch mode \u00b6 Shared memory programming model (OpenMP) \u00b6 Example for the batch script: #!/bin/bash -l #SBATCH -J Advisor #SBATCH -N 1 ###SBATCH -A #SBATCH -c 28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load perf/Advisor/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 export OMP_NUM_THREADS = 16 advixe-cl -collect survey -project-dir my_result -- ./a.out Distributed memory programming model (MPI) \u00b6 To compile a pure MPI application run $ mpiicc example.c and for MPI+OpenMP run $ mpiicc -qopenmp example.c Example for the batch script: #!/bin/bash -l #SBATCH -J Advisor #SBATCH -N 2 ###SBATCH -A #SBATCH --ntasks-per-node=28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load perf/Advisor/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 srun -n ${ SLURM_NTASKS } advixe-cl --collect survey --project-dir result -- ./a.out To collect the result and see it in the GUI, use the commands below # Report collection $ advixe-cl --report survey --project-dir result # Result visualization $ advixe-gui result The figure below shows the hybrid (MPI+OpenMP) analysis results: Tip If you find any issue with the instructions above, please report it to us by opening a support ticket .","title":"Intel Advisor"},{"location":"development/performance-debugging-tools/advisor/#intel-advisor","text":"Intel Advisor provides two workflows to help ensure that Fortran, C, and C++ applications can make the most of modern Intel processors. Advisor contains three key capabilities: Vectorization Advisor identifies loops that will benefit most from vectorization, specifies what is blocking effective vectorization, finds the benefit of alternative data reorganizations, and increases the confidence that vectorization is safe. Threading Advisor is used for threading design and prototyping and to analyze, design, tune, and check threading design options without disrupting normal code development.
For more information on Intel Advisor, visit this page .","title":"Intel Advisor"},{"location":"development/performance-debugging-tools/advisor/#environmental-models-for-advisor-on-ul-hpc","text":"module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load perf/Advisor/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0","title":"Environmental models for Advisor on UL-HPC\u00b6"},{"location":"development/performance-debugging-tools/advisor/#interactive-mode","text":"# Compilation $ icc -qopenmp example.c # Code execution $ export OMP_NUM_THREADS = 16 $ advixe-cl -collect survey -project-dir my_result -- ./a.out # Report collection $ advixe-cl -report survey -project-dir my_result # To see the result in GUI $ advixe-gui my_result $ advixe-cl will list out the analysis types and $ advixe-cl -hlep report will list out available reports in Advisor.","title":"Interactive mode"},{"location":"development/performance-debugging-tools/advisor/#batch-mode","text":"","title":"Batch mode"},{"location":"development/performance-debugging-tools/advisor/#shared-memory-programming-model-openmp","text":"Example for the batch script: #!/bin/bash -l #SBATCH -J Advisor #SBATCH -N 1 ###SBATCH -A #SBATCH -c 28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load perf/Advisor/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 export OMP_NUM_THREADS = 16 advixe-cl -collect survey -project-dir my_result -- ./a.out","title":"Shared memory programming model (OpenMP)"},{"location":"development/performance-debugging-tools/advisor/#distributed-memory-programming-model-mpi","text":"To compile just MPI application run $ mpiicc example.c and for MPI+OpenMP run $ mpiicc -qopenmp example.c Example for the batch script: #!/bin/bash -l #SBATCH -J Advisor #SBATCH -N 2 ###SBATCH -A #SBATCH --ntasks-per-node=28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load perf/Advisor/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 srun -n ${ SLURM_NTASKS } advixe-cl --collect survey --project-dir result -- ./a.out To collect the result and see the result in GUI use the below commands # Report collection $ advixe-cl --report survey --project-dir result # Result visualization $ advixe-gui result The below figure shows the hybrid(MPI+OpenMP) programming analysis results: Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Distributed memory programming model (MPI)"},{"location":"development/performance-debugging-tools/aps/","text":"Application Performance Snapshot (APS) \u00b6 Application Performance Snapshot (APS) is a lightweight open source profiling tool developed by the Intel VTune developers. Use Application Performance Snapshot for a quick view into a shared memory or MPI application's use of available hardware (CPU, FPU, and memory). Application Performance Snapshot analyzes your application's time spent in MPI, MPI and OpenMP imbalance, memory access efficiency, FPU usage, and I/O and memory footprint. After analysis, it displays basic performance enhancement opportunities for systems using Intel platforms. 
Use this tool as a first step in application performance analysis to get a simple snapshot of key optimization areas and learn about profiling tools that specialize in particular aspects of application performance. Prerequisites \u00b6 Optional Configuration Optional: Use the following software to get an advanced metric set when running Application Performance Snapshot: Recommended compilers: Intel C/C++ or Fortran Compiler (other compilers can be used, but information about OpenMP imbalance is only available from the Intel OpenMP library) Use Intel MPI library version 2017 or later. Other MPICH-based MPI implementations can be used, but information about MPI imbalance is only available from the Intel MPI library. There is no support for OpenMPI. Optional: Enable system-wide monitoring to reduce collection overhead and collect memory bandwidth measurements. Use one of these options to enable system-wide monitoring: Set the /proc/sys/kernel/perf_event_paranoid value to 0 (or less), or Install the Intel VTune Amplifier drivers. Driver sources are available in /internal/sepdk/src . Installation instructions are available online at https://software.intel.com/en-us/vtune-amplifier-help-building-and-installing-the-sampling-drivers-for-linux-targets . Before running the tool, set up your environment appropriately: module purge module load swenv/default-env/v1.2-20191021-production module load tools/VTune/2019_update4 module load toolchain/intel/2019a Analyzing Shared Memory Applications \u00b6 Run the following commands (interactive mode): # Compilation $ icc -qopenmp example.c # Code execution aps --collection-mode = all -r report_output ./a.out aps -help will list out --collection-mode= available in APS. # To create a .html file aps-report -g report_output # To open an APS results in the browser firefox report_output_.html The below figure shows the example of result can be seen in the browser: # To see the command line output $ aps-report Example for the batch script: #!/bin/bash -l #SBATCH -J APS #SBATCH -N 1 ###SBATCH -A #SBATCH -c 28 #SBATCH --time=00:10:00 #SBATCH -p batch #SBATCH --nodelist=node0xx module purge module load swenv/default-env/v1.2-20191021-production module load tools/VTune/2019_update4 module load toolchain/intel/2019a export OMP_NUM_THREADS = 16 aps --collection-mode = all -r report_output ./a.out Analyzing MPI Applications \u00b6 To compile just MPI application run $ mpiicc example.c and for MPI+OpenMP run $ mpiicc -qopenmp example.c Example for the batch script: #!/bin/bash -l #SBATCH -J APS #SBATCH -N 2 ###SBATCH -A #SBATCH --ntasks-per-node=14 #SBATCH -c 2 #SBATCH --time=00:10:00 #SBATCH -p batch #SBATCH --reservation= module purge module load swenv/default-env/v1.2-20191021-production module load tools/VTune/2019_update4 module load toolchain/intel/2019a # To collect all the results export MPS_STAT_LEVEL = ${ SLURM_CPUS_PER_TASK :- 1 } # An option for the OpenMP+MPI application export OMP_NUM_THREADS = ${ SLURM_CPUS_PER_TASK :- 1 } srun -n ${ SLURM_NTASKS } aps --collection-mode = mpi -r result_output ./a.out The below figure shows the hybrid(MPI+OpenMP) programming analysis results: Next Steps \u00b6 Intel Trace Analyzer and Collector is a graphical tool for understanding MPI application behavior, quickly identifying bottlenecks, improving correctness, and achieving high performance for parallel cluster applications running on Intel architecture. Improve weak and strong scaling for applications. Get started . 
Intel VTune Amplifier provides a deep insight into node-level performance including algorithmic hotspot analysis, OpenMP threading, general exploration microarchitecture analysis, memory access efficiency, and more. It supports C/C++, Fortran, Java, Python, and profiling in containers. Get started . Intel Advisor provides two tools to help ensure your Fortran, C, and C++ applications realize full performance potential on modern processors. Get started . Vectorization Advisor is an optimization tool to identify loops that will benefit most from vectorization, analyze what is blocking effective vectorization, and forecast the benefit of alternative data reorganizations Threading Advisor is a threading design and prototyping tool to analyze, design, tune, and check threading design options without disrupting a regular environment Quick Metrics Reference The following metrics are collected with Application Performance Snapshot. Additional detail about each of these metrics is available in the Intel VTune Amplifier online help . Elapsed Time : Execution time of specified application in seconds. SP GFLOPS : Number of single precision giga-floating point operations calculated per second. All double operations are converted to two single operations. SP GFLOPS metrics are only available for 3 rd Generation Intel Core processors, 5 th Generation Intel processors, and 6 th Generation Intel processors. Cycles per Instruction Retired (CPI) : The amount of time each executed instruction took measured by cycles. A CPI of 1 is considered acceptable for high performance computing (HPC) applications, but different application domains will have varied expected values. The CPI value tends to be greater when there is long-latency memory, floating-point, or SIMD operations, non-retired instructions due to branch mispredictions, or instruction starvation at the front end. MPI Time : Average time per process spent in MPI calls. This metric does not include the time spent in MPI_Finalize . High values could be caused by high wait times inside the library, active communications, or sub-optimal settings of the MPI library. The metric is available for MPICH-based MPIs. MPI Imbalance : CPU time spent by ranks spinning in waits on communication operations. A high value can be caused by application workload imbalance between ranks, or non-optimal communication schema or MPI library settings. This metric is available only for Intel MPI Library version 2017 and later. OpenMP Imbalance : Percentage of elapsed time that your application wastes at OpenMP synchronization barriers because of load imbalance. This metric is only available for the Intel OpenMP Runtime Library. CPU Utilization : Estimate of the utilization of all logical CPU cores on the system by your application. Use this metric to help evaluate the parallel efficiency of your application. A utilization of 100% means that your application keeps all of the logical CPU cores busy for the entire time that it runs. Note that the metric does not distinguish between useful application work and the time that is spent in parallel runtimes. Memory Stalls : Indicates how memory subsystem issues affect application performance. This metric measures a fraction of slots where pipeline could be stalled due to demand load or store instructions. If the metric value is high, review the Cache and DRAM Stalls and the percent of remote accesses metrics to understand the nature of memory-related performance bottlenecks. 
If the average memory bandwidth numbers are close to the system bandwidth limit, optimization techniques for memory bound applications may be required to avoid memory stalls. FPU Utilization : The effective FPU usage while the application was running. Use the FPU Utilization value to evaluate the vector efficiency of your application. The value is calculated by estimating the percentage of operations that are performed by the FPU. A value of 100% means that the FPU is fully loaded. Any value over 50% requires additional analysis. FPU metrics are only available for 3 rd Generation Intel Core processors, 5 th Generation Intel processors, and 6 th Generation Intel processors. I/O Operations : The time spent by the application while reading data from the disk or writing data to the disk. Read and Write values denote mean and maximum amounts of data read and written during the elapsed time. This metric is only available for MPI applications. Memory Footprint : Average per-rank and per-node consumption of both virtual and resident memory. Documentation and Resources \u00b6 Intel Performance Snapshot User Forum : User forum dedicated to all Intel Performance Snapshot tools, including Application Performance Snapshot Application Performance Snapshot : Application Performance Snapshot product page, see this page for support and online documentation Application Performance Snapshot User's Guide : Learn more about Application Performance Snapshot, including details on specific metrics and best practices for application optimization Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"APS"},{"location":"development/performance-debugging-tools/aps/#application-performance-snapshot-aps","text":"Application Performance Snapshot (APS) is a lightweight open source profiling tool developed by the Intel VTune developers. Use Application Performance Snapshot for a quick view into a shared memory or MPI application's use of available hardware (CPU, FPU, and memory). Application Performance Snapshot analyzes your application's time spent in MPI, MPI and OpenMP imbalance, memory access efficiency, FPU usage, and I/O and memory footprint. After analysis, it displays basic performance enhancement opportunities for systems using Intel platforms. Use this tool as a first step in application performance analysis to get a simple snapshot of key optimization areas and learn about profiling tools that specialize in particular aspects of application performance.","title":"Application Performance Snapshot (APS)"},{"location":"development/performance-debugging-tools/aps/#prerequisites","text":"Optional Configuration Optional: Use the following software to get an advanced metric set when running Application Performance Snapshot: Recommended compilers: Intel C/C++ or Fortran Compiler (other compilers can be used, but information about OpenMP imbalance is only available from the Intel OpenMP library) Use Intel MPI library version 2017 or later. Other MPICH-based MPI implementations can be used, but information about MPI imbalance is only available from the Intel MPI library. There is no support for OpenMPI. Optional: Enable system-wide monitoring to reduce collection overhead and collect memory bandwidth measurements. Use one of these options to enable system-wide monitoring: Set the /proc/sys/kernel/perf_event_paranoid value to 0 (or less), or Install the Intel VTune Amplifier drivers. Driver sources are available in /internal/sepdk/src . 
Installation instructions are available online at https://software.intel.com/en-us/vtune-amplifier-help-building-and-installing-the-sampling-drivers-for-linux-targets . Before running the tool, set up your environment appropriately: module purge module load swenv/default-env/v1.2-20191021-production module load tools/VTune/2019_update4 module load toolchain/intel/2019a","title":"Prerequisites"},{"location":"development/performance-debugging-tools/aps/#analyzing-shared-memory-applications","text":"Run the following commands (interactive mode): # Compilation $ icc -qopenmp example.c # Code execution aps --collection-mode = all -r report_output ./a.out aps -help will list out --collection-mode= available in APS. # To create a .html file aps-report -g report_output # To open an APS results in the browser firefox report_output_.html The below figure shows the example of result can be seen in the browser: # To see the command line output $ aps-report Example for the batch script: #!/bin/bash -l #SBATCH -J APS #SBATCH -N 1 ###SBATCH -A #SBATCH -c 28 #SBATCH --time=00:10:00 #SBATCH -p batch #SBATCH --nodelist=node0xx module purge module load swenv/default-env/v1.2-20191021-production module load tools/VTune/2019_update4 module load toolchain/intel/2019a export OMP_NUM_THREADS = 16 aps --collection-mode = all -r report_output ./a.out","title":"Analyzing Shared Memory Applications"},{"location":"development/performance-debugging-tools/aps/#analyzing-mpi-applications","text":"To compile just MPI application run $ mpiicc example.c and for MPI+OpenMP run $ mpiicc -qopenmp example.c Example for the batch script: #!/bin/bash -l #SBATCH -J APS #SBATCH -N 2 ###SBATCH -A #SBATCH --ntasks-per-node=14 #SBATCH -c 2 #SBATCH --time=00:10:00 #SBATCH -p batch #SBATCH --reservation= module purge module load swenv/default-env/v1.2-20191021-production module load tools/VTune/2019_update4 module load toolchain/intel/2019a # To collect all the results export MPS_STAT_LEVEL = ${ SLURM_CPUS_PER_TASK :- 1 } # An option for the OpenMP+MPI application export OMP_NUM_THREADS = ${ SLURM_CPUS_PER_TASK :- 1 } srun -n ${ SLURM_NTASKS } aps --collection-mode = mpi -r result_output ./a.out The below figure shows the hybrid(MPI+OpenMP) programming analysis results:","title":"Analyzing MPI Applications"},{"location":"development/performance-debugging-tools/aps/#next-steps","text":"Intel Trace Analyzer and Collector is a graphical tool for understanding MPI application behavior, quickly identifying bottlenecks, improving correctness, and achieving high performance for parallel cluster applications running on Intel architecture. Improve weak and strong scaling for applications. Get started . Intel VTune Amplifier provides a deep insight into node-level performance including algorithmic hotspot analysis, OpenMP threading, general exploration microarchitecture analysis, memory access efficiency, and more. It supports C/C++, Fortran, Java, Python, and profiling in containers. Get started . Intel Advisor provides two tools to help ensure your Fortran, C, and C++ applications realize full performance potential on modern processors. Get started . 
Vectorization Advisor is an optimization tool to identify loops that will benefit most from vectorization, analyze what is blocking effective vectorization, and forecast the benefit of alternative data reorganizations Threading Advisor is a threading design and prototyping tool to analyze, design, tune, and check threading design options without disrupting a regular environment Quick Metrics Reference The following metrics are collected with Application Performance Snapshot. Additional detail about each of these metrics is available in the Intel VTune Amplifier online help . Elapsed Time : Execution time of specified application in seconds. SP GFLOPS : Number of single precision giga-floating point operations calculated per second. All double operations are converted to two single operations. SP GFLOPS metrics are only available for 3 rd Generation Intel Core processors, 5 th Generation Intel processors, and 6 th Generation Intel processors. Cycles per Instruction Retired (CPI) : The amount of time each executed instruction took measured by cycles. A CPI of 1 is considered acceptable for high performance computing (HPC) applications, but different application domains will have varied expected values. The CPI value tends to be greater when there is long-latency memory, floating-point, or SIMD operations, non-retired instructions due to branch mispredictions, or instruction starvation at the front end. MPI Time : Average time per process spent in MPI calls. This metric does not include the time spent in MPI_Finalize . High values could be caused by high wait times inside the library, active communications, or sub-optimal settings of the MPI library. The metric is available for MPICH-based MPIs. MPI Imbalance : CPU time spent by ranks spinning in waits on communication operations. A high value can be caused by application workload imbalance between ranks, or non-optimal communication schema or MPI library settings. This metric is available only for Intel MPI Library version 2017 and later. OpenMP Imbalance : Percentage of elapsed time that your application wastes at OpenMP synchronization barriers because of load imbalance. This metric is only available for the Intel OpenMP Runtime Library. CPU Utilization : Estimate of the utilization of all logical CPU cores on the system by your application. Use this metric to help evaluate the parallel efficiency of your application. A utilization of 100% means that your application keeps all of the logical CPU cores busy for the entire time that it runs. Note that the metric does not distinguish between useful application work and the time that is spent in parallel runtimes. Memory Stalls : Indicates how memory subsystem issues affect application performance. This metric measures a fraction of slots where pipeline could be stalled due to demand load or store instructions. If the metric value is high, review the Cache and DRAM Stalls and the percent of remote accesses metrics to understand the nature of memory-related performance bottlenecks. If the average memory bandwidth numbers are close to the system bandwidth limit, optimization techniques for memory bound applications may be required to avoid memory stalls. FPU Utilization : The effective FPU usage while the application was running. Use the FPU Utilization value to evaluate the vector efficiency of your application. The value is calculated by estimating the percentage of operations that are performed by the FPU. A value of 100% means that the FPU is fully loaded. 
Any value over 50% requires additional analysis. FPU metrics are only available for 3 rd Generation Intel Core processors, 5 th Generation Intel processors, and 6 th Generation Intel processors. I/O Operations : The time spent by the application while reading data from the disk or writing data to the disk. Read and Write values denote mean and maximum amounts of data read and written during the elapsed time. This metric is only available for MPI applications. Memory Footprint : Average per-rank and per-node consumption of both virtual and resident memory.","title":"Next Steps"},{"location":"development/performance-debugging-tools/aps/#documentation-and-resources","text":"Intel Performance Snapshot User Forum : User forum dedicated to all Intel Performance Snapshot tools, including Application Performance Snapshot Application Performance Snapshot : Application Performance Snapshot product page, see this page for support and online documentation Application Performance Snapshot User's Guide : Learn more about Application Performance Snapshot, including details on specific metrics and best practices for application optimization Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Documentation and Resources"},{"location":"development/performance-debugging-tools/arm-forge/","text":"Arm Forge is the leading server and HPC development tool suite in research, industry, and academia for C, C++, Fortran, and Python high performance code on Linux. Arm Forge includes Arm DDT, the best debugger for time-saving high performance application debugging, Arm MAP, the trusted performance profiler for invaluable optimization advice, and Arm Performance Reports to help you analyze your HPC application runs. Environmental models for Arm Forge in ULHPC \u00b6 module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/ArmForge/19.1 module load tools/ArmReports/19.1 Interactive Mode \u00b6 To compile $ icc -qopenmp example.c For debugging, profiling and analysing # for debugging $ ddt ./a .out # for profiling $ map ./a .out # for analysis $ perf-report ./a .out Batch Mode \u00b6 Shared memory programming model (OpenMP) \u00b6 Example for the batch script: #!/bin/bash -l #SBATCH -J ArmForge #SBATCH -N 1 ###SBATCH -A #SBATCH -c 16 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/ArmForge/19.1 module load tools/ArmReports/19.1 export OMP_NUM_THREADS = ${ SLURM_CPUS_PER_TASK :- 1 } # for debugging $ ddt ./a .out # for profiling $ map ./a .out # for analysis $ perf-report ./a .out Distributed memory programming model (MPI) \u00b6 Example for the batch script: #!/bin/bash -l #SBATCH -J ArmForge ###SBATCH -A #SBATCH -N 2 #SBATCH --ntasks-per-node 28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/ArmForge/19.1 module load tools/ArmReports/19.1 # for debugging $ ddt srun -n ${ SLURM_NTASKS } ./a .out # for profiling $ map srun -n ${ SLURM_NTASKS } ./a .out # for analysis $ perf-report srun -n ${ SLURM_NTASKS } ./a .out To see the result Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Arm Forge"},{"location":"development/performance-debugging-tools/arm-forge/#environmental-models-for-arm-forge-in-ulhpc","text":"module 
purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/ArmForge/19.1 module load tools/ArmReports/19.1","title":"Environmental models for Arm Forge in ULHPC"},{"location":"development/performance-debugging-tools/arm-forge/#interactive-mode","text":"To compile $ icc -qopenmp example.c For debugging, profiling and analysing # for debugging $ ddt ./a .out # for profiling $ map ./a .out # for analysis $ perf-report ./a .out","title":"Interactive Mode"},{"location":"development/performance-debugging-tools/arm-forge/#batch-mode","text":"","title":"Batch Mode"},{"location":"development/performance-debugging-tools/arm-forge/#shared-memory-programming-model-openmp","text":"Example for the batch script: #!/bin/bash -l #SBATCH -J ArmForge #SBATCH -N 1 ###SBATCH -A #SBATCH -c 16 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/ArmForge/19.1 module load tools/ArmReports/19.1 export OMP_NUM_THREADS = ${ SLURM_CPUS_PER_TASK :- 1 } # for debugging $ ddt ./a .out # for profiling $ map ./a .out # for analysis $ perf-report ./a .out","title":"Shared memory programming model (OpenMP)"},{"location":"development/performance-debugging-tools/arm-forge/#distributed-memory-programming-model-mpi","text":"Example for the batch script: #!/bin/bash -l #SBATCH -J ArmForge ###SBATCH -A #SBATCH -N 2 #SBATCH --ntasks-per-node 28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/ArmForge/19.1 module load tools/ArmReports/19.1 # for debugging $ ddt srun -n ${ SLURM_NTASKS } ./a .out # for profiling $ map srun -n ${ SLURM_NTASKS } ./a .out # for analysis $ perf-report srun -n ${ SLURM_NTASKS } ./a .out To see the result Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Distributed memory programming model (MPI)"},{"location":"development/performance-debugging-tools/inspector/","text":"Intel Inspector \u00b6 Intel Inspector is a memory and threading error checking tool for users developing serial and multithreaded applications on Windows and Linux operating systems. 
The essential features of Intel Inspector for Linux are: Standalone GUI and command-line environments Preset analysis configurations (with some configurable settings) and the ability to create custom analysis configurations to help the user control analysis scope and cost Interactive debugging capability so one can investigate problems more deeply during the analysis A large number of reported memory errors, including on-demand memory leak detection Memory growth measurement to help ensure that the application uses no more memory than expected Data race, deadlock, lock hierarchy violation, and cross-thread stack access error detection Options for the Collect Action \u00b6 Option Description mi1 Detect memory leaks mi2 Detect memory problems mi3 Locate memory problems ti1 Detect deadlocks ti2 Detect deadlocks and data races ti3 Locate deadlocks and data races Options for the Report Action \u00b6 Option Description summary A brief statement of the total number of new problems found grouped by problem type problems A detailed report of detected problem sets in the result, along with their location in the source code observations A detailed report of all code locations used to form new problem sets status A brief statement of the total number of detected problems and the number that are not investigated , grouped by category For more information on Intel Inspector, please visit https://software.intel.com/en-us/intel-inspector-xe . Environmental models for Inspector on UL-HPC \u00b6 module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/Inspector/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 Interactive Mode \u00b6 To launch Inspector on Iris, we recommend that you use the command line tool inspxe-cl to collect data via batch jobs and then display results using the GUI, inspxe-gui , on a login node. 
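If you want to inspect a collected result graphically, a minimal illustrative sequence is sketched below; it assumes X11 forwarding is enabled on your SSH session and that a result directory named mi1 was produced by the commands shown afterwards:

```bash
# Open an existing Inspector result directory (here: mi1) in the GUI on a login node.
# Assumes you connected with X11 forwarding (e.g. `ssh -X`) and loaded the Inspector module.
inspxe-gui mi1 &
```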
# Compilation $ icc -qopenmp example.cc # Result collection $ inspxe-cl -collect mi1 -result-dir mi1 -- ./a.out # Result view $ cat inspxe-cl.txt === Start: [ 2020 /04/08 02 :11:50 ] === 2 new problem ( s ) found 1 Memory leak problem ( s ) detected 1 Memory not deallocated problem ( s ) detected === End: [ 2020 /04/08 02 :11:55 ] === Batch Mode \u00b6 Shared memory programming model (OpenMP) \u00b6 Example for the batch script: #!/bin/bash -l #SBATCH -J Inspector #SBATCH -N 1 ###SBATCH -A #SBATCH -c 28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/Inspector/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 inspxe-cl -collect mi1 -result-dir mi1 -- ./a.out ` To see the result: # Result view $ cat inspxe-cl.txt === Start: [ 2020 /04/08 02 :11:50 ] === 2 new problem ( s ) found 1 Memory leak problem ( s ) detected 1 Memory not deallocated problem ( s ) detected === End: [ 2020 /04/08 02 :11:55 ] === Distributed memory programming model (MPI) \u00b6 To compile: # Compilation $ mpiicc -qopenmp example.cc Example for batch script: #!/bin/bash -l #SBATCH -J Inspector #SBATCH -N 2 ###SBATCH -A #SBATCH --ntasks-per-node 28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/Inspector/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 srun -n { SLURM_NTASKS } inspxe-cl -collect = ti2 -r result ./a.out To see result output: $ cat inspxe-cl.txt 0 new problem ( s ) found === End: [ 2020 /04/08 16 :41:56 ] === === End: [ 2020 /04/08 16 :41:56 ] === 0 new problem ( s ) found === End: [ 2020 /04/08 16 :41:56 ] === Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Intel Inspector"},{"location":"development/performance-debugging-tools/inspector/#intel-inspector","text":"Intel Inspector is a memory and threading error checking tool for users developing serial and multithreaded applications on Windows and Linux operating systems. 
The essential features of Intel Inspector for Linux are: Standalone GUI and command-line environments Preset analysis configurations (with some configurable settings) and the ability to create custom analysis configurations to help the user control analysis scope and cost Interactive debugging capability so one can investigate problems more deeply during the analysis A large number of reported memory errors, including on-demand memory leak detection Memory growth measurement to help ensure that the application uses no more memory than expected Data race, deadlock, lock hierarchy violation, and cross-thread stack access error detection","title":"Intel Inspector"},{"location":"development/performance-debugging-tools/inspector/#options-for-the-collect-action","text":"Option Description mi1 Detect memory leaks mi2 Detect memory problems mi3 Locate memory problems ti1 Detect deadlocks ti2 Detect deadlocks and data races ti3 Locate deadlocks and data races","title":"Options for the Collect Action"},{"location":"development/performance-debugging-tools/inspector/#options-for-the-report-action","text":"Option Description summary A brief statement of the total number of new problems found grouped by problem type problems A detailed report of detected problem sets in the result, along with their location in the source code observations A detailed report of all code locations used to form new problem sets status A brief statement of the total number of detected problems and the number that are not investigated , grouped by category For more information on Intel Inspector, please visit https://software.intel.com/en-us/intel-inspector-xe .","title":"Options for the Report Action"},{"location":"development/performance-debugging-tools/inspector/#environmental-models-for-inspector-on-ul-hpc","text":"module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/Inspector/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0","title":"Environmental models for Inspector on UL-HPC"},{"location":"development/performance-debugging-tools/inspector/#interactive-mode","text":"To launch Inspector on Iris, we recommend that you use the command line tool inspxe-cl to collect data via batch jobs and then display results using the GUI, inspxe-gui , on a login node. 
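Beyond the tables above, the tool itself can list every supported analysis and report type; the short sketch below simply queries the built-in help (it assumes the Inspector module is loaded):

```bash
# List all available -collect analysis types and -report types
inspxe-cl -help collect
inspxe-cl -help report
```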
# Compilation $ icc -qopenmp example.cc # Result collection $ inspxe-cl -collect mi1 -result-dir mi1 -- ./a.out # Result view $ cat inspxe-cl.txt === Start: [ 2020 /04/08 02 :11:50 ] === 2 new problem ( s ) found 1 Memory leak problem ( s ) detected 1 Memory not deallocated problem ( s ) detected === End: [ 2020 /04/08 02 :11:55 ] ===","title":"Interactive Mode"},{"location":"development/performance-debugging-tools/inspector/#batch-mode","text":"","title":"Batch Mode"},{"location":"development/performance-debugging-tools/inspector/#shared-memory-programming-model-openmp","text":"Example for the batch script: #!/bin/bash -l #SBATCH -J Inspector #SBATCH -N 1 ###SBATCH -A #SBATCH -c 28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/Inspector/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 inspxe-cl -collect mi1 -result-dir mi1 -- ./a.out To see the result: # Result view $ cat inspxe-cl.txt === Start: [ 2020 /04/08 02 :11:50 ] === 2 new problem ( s ) found 1 Memory leak problem ( s ) detected 1 Memory not deallocated problem ( s ) detected === End: [ 2020 /04/08 02 :11:55 ] ===","title":"Shared memory programming model (OpenMP)"},{"location":"development/performance-debugging-tools/inspector/#distributed-memory-programming-model-mpi","text":"To compile: # Compilation $ mpiicc -qopenmp example.cc Example for batch script: #!/bin/bash -l #SBATCH -J Inspector #SBATCH -N 2 ###SBATCH -A #SBATCH --ntasks-per-node 28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/Inspector/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 srun -n ${ SLURM_NTASKS } inspxe-cl -collect = ti2 -r result ./a.out To see the result output: $ cat inspxe-cl.txt 0 new problem ( s ) found === End: [ 2020 /04/08 16 :41:56 ] === === End: [ 2020 /04/08 16 :41:56 ] === 0 new problem ( s ) found === End: [ 2020 /04/08 16 :41:56 ] === Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Distributed memory programming model (MPI)"},{"location":"development/performance-debugging-tools/itac/","text":"Intel Trace Analyzer and Collector (ITAC) is a pair of tools for analyzing MPI behavior in parallel applications. ITAC identifies MPI load imbalance and communication hotspots in order to help developers optimize MPI parallelization and minimize communication and synchronization in their applications. Using the Trace Collector on the ULHPC clusters must be done with a command line interface, while the Trace Analyzer supports both a command line and a graphical user interface which analyzes the data from the Trace Collector.
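# Result collection
As a rough orientation, the usual ITAC cycle with Intel MPI is sketched below; the file and executable names are purely illustrative and assume the ITAC module environment described in the next section:

```bash
# 1. Link against the Trace Collector
mpiicc -trace example.c -o example
# 2. Run the job; on MPI_Finalize a trace file (<executable>.stf) is written
srun -n ${SLURM_NTASKS} ./example
# 3. Inspect the trace in the Trace Analyzer GUI (e.g. from a login node with X11 forwarding)
traceanalyzer example.stf &
```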
Environmental models for ITAC in ULHPC \u00b6 module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/itac/2019.4.036 module load vis/GTK+/3.24.8-GCCcore-8.2.0 Interactive mode \u00b6 # Compilation $ icc -qopenmp -trace example.c # Code execution $ export OMP_NUM_THREADS = 16 $ -trace-collective ./a.out # Report collection $ export VT_STATISTICS = ON $ stftool tracefile.stf --print-statistics Batch mode \u00b6 Shared memory programming model (OpenMP) \u00b6 Example for the batch script: #!/bin/bash -l #SBATCH -J ITAC ###SBATCH -A #SBATCH -N 1 #SBATCH -c 16 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/itac/2019.4.036 module load vis/GTK+/3.24.8-GCCcore-8.2.0 $ export OMP_NUM_THREADS = 16 $ -trace-collective ./a.out To see the result $ export VT_STATISTICS = ON $ stftool tracefile.stf --print-statistics Distributed memory programming model (MPI) \u00b6 To compile $ mpiicc -trace example.c Example for the batch script: #!/bin/bash -l #SBATCH -J ITAC ###SBATCH -A #SBATCH -N 2 #SBATCH --ntasks-per-node=28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/itac/2019.4.036 module load vis/GTK+/3.24.8-GCCcore-8.2.0 srun -n ${ SLURM_NTASKS } -trace-collective ./a.out To collect the result and see the result in GUI use the commands below $ export VT_STATISTICS = ON $ stftool tracefile.stf --print-statistics Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Intel Trace Analyzer and Collector"},{"location":"development/performance-debugging-tools/itac/#environmental-models-for-itac-in-ulhpc","text":"module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/itac/2019.4.036 module load vis/GTK+/3.24.8-GCCcore-8.2.0","title":"Environmental models for ITAC in ULHPC"},{"location":"development/performance-debugging-tools/itac/#interactive-mode","text":"# Compilation $ icc -qopenmp -trace example.c # Code execution $ export OMP_NUM_THREADS = 16 $ -trace-collective ./a.out # Report collection $ export VT_STATISTICS = ON $ stftool tracefile.stf --print-statistics","title":"Interactive mode"},{"location":"development/performance-debugging-tools/itac/#batch-mode","text":"","title":"Batch mode"},{"location":"development/performance-debugging-tools/itac/#shared-memory-programming-model-openmp","text":"Example for the batch script: #!/bin/bash -l #SBATCH -J ITAC ###SBATCH -A #SBATCH -N 1 #SBATCH -c 16 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/itac/2019.4.036 module load vis/GTK+/3.24.8-GCCcore-8.2.0 $ export OMP_NUM_THREADS = 16 $ -trace-collective ./a.out To see the result $ export VT_STATISTICS = ON $ stftool tracefile.stf --print-statistics","title":"Shared memory programming model (OpenMP)"},{"location":"development/performance-debugging-tools/itac/#distributed-memory-programming-model-mpi","text":"To compile $ mpiicc -trace example.c Example for the batch script: #!/bin/bash -l #SBATCH -J ITAC ###SBATCH -A #SBATCH -N 2 #SBATCH --ntasks-per-node=28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load
toolchain/intel/2019a module load tools/itac/2019.4.036 module load vis/GTK+/3.24.8-GCCcore-8.2.0 srun -n ${ SLURM_NTASKS } -trace-collective ./a.out To collect the result and see the result in GUI use the commands below $ export VT_STATISTICS = ON $ stftool tracefile.stf --print-statistics Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Distributed memory programming model (MPI)"},{"location":"development/performance-debugging-tools/scalasca/","text":"Scalasca is a performance analysis tool that supports large-scale systems, including IBM Blue Gene and Cray XT, as well as smaller systems. Scalasca provides information about the communication and synchronization among the processors. This information helps with the performance analysis, optimization, and tuning of scientific codes. Scalasca supports the OpenMP, MPI, and hybrid programming models, and the analysis can be done using the GUI shown in the figure below. Environmental models for Scalasca on ULHPC \u00b6 module purge module load swenv/default-env/v1.1-20180716-production module load toolchain/foss/2018a module load perf/Scalasca/2.3.1-foss-2018a module load perf/Score-P/3.1-foss-2018a Interactive Mode \u00b6 Workflow: # instrument $ scorep mpicxx example.cc # analyze scalasca -analyze mpirun -n 28 ./a.out # examine $ scalasca -examine -s scorep_a_28_sum INFO: Post-processing runtime summarization report... INFO: Score report written to ./scorep_a_28_sum/scorep.score # graphical visualization $ scalasca -examine result_folder Batch mode \u00b6 Shared memory programming (OpenMP) \u00b6 #!/bin/bash -l #SBATCH -J Scalasca ###SBATCH -A #SBATCH -N 1 #SBATCH -c 16 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.1-20180716-production module load toolchain/foss/2018a module load perf/Scalasca/2.3.1-foss-2018a module load perf/Score-P/3.1-foss-2018a export OMP_NUM_THREADS = 16 # analyze scalasca -analyze ./a.out Report collection and visualization # examine $ scalasca -examine -s scorep_a_28_sum INFO: Post-processing runtime summarization report... INFO: Score report written to ./scorep_a_28_sum/scorep.score # graphical visualization $ scalasca -examine result_folder Distributed memory programming (MPI) \u00b6 #!/bin/bash -l #SBATCH -J Scalasca ###SBATCH -A #SBATCH -N 2 #SBATCH --ntasks-per-node=28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.1-20180716-production module load toolchain/foss/2018a module load perf/Scalasca/2.3.1-foss-2018a module load perf/Score-P/3.1-foss-2018a scalasca -analyze srun -n ${ SLURM_NTASKS } ./a.out Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Scalasca"},{"location":"development/performance-debugging-tools/scalasca/#environmental-models-for-scalasca-on-ulhpc","text":"module purge module load swenv/default-env/v1.1-20180716-production module load toolchain/foss/2018a module load perf/Scalasca/2.3.1-foss-2018a module load perf/Score-P/3.1-foss-2018a","title":"Environmental models for Scalasca on ULHPC"},{"location":"development/performance-debugging-tools/scalasca/#interactive-mode","text":"Workflow: # instrument $ scorep mpicxx example.cc # analyze scalasca -analyze mpirun -n 28 ./a.out # examine $ scalasca -examine -s scorep_a_28_sum INFO: Post-processing runtime summarization report...
INFO: Score report written to ./scorep_a_28_sum/scorep.score # graphical visualization $ scalasca -examine result_folder","title":"Interactive Mode"},{"location":"development/performance-debugging-tools/scalasca/#batch-mode","text":"","title":"Batch mode"},{"location":"development/performance-debugging-tools/scalasca/#shared-memory-programming-openmp","text":"#!/bin/bash -l #SBATCH -J Scalasca ###SBATCH -A #SBATCH -N 1 #SBATCH -c 16 #SBATCH --time=00:10:00 #SBATCH -p batch module load purge module load swenv/default-env/v1.1-20180716-production module load toolchain/foss/2018a module load perf/Scalasca/2.3.1-foss-2018a module load perf/Score-P/3.1-foss-2018a export OMP_NUM_THREADS = 16 # analyze scalasca -analyze ./a.out Report collection and visualization # examine $ scalasca -examine -s scorep_a_28_sum INFO: Post-processing runtime summarization report... INFO: Score report written to ./scorep_a_28_sum/scorep.score # graphical visualization $ scalasca -examine result_folder","title":"Shared memory programming (OpenMP)"},{"location":"development/performance-debugging-tools/scalasca/#distributed-memory-programming-mpi","text":"#!/bin/bash -l #SBATCH -J Scalasca ###SBATCH -A #SBATCH -N 2 #SBATCH --ntasks-per-node=28 #SBATCH --time=00:10:00 #SBATCH -p batch module load purge module load swenv/default-env/v1.1-20180716-production module load toolchain/foss/2018a module load perf/Scalasca/2.3.1-foss-2018a module load perf/Score-P/3.1-foss-2018a scalasca -analyze srun -n ${ SLURM_NTASKS } ./a.out Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Distributed memory programming (MPI)"},{"location":"development/performance-debugging-tools/valgrind/","text":"The Valgrind tool suite provides a number of debugging and profiling tools that help you make your programs faster and more correct. The most popular of these tools is called Memcheck which can detect many memory-related errors and memory leaks. Prepare Your Program \u00b6 Compile your program with -g to include debugging information so that Memcheck's error messages include exact line numbers. Using -O0 is also a good idea, if you can tolerate the slowdown. With -O1 line numbers in error messages can be inaccurate, although generally speaking running Memcheck on code compiled at -O1 works fairly well, and the speed improvement compared to running -O0 is quite significant. Use of -O2 and above is not recommended as Memcheck occasionally reports uninitialised-value errors which don't really exist. Environmental models for Valgrind in ULHPC \u00b6 $ module purge $ module load debugger/Valgrind/3.15.0-intel-2019a Interactive mode \u00b6 Example code: #include using namespace std ; int main () { const int SIZE = 1000 ; int * array = new int ( SIZE ); for ( int i = 0 ; i < SIZE ; i ++ ) array [ i ] = i + 1 ; // delete[] array return 0 ; } # Compilation $ icc -g example.cc # Code execution $ valgrind --leak-check = full --show-leak-kinds = all ./a.out Result output (with leak) If we do not delete delete[] array the memory, then there will be a memory leak. == 26756 == Memcheck, a memory error detector == 26756 == Copyright ( C ) 2002 -2017, and GNU GPL 'd, by Julian Seward et al. 
==26756== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info ==26756== Command: ./a.out ==26756== ==26756== Invalid write of size 4 ==26756== at 0x401275: main (mem-leak.cc:10) ==26756== Address 0x5309c84 is 0 bytes after a block of size 4 alloc' d == 26756 == at 0x402DBE9: operator new ( unsigned long ) ( vg_replace_malloc.c:344 ) == 26756 == by 0x401265: main ( mem-leak.cc:8 ) == 26756 == == 26756 == == 26756 == HEAP SUMMARY: == 26756 == in use at exit: 4 bytes in 1 blocks == 26756 == total heap usage: 2 allocs, 1 frees, 72 ,708 bytes allocated == 26756 == == 26756 == 4 bytes in 1 blocks are definitely lost in loss record 1 of 1 == 26756 == at 0x402DBE9: operator new ( unsigned long ) ( vg_replace_malloc.c:344 ) == 26756 == by 0x401265: main ( mem-leak.cc:8 ) == 26756 == == 26756 == LEAK SUMMARY: == 26756 == definitely lost: 4 bytes in 1 blocks == 26756 == indirectly lost: 0 bytes in 0 blocks == 26756 == possibly lost: 0 bytes in 0 blocks == 26756 == still reachable: 0 bytes in 0 blocks == 26756 == suppressed: 0 bytes in 0 blocks == 26756 == == 26756 == For lists of detected and suppressed errors, rerun with: -s == 26756 == ERROR SUMMARY: 1000 errors from 2 contexts ( suppressed: 0 from 0 ) Result output (without leak) When we delete delete[] array the allocated memory, there will not be leaked memory. == 26172 == Memcheck, a memory error detector == 26172 == Copyright ( C ) 2002 -2017, and GNU GPL ' d, by Julian Seward et al. == 26172 == Using Valgrind-3.15.0 and LibVEX ; rerun with -h for copyright info == 26172 == Command: ./a.out == 26172 == == 26172 == == 26172 == HEAP SUMMARY: == 26172 == in use at exit: 4 bytes in 1 blocks == 26172 == total heap usage: 2 allocs, 1 frees, 72 ,708 bytes allocated == 26172 == == 26172 == 4 bytes in 1 blocks are definitely lost in loss record 1 of 1 == 26172 == at 0x402DBE9: operator new ( unsigned long ) ( vg_replace_malloc.c:344 ) == 26172 == by 0x401283: main ( in /mnt/irisgpfs/users/ekrishnasamy/BPG/Valgrind/a.out ) == 26172 == == 26172 == LEAK SUMMARY: == 26172 == definitely lost: 4 bytes in 1 blocks == 26172 == indirectly lost: 0 bytes in 0 blocks == 26172 == possibly lost: 0 bytes in 0 blocks == 26172 == still reachable: 0 bytes in 0 blocks == 26172 == suppressed: 0 bytes in 0 blocks == 26172 == == 26172 == For lists of detected and suppressed errors, rerun with: -s == 26172 == ERROR SUMMARY: 1 errors from 1 contexts ( suppressed: 0 from 0 ) Additional information \u00b6 This page is based on the \"Valgrind Quick Start Page\". For more information about valgrind, please refer to http://valgrind.org/ . Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Valgrind"},{"location":"development/performance-debugging-tools/valgrind/#prepare-your-program","text":"Compile your program with -g to include debugging information so that Memcheck's error messages include exact line numbers. Using -O0 is also a good idea, if you can tolerate the slowdown. With -O1 line numbers in error messages can be inaccurate, although generally speaking running Memcheck on code compiled at -O1 works fairly well, and the speed improvement compared to running -O0 is quite significant. 
Use of -O2 and above is not recommended as Memcheck occasionally reports uninitialised-value errors which don't really exist.","title":"Prepare Your Program"},{"location":"development/performance-debugging-tools/valgrind/#environmental-models-for-valgrind-in-ulhpc","text":"$ module purge $ module load debugger/Valgrind/3.15.0-intel-2019a","title":"Environmental models for Valgrind in ULHPC"},{"location":"development/performance-debugging-tools/valgrind/#interactive-mode","text":"Example code: #include using namespace std ; int main () { const int SIZE = 1000 ; int * array = new int ( SIZE ); for ( int i = 0 ; i < SIZE ; i ++ ) array [ i ] = i + 1 ; // delete[] array return 0 ; } # Compilation $ icc -g example.cc # Code execution $ valgrind --leak-check = full --show-leak-kinds = all ./a.out Result output (with leak) If we do not delete delete[] array the memory, then there will be a memory leak. == 26756 == Memcheck, a memory error detector == 26756 == Copyright ( C ) 2002 -2017, and GNU GPL 'd, by Julian Seward et al. ==26756== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info ==26756== Command: ./a.out ==26756== ==26756== Invalid write of size 4 ==26756== at 0x401275: main (mem-leak.cc:10) ==26756== Address 0x5309c84 is 0 bytes after a block of size 4 alloc' d == 26756 == at 0x402DBE9: operator new ( unsigned long ) ( vg_replace_malloc.c:344 ) == 26756 == by 0x401265: main ( mem-leak.cc:8 ) == 26756 == == 26756 == == 26756 == HEAP SUMMARY: == 26756 == in use at exit: 4 bytes in 1 blocks == 26756 == total heap usage: 2 allocs, 1 frees, 72 ,708 bytes allocated == 26756 == == 26756 == 4 bytes in 1 blocks are definitely lost in loss record 1 of 1 == 26756 == at 0x402DBE9: operator new ( unsigned long ) ( vg_replace_malloc.c:344 ) == 26756 == by 0x401265: main ( mem-leak.cc:8 ) == 26756 == == 26756 == LEAK SUMMARY: == 26756 == definitely lost: 4 bytes in 1 blocks == 26756 == indirectly lost: 0 bytes in 0 blocks == 26756 == possibly lost: 0 bytes in 0 blocks == 26756 == still reachable: 0 bytes in 0 blocks == 26756 == suppressed: 0 bytes in 0 blocks == 26756 == == 26756 == For lists of detected and suppressed errors, rerun with: -s == 26756 == ERROR SUMMARY: 1000 errors from 2 contexts ( suppressed: 0 from 0 ) Result output (without leak) When we delete delete[] array the allocated memory, there will not be leaked memory. == 26172 == Memcheck, a memory error detector == 26172 == Copyright ( C ) 2002 -2017, and GNU GPL ' d, by Julian Seward et al. 
== 26172 == Using Valgrind-3.15.0 and LibVEX ; rerun with -h for copyright info == 26172 == Command: ./a.out == 26172 == == 26172 == == 26172 == HEAP SUMMARY: == 26172 == in use at exit: 4 bytes in 1 blocks == 26172 == total heap usage: 2 allocs, 1 frees, 72 ,708 bytes allocated == 26172 == == 26172 == 4 bytes in 1 blocks are definitely lost in loss record 1 of 1 == 26172 == at 0x402DBE9: operator new ( unsigned long ) ( vg_replace_malloc.c:344 ) == 26172 == by 0x401283: main ( in /mnt/irisgpfs/users/ekrishnasamy/BPG/Valgrind/a.out ) == 26172 == == 26172 == LEAK SUMMARY: == 26172 == definitely lost: 4 bytes in 1 blocks == 26172 == indirectly lost: 0 bytes in 0 blocks == 26172 == possibly lost: 0 bytes in 0 blocks == 26172 == still reachable: 0 bytes in 0 blocks == 26172 == suppressed: 0 bytes in 0 blocks == 26172 == == 26172 == For lists of detected and suppressed errors, rerun with: -s == 26172 == ERROR SUMMARY: 1 errors from 1 contexts ( suppressed: 0 from 0 )","title":"Interactive mode"},{"location":"development/performance-debugging-tools/valgrind/#additional-information","text":"This page is based on the \"Valgrind Quick Start Page\". For more information about valgrind, please refer to http://valgrind.org/ . Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Additional information"},{"location":"development/performance-debugging-tools/vtune/","text":"VTune \u00b6 Use Intel VTune Profiler to profile serial and multithreaded applications that are executed on a variety of hardware platforms (CPU, GPU, FPGA). The tool is delivered as a Performance Profiler with Intel Performance Snapshots and supports local and remote target analysis on the Windows , Linux , and Android* platforms. Without the right data, you\u2019re guessing about how to improve software performance and are unlikely to make the most effective improvements. Intel\u00ae VTune\u2122 Profiler collects key profiling data and presents it with a powerful interface that simplifies its analysis and interpretation. Environmental models for VTune on ULHPC: \u00b6 module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/VTune/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 Interactive Mode \u00b6 # Compilation $ icc -qopenmp example.c # Code execution $ export OMP_NUM_THREADS = 16 $ amplxe-cl -collect hotspots -r my_result ./a.out To see the result in GUI $ amplxe-gui my_result $ amplxe-cl will list out the analysis types and $ amplxe-cl -hlep report will list out available reports in VTune. 
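For a quick look at the collected data without the GUI, text reports can be generated directly from the result directory; a small sketch (reusing the my_result directory from the commands above) could look like this:

```bash
# Summary and hotspots reports from an existing result directory
amplxe-cl -report summary -r my_result
amplxe-cl -report hotspots -r my_result -report-output hotspots.txt
```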
Batch Mode \u00b6 Shared Memory Programming Model (OpenMP) \u00b6 #!/bin/bash -l #SBATCH -J VTune ###SBATCH -A #SBATCH -N 1 #SBATCH -c 28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/VTune/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 export OMP_NUM_THREADS = 16 amplxe-cl -collect hotspots-r my_result ./a.out Distributed Memory Programming Model \u00b6 To compile just MPI application run $ mpiicc example.c and for MPI+OpenMP run $ mpiicc -qopenmp example.c #!/bin/bash -l #SBATCH -J VTune ###SBATCH -A #SBATCH -N 2 #SBATCH --ntasks-per-node=28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/VTune/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 srun -n ${ SLURM_NTASKS } amplxe-cl -collect uarch-exploration -r vtune_mpi -- ./a.out # Report collection $ amplxe-cl -report uarch-exploration -report-output output -r vtune_mpi # Result visualization $ amplxe-gui vtune_mpi The below figure shows the hybrid(MPI+OpenMP) programming analysis results: Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Intel VTune"},{"location":"development/performance-debugging-tools/vtune/#vtune","text":"Use Intel VTune Profiler to profile serial and multithreaded applications that are executed on a variety of hardware platforms (CPU, GPU, FPGA). The tool is delivered as a Performance Profiler with Intel Performance Snapshots and supports local and remote target analysis on the Windows , Linux , and Android* platforms. Without the right data, you\u2019re guessing about how to improve software performance and are unlikely to make the most effective improvements. 
Intel\u00ae VTune\u2122 Profiler collects key profiling data and presents it with a powerful interface that simplifies its analysis and interpretation.","title":"VTune"},{"location":"development/performance-debugging-tools/vtune/#environmental-models-for-vtune-on-ulhpc","text":"module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/VTune/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0","title":"Environmental models for VTune on ULHPC:"},{"location":"development/performance-debugging-tools/vtune/#interactive-mode","text":"# Compilation $ icc -qopenmp example.c # Code execution $ export OMP_NUM_THREADS = 16 $ amplxe-cl -collect hotspots -r my_result ./a.out To see the result in GUI $ amplxe-gui my_result $ amplxe-cl will list out the analysis types and $ amplxe-cl -hlep report will list out available reports in VTune.","title":"Interactive Mode"},{"location":"development/performance-debugging-tools/vtune/#batch-mode","text":"","title":"Batch Mode"},{"location":"development/performance-debugging-tools/vtune/#shared-memory-programming-model-openmp","text":"#!/bin/bash -l #SBATCH -J VTune ###SBATCH -A #SBATCH -N 1 #SBATCH -c 28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/VTune/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 export OMP_NUM_THREADS = 16 amplxe-cl -collect hotspots-r my_result ./a.out","title":"Shared Memory Programming Model (OpenMP)"},{"location":"development/performance-debugging-tools/vtune/#distributed-memory-programming-model","text":"To compile just MPI application run $ mpiicc example.c and for MPI+OpenMP run $ mpiicc -qopenmp example.c #!/bin/bash -l #SBATCH -J VTune ###SBATCH -A #SBATCH -N 2 #SBATCH --ntasks-per-node=28 #SBATCH --time=00:10:00 #SBATCH -p batch module purge module load swenv/default-env/v1.2-20191021-production module load toolchain/intel/2019a module load tools/VTune/2019_update4 module load vis/GTK+/3.24.8-GCCcore-8.2.0 srun -n ${ SLURM_NTASKS } amplxe-cl -collect uarch-exploration -r vtune_mpi -- ./a.out # Report collection $ amplxe-cl -report uarch-exploration -report-output output -r vtune_mpi # Result visualization $ amplxe-gui vtune_mpi The below figure shows the hybrid(MPI+OpenMP) programming analysis results: Tip If you find some issues with the instructions above, please report it to us using support ticket .","title":"Distributed Memory Programming Model"},{"location":"environment/","text":"ULHPC User Environment \u00b6 Your typical journey on the ULHPC facility is illustrated in the below figure. Typical workflow on UL HPC resources You daily interaction with the ULHPC facility includes the following actions: Preliminary setup Connect to the access/login servers This can be done either by ssh ( recommended ) or via the ULHPC OOD portal ( advanced users ) at this point, you probably want to create (or reattach) to a screen or tmux session Synchronize you code and/or transfer your input data using rsync/svn/git typically recall that the different storage filesystems are shared (via a high-speed interconnect network ) among the computational resources of the ULHPC facilities. In particular, it is sufficient to exchange data with the access servers to make them available on the clusters Reserve a few interactive resources with salloc -p interactive [...] 
recall that the module command (used to load the ULHPC User software ) is only available on the compute nodes ( eventually ) build your program, typically using gcc/icc/mpicc/nvcc.. Test your workflow / HPC analysis on a small size problem ( srun/python/sh... ) Prepare a launcher script .{sh|py} Then you can proceed with your Real Experiments : Reserve passive resources : sbatch [...] Grab the results and (eventually) transfer back your output results using rsync/svn/git For more information: Getting Started Connecting to ULHPC supercomputers ULHPC Storage Systems Overview '-bash: module : command not found' on access/login servers Recall that by default, the module command is ( on purpose ) NOT available on the access/login servers . You HAVE to be on a computing node (within a slurm job ) Home and Directories Layout \u00b6 All ULHPC systems use global home directories . You also have access to several other pre-defined directories setup over several different File Systems which co-exist on the ULHPC facility and are configured for different purposes. They are listed below: Directory Env. file system backup /home/users/ $HOME GPFS/Spectrumscale no /work/projects/ - GPFS/Spectrumscale yes (partial, backup subdirectory) /scratch/users/ $SCRATCH Lustre no /mnt/isilon/projects/ - OneFS yes (live sync and snapshots) Shell and Dotfiles \u00b6 The default login shell is bash -- see /etc/shells for supported shells. ULHPC dotfiles vs. default dotfiles The ULHPC team DOES NOT populate shell initialization files (also known as dotfiles) on users' home directories - the default system ones are used in your home -- you can check them in /etc/skel/.* on the access/login servers . However, you may want to install the ULHPC/dotfiles available as a Github repository . See installation notes . A working copy of that repository exists in /etc/dotfiles.d on the access/login servers . You can thus use it: $ /etc/dotfiles.d/install.sh -h # Example to install ULHPC GNU screen configuration file $ /etc/dotfiles.d/install.sh -d /etc/dotfiles.d/ --screen -n # Dry-run $ /etc/dotfiles.d/install.sh -d /etc/dotfiles.d/ --screen # real install Changing Default Login Shell (or NOT) If you want to change your your default login shell, you should set that up using the ULHPC IPA portal (change the Login Shell attribute). Note however that we STRONGLY discourage you to do so. You may hit unexpected issues with system profile scripts expecting bash as running shell. System Profile \u00b6 /etc/profile contains Linux system wide environment and startup programs. Specific scripts are set to improve your ULHPC experience, in particular those set in the ULHPC/tools repository, for instance: /etc/profile.d/slurm-prompt.sh : provide info of your running Slurm job on your prompt /etc/profile.d/slurm.sh : several helper function to Customizing Shell Environment \u00b6 You can create dotfiles (e.g., .bashrc , .bash_profile , or .profile , etc) in your $HOME directory to put your personal shell modifications. Custom Bash Initialisation Files On ULHPC system ~/.bash_profile and ~/.profile are sourced by login shells, while ~/.bashrc is sourced by most of the shell invocations including the login shells. In general you can put the environment variables, such as PATH , which are inheritable to subshells in ~/.bash_profile or ~/.profile and functions and aliases in the ~/.bashrc file in order to make them available in subshells. 
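As an illustration of that split (the contents below are examples, not ULHPC defaults), a minimal pair of files could look like this:

```bash
# ~/.bash_profile -- read by login shells: exported variables, then source ~/.bashrc
export PATH="$HOME/bin:$PATH"
[ -f ~/.bashrc ] && . ~/.bashrc

# ~/.bashrc -- read by interactive shells: aliases and shell functions
alias ll='ls -lh'
myjobs() { squeue -u "$USER"; }
```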
ULHPC/dotfiles bash configuration even source the following files for that specific purpose: ~/.bash_private : custom private functions ~/.bash_aliases : custom private aliases. Understanding Bash Startup Files order See reference documentation . That's somehow hard to understand. Some tried to explicit it under the form of a \"simple\" graph -- credits for the one below to Ian Miell ( another one ) This explains why normally all ULHPC launcher scripts start with the following sha-bang ( #! ) header #!/bin/bash -l # #SBATCH [...] [ ... ] That's indeed the only way ( i.e. using /bin/bash -l instead of the classical /bin/bash ) to ensure that /etc/profile is sourced natively, and thus that all ULHPC environments variables and modules are loaded. If you don't proceed that way (i.e. following the classical approach), you MUST then use the following template you may see from other HPC centers: #!/bin/bash # #SBATCH [...] [ ... ] # Load ULHPC Profile if [ -f /etc/profile ] ; then . /etc/profile fi Since all ULHPC systems share the Global HOME filesystem, the same $HOME is available regardless of the platform. To make system specific customizations use the pre-defined environment ULHPC_CLUSTER variable: Example of cluster specific settings case $ULHPC_CLUSTER in \"iris\" ) : # Settings for iris export MYVARIABLE = \"value-for-iris\" ;; \"aion\" ) : # settings for aion export MYVARIABLE = \"value-for-aion\" ;; * ) : # default value for export MYVARIABLE = \"default-value\" ;; esac Operating Systems \u00b6 The ULHPC facility runs RedHat-based Linux Distributions , in particular: the Iris cluster run CentOS and RedHat (RHEL) Linux operating system, version 7 the Aion cluster run RedHat (RHEL) Linux operating system, version 8 Experimental Grid5000 cluster run Debian Linux, version 10 Thus, you are more than encouraged to become familiar - if not yet - with Linux commands . We can recommend the following sites and resources: Software Carpentry: The Unix Shell Unix/Linux Command Reference Impact of CentOS project shifting focus starting 2021 from CentOS Linux to CentOS Stream You may have followed the official announcement on Dec 8, 2020 where Red Hat announced that it will discontinue CentOS 8 by the end of 2021 and instead will focus on CentOS Stream going forward. Fortunately CentOS 7 will continue to be updated until 2024 and is therefore not affected by this change. While CentOS traditionally has been a rebuild of RHEL, CentOS Stream will be more or less a testing ground for changes that will eventually go into RHEL. Unfortunately this means that CentOS Stream will likely become incompatible with RHEL (e.g. binaries compiled on CentOS Stream will not necessarily run on RHEL and vice versa). It is also questionable whether CentOS Stream is a suitable environment for running production systems. For all these reasons, the migration to CentOS 8 for Iris (initially planned for Q1 2021) has been cancelled . Alternative approaches are under investigation , including an homogeneous setup between Iris and Aion over Redhat 8. Discovering, visualizing and reserving UL HPC resources \u00b6 See ULHPC Tutorial / Getting Started ULHPC User Software Environment \u00b6 The UL HPC facility provides a large variety of scientific applications to its user community, either domain-specific codes and general purpose development tools which enable research and innovation excellence across a wide set of computational fields. -- see software list . 
We use the Environment Modules / LMod framework which provided the module utility on Compute nodes to manage nearly all software. There are two main advantages of the module approach: ULHPC can provide many different versions and/or installations of a single software package on a given machine, including a default version as well as several older and newer version. Users can easily switch to different versions or installations without having to explicitly specify different paths. With modules, the MANPATH and related environment variables are automatically managed. ULHPC modules are in practice automatically generated by Easybuild . EasyBuild (EB for short) is a software build and installation framework that allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way. A large number of scientific software are supported ( at least 2175 supported software packages since the 4.3.2 release) - see also What is EasyBuild? . For several years now, Easybuild is used to manage the ULHPC User Software Set and generate automatically the module files available to you on our computational resources in either prod (default) or devel (early development/testing) environment -- see ULHPC Toolchains and Software Set Versioning . This enables users to easily extend the global Software Set with their own local software builds, either performed within their global home directory or ( better ) in a shared project directory though Easybuild , which generate automatically module files compliant with the ULHPC module setup . ULHPC Environment modules Using Easybuild on ULHPC Clusters Self management of work environments in UL HPC with Conda \u00b6 Packages provided through the standard channels of modules and containers are optimized for the ULHPC clusters to ensure their performance and stability. However, many packages where performance is not critical and are used by few users are not provided through the standard channels. These packages can still be installed locally by the users through an environment management system such as Conda. Contact the ULHPC before installing any software with Conda Prefer binaries provided through modules or containers . Conda installs generic binaries that may be suboptimal for the configuration of the ULHPC clusters. Furthermore, installing packages locally with Conda consumes quotas in your or your project's account in terms of storage space and number of files . Contact the ULHPC High Level Support Team in the service portal [Home > Research > HPC > Software environment > Request expertise] to discuss possible options before installing any software. Conda is an open source environment and package management system. With Conda you can create independent environments, where you can install applications such as python and R, together with any packages which will be used by these applications. The environments are independent, with the Conda package manager managing the binaries, resolving dependencies, and ensuring that package used in multiple environments are stored only once. In a typical setting, each user has their own installation of a Conda and a set of personal environments. Management of work environments with Conda","title":"Overview"},{"location":"environment/#ulhpc-user-environment","text":"Your typical journey on the ULHPC facility is illustrated in the below figure. 
Typical workflow on UL HPC resources You daily interaction with the ULHPC facility includes the following actions: Preliminary setup Connect to the access/login servers This can be done either by ssh ( recommended ) or via the ULHPC OOD portal ( advanced users ) at this point, you probably want to create (or reattach) to a screen or tmux session Synchronize you code and/or transfer your input data using rsync/svn/git typically recall that the different storage filesystems are shared (via a high-speed interconnect network ) among the computational resources of the ULHPC facilities. In particular, it is sufficient to exchange data with the access servers to make them available on the clusters Reserve a few interactive resources with salloc -p interactive [...] recall that the module command (used to load the ULHPC User software ) is only available on the compute nodes ( eventually ) build your program, typically using gcc/icc/mpicc/nvcc.. Test your workflow / HPC analysis on a small size problem ( srun/python/sh... ) Prepare a launcher script .{sh|py} Then you can proceed with your Real Experiments : Reserve passive resources : sbatch [...] Grab the results and (eventually) transfer back your output results using rsync/svn/git For more information: Getting Started Connecting to ULHPC supercomputers ULHPC Storage Systems Overview '-bash: module : command not found' on access/login servers Recall that by default, the module command is ( on purpose ) NOT available on the access/login servers . You HAVE to be on a computing node (within a slurm job )","title":"ULHPC User Environment"},{"location":"environment/#home-and-directories-layout","text":"All ULHPC systems use global home directories . You also have access to several other pre-defined directories setup over several different File Systems which co-exist on the ULHPC facility and are configured for different purposes. They are listed below: Directory Env. file system backup /home/users/ $HOME GPFS/Spectrumscale no /work/projects/ - GPFS/Spectrumscale yes (partial, backup subdirectory) /scratch/users/ $SCRATCH Lustre no /mnt/isilon/projects/ - OneFS yes (live sync and snapshots)","title":"Home and Directories Layout"},{"location":"environment/#shell-and-dotfiles","text":"The default login shell is bash -- see /etc/shells for supported shells. ULHPC dotfiles vs. default dotfiles The ULHPC team DOES NOT populate shell initialization files (also known as dotfiles) on users' home directories - the default system ones are used in your home -- you can check them in /etc/skel/.* on the access/login servers . However, you may want to install the ULHPC/dotfiles available as a Github repository . See installation notes . A working copy of that repository exists in /etc/dotfiles.d on the access/login servers . You can thus use it: $ /etc/dotfiles.d/install.sh -h # Example to install ULHPC GNU screen configuration file $ /etc/dotfiles.d/install.sh -d /etc/dotfiles.d/ --screen -n # Dry-run $ /etc/dotfiles.d/install.sh -d /etc/dotfiles.d/ --screen # real install Changing Default Login Shell (or NOT) If you want to change your your default login shell, you should set that up using the ULHPC IPA portal (change the Login Shell attribute). Note however that we STRONGLY discourage you to do so. You may hit unexpected issues with system profile scripts expecting bash as running shell.","title":"Shell and Dotfiles"},{"location":"environment/#system-profile","text":"/etc/profile contains Linux system wide environment and startup programs. 
Specific scripts are set to improve your ULHPC experience, in particular those set in the ULHPC/tools repository, for instance: /etc/profile.d/slurm-prompt.sh : provide info of your running Slurm job on your prompt /etc/profile.d/slurm.sh : several helper function to","title":"System Profile"},{"location":"environment/#customizing-shell-environment","text":"You can create dotfiles (e.g., .bashrc , .bash_profile , or .profile , etc) in your $HOME directory to put your personal shell modifications. Custom Bash Initialisation Files On ULHPC system ~/.bash_profile and ~/.profile are sourced by login shells, while ~/.bashrc is sourced by most of the shell invocations including the login shells. In general you can put the environment variables, such as PATH , which are inheritable to subshells in ~/.bash_profile or ~/.profile and functions and aliases in the ~/.bashrc file in order to make them available in subshells. ULHPC/dotfiles bash configuration even source the following files for that specific purpose: ~/.bash_private : custom private functions ~/.bash_aliases : custom private aliases. Understanding Bash Startup Files order See reference documentation . That's somehow hard to understand. Some tried to explicit it under the form of a \"simple\" graph -- credits for the one below to Ian Miell ( another one ) This explains why normally all ULHPC launcher scripts start with the following sha-bang ( #! ) header #!/bin/bash -l # #SBATCH [...] [ ... ] That's indeed the only way ( i.e. using /bin/bash -l instead of the classical /bin/bash ) to ensure that /etc/profile is sourced natively, and thus that all ULHPC environments variables and modules are loaded. If you don't proceed that way (i.e. following the classical approach), you MUST then use the following template you may see from other HPC centers: #!/bin/bash # #SBATCH [...] [ ... ] # Load ULHPC Profile if [ -f /etc/profile ] ; then . /etc/profile fi Since all ULHPC systems share the Global HOME filesystem, the same $HOME is available regardless of the platform. To make system specific customizations use the pre-defined environment ULHPC_CLUSTER variable: Example of cluster specific settings case $ULHPC_CLUSTER in \"iris\" ) : # Settings for iris export MYVARIABLE = \"value-for-iris\" ;; \"aion\" ) : # settings for aion export MYVARIABLE = \"value-for-aion\" ;; * ) : # default value for export MYVARIABLE = \"default-value\" ;; esac","title":"Customizing Shell Environment"},{"location":"environment/#operating-systems","text":"The ULHPC facility runs RedHat-based Linux Distributions , in particular: the Iris cluster run CentOS and RedHat (RHEL) Linux operating system, version 7 the Aion cluster run RedHat (RHEL) Linux operating system, version 8 Experimental Grid5000 cluster run Debian Linux, version 10 Thus, you are more than encouraged to become familiar - if not yet - with Linux commands . We can recommend the following sites and resources: Software Carpentry: The Unix Shell Unix/Linux Command Reference Impact of CentOS project shifting focus starting 2021 from CentOS Linux to CentOS Stream You may have followed the official announcement on Dec 8, 2020 where Red Hat announced that it will discontinue CentOS 8 by the end of 2021 and instead will focus on CentOS Stream going forward. Fortunately CentOS 7 will continue to be updated until 2024 and is therefore not affected by this change. While CentOS traditionally has been a rebuild of RHEL, CentOS Stream will be more or less a testing ground for changes that will eventually go into RHEL. 
Unfortunately this means that CentOS Stream will likely become incompatible with RHEL (e.g. binaries compiled on CentOS Stream will not necessarily run on RHEL and vice versa). It is also questionable whether CentOS Stream is a suitable environment for running production systems. For all these reasons, the migration to CentOS 8 for Iris (initially planned for Q1 2021) has been cancelled . Alternative approaches are under investigation , including an homogeneous setup between Iris and Aion over Redhat 8.","title":"Operating Systems "},{"location":"environment/#discovering-visualizing-and-reserving-ul-hpc-resources","text":"See ULHPC Tutorial / Getting Started","title":"Discovering, visualizing and reserving UL HPC resources"},{"location":"environment/#ulhpc-user-software-environment","text":"The UL HPC facility provides a large variety of scientific applications to its user community, either domain-specific codes and general purpose development tools which enable research and innovation excellence across a wide set of computational fields. -- see software list . We use the Environment Modules / LMod framework which provided the module utility on Compute nodes to manage nearly all software. There are two main advantages of the module approach: ULHPC can provide many different versions and/or installations of a single software package on a given machine, including a default version as well as several older and newer version. Users can easily switch to different versions or installations without having to explicitly specify different paths. With modules, the MANPATH and related environment variables are automatically managed. ULHPC modules are in practice automatically generated by Easybuild . EasyBuild (EB for short) is a software build and installation framework that allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way. A large number of scientific software are supported ( at least 2175 supported software packages since the 4.3.2 release) - see also What is EasyBuild? . For several years now, Easybuild is used to manage the ULHPC User Software Set and generate automatically the module files available to you on our computational resources in either prod (default) or devel (early development/testing) environment -- see ULHPC Toolchains and Software Set Versioning . This enables users to easily extend the global Software Set with their own local software builds, either performed within their global home directory or ( better ) in a shared project directory though Easybuild , which generate automatically module files compliant with the ULHPC module setup . ULHPC Environment modules Using Easybuild on ULHPC Clusters","title":"ULHPC User Software Environment"},{"location":"environment/#self-management-of-work-environments-in-ul-hpc-with-conda","text":"Packages provided through the standard channels of modules and containers are optimized for the ULHPC clusters to ensure their performance and stability. However, many packages where performance is not critical and are used by few users are not provided through the standard channels. These packages can still be installed locally by the users through an environment management system such as Conda. Contact the ULHPC before installing any software with Conda Prefer binaries provided through modules or containers . Conda installs generic binaries that may be suboptimal for the configuration of the ULHPC clusters. 
Furthermore, installing packages locally with Conda consumes quotas in your or your project's account in terms of storage space and number of files . Contact the ULHPC High Level Support Team in the service portal [Home > Research > HPC > Software environment > Request expertise] to discuss possible options before installing any software. Conda is an open source environment and package management system. With Conda you can create independent environments, where you can install applications such as python and R, together with any packages which will be used by these applications. The environments are independent, with the Conda package manager managing the binaries, resolving dependencies, and ensuring that package used in multiple environments are stored only once. In a typical setting, each user has their own installation of a Conda and a set of personal environments. Management of work environments with Conda","title":"Self management of work environments in UL HPC with Conda"},{"location":"environment/conda/","text":"Self management of work environments in UL HPC with Conda \u00b6 Packages provided through the standard channels of modules and containers are optimized for the ULHPC clusters to ensure their performance and stability. However, many packages where performance is not critical and are used by few users are not provided through the standard channels. These packages can still be installed locally by the users through an environment management system such as Conda. Contact the ULHPC before installing any software with Conda Prefer binaries provided through modules or containers . Conda installs generic binaries that may be suboptimal for the configuration of the ULHPC clusters. Furthermore, installing packages locally with Conda consumes quotas in your or your project's account in terms of storage space and number of files . Contact the ULHPC High Level Support Team in the service portal [Home > Research > HPC > Software environment > Request expertise] to discuss possible options before installing any software. Conda is an open source environment and package management system. With Conda you can create independent environments, where you can install applications such as python and R, together with any packages which will be used by these applications. The environments are independent, with the Conda package manager managing the binaries, resolving dependencies, and ensuring that package used in multiple environments are stored only once. In a typical setting, each user has their own installation of a Conda and a set of personal environments. TL;DR: install and use the Micromamba package manager . A brief introduction to Conda \u00b6 A few concepts are necessary to start working with Conda. In brief, these are package managers which are the programs used to create and manage environments, channels which are the repositories that contain the packages from which environments are composed, and distributions which are methods for shipping package managers. Package managers \u00b6 Package managers are the programs that install and manage the Conda environments. There are multiple package managers, such as conda , mamba , and micromamba . The UL HPC centre supports the use of micromamba for the creation and management of personal Conda environments. Channels \u00b6 Conda channels are the locations where packages are stored. 
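As a small, hedged aside (it assumes a Micromamba installation configured with conda-forge as described later on this page), you can check which channels are currently configured for your account directly on a login node:
cat ~/.condarc      # a minimal setup typically contains only: channels: - conda-forge
micromamba info     # prints version and configuration details; recent releases also list the channels in use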
There are also multiple channels, with some important channels being: defaults , the default channel, anaconda , a mirror of the default channel, bioconda , a distribution of bioinformatics software, and conda-forge , a community-led collection of recipes, build infrastructure, and distributions for the conda package manager. The most useful channel, which comes pre-installed in all distributions, is Conda-Forge. Channels are usually hosted on the official Anaconda page , but on some rare occasions custom channels may be used. For instance, the default channel is hosted independently of the official Anaconda page. Many channels also maintain web pages with documentation both for their usage and for packages they distribute: Default Conda channel Bioconda Conda-Forge Distributions \u00b6 Quite often, the package manager is not distributed on its own, but with a set of packages that are required for the package manager to work, or even with some additional packages that are required for most applications. For instance, the conda package manager is distributed with the Miniconda and Anaconda distributions. Miniconda contains the bare minimum packages for the conda package manager to work, and Anaconda contains multiple commonly used packages and a graphical user interface. The relation between these distributions and the package manager is depicted in the following diagram. The situation is similar for Mamba distributions. These distributions are supported by Conda-Forge , and their default installation options set up conda-forge as the default and only channel during installation. The defaults or its mirror anaconda must be explicitly added if required. The distribution using the Mamba package manager was originally distributed as Mambaforge and was recently renamed to Miniforge. Miniforge comes with a minimal set of python packages required by the Mamba package manager. The distribution using the Micromamba package manager ships no accompanying packages, as Micromamba is a standalone executable with no dependencies. Micromamba uses libmamba , a C++ library implementing the Conda API. The Micromamba package manager \u00b6 The Micromamba package manager is a minimal yet fairly complete implementation of the Conda interface in C++ that is shipped as a standalone executable. The package manager operates strictly in user space, and thus no special permissions are required to install packages. It maintains all its files in a couple of places, so uninstalling the package manager itself is also easy. Finally, the package manager is also lightweight and fast. UL HPC provides support only for the Micromamba package manager. Installation \u00b6 A complete guide regarding Micromamba installation can be found in the official documentation . To install micromamba on the HPC clusters, log in to Aion or Iris. Working on a login node, run the installation script, \" ${ SHELL } \" < ( curl -L micro.mamba.pm/install.sh ) which will install the executable and set up the environment. There are 4 options to select during the installation of Micromamba: The directory for the installation of the binary file: Micromamba binary folder? [~/.local/bin] Leave empty and press enter to select the default displayed within brackets. Your .bashrc script should include ~/.local/bin in the $PATH by default. The option to add autocomplete options for micromamba to the environment: Init shell (bash)? [Y/n] Press enter to select the default option Y . This will append a clearly marked section to your .bashrc file.
Do not forget to remove this section when uninstalling Micromamba. The option to configure the channels by adding conda-forge: Configure conda-forge? [Y/n] Press enter to select the default option Y . This will set up the ~/.condarc file with conda-forge as the default channel. Note that, unlike conda , Mamba and Micromamba will not use the defaults channel if it is not present in ~/.condarc . The option to select the directory where environment information and packages will be stored: Prefix location? [~/micromamba] Press enter to select the default option displayed within brackets. To set up the environment, log out and log in again. Now you can use micromamba , including the auto-completion feature. Managing environments \u00b6 As an example, the creation and use of an environment for R jobs is presented. The command micromamba create --name R-project creates an environment named R-project . The environment is activated with the command micromamba activate R-project anywhere in the file system. Next, install the base R environment package that contains the R program, and any R packages required by the project. To install packages, first ensure that the R-project environment is active, and then install all the required packages with the micromamba install command. Quite often, the channel name must also be specified with the --channel option of micromamba install . Packages can be found by searching the conda-forge channel . For instance, the basic functionality of the R software environment is contained in the r-base package. Calling micromamba install --channel conda-forge r-base will install all the components required to run standalone R scripts. More involved scripts use functionality defined in various packages. The R packages are prepended with a prefix 'r-'. Thus, plm becomes r-plm and so on. After all the required packages have been installed, the environment is ready for use. Packages in the conda-forge channel come with instructions for their installation. Quite often the channel is specified in the installation instructions, -c conda-forge or --channel conda-forge . While the Micromamba installer sets up conda-forge as the default channel, later modifications in ~/.condarc may change the channel priority. Thus it is a good practice to explicitly specify the source channel when installing a package. After work in an environment is complete, deactivate the environment, micromamba deactivate to ensure that it does not interfere with any other operations. In contrast to modules , Conda is designed to operate with a single environment active at a time. Create one environment for each project, and Conda will ensure that any package that is shared between multiple environments is installed once. Micromamba supports almost all the subcommands of Conda. For more details see the official documentation . Using environments in submission scripts \u00b6 Since all computationally heavy operations must be performed on compute nodes, Conda environments are also used in jobs submitted to the queuing system . Returning to the R example, a submission script running a single-core R job can use the R-project environment as follows: #SBATCH --job-name R-test-job #SBATCH --nodes 1 #SBATCH --ntasks-per-node 1 #SBATCH --cpus-per-task 1 #SBATCH --time=0-02:00:00 #SBATCH --partition batch #SBATCH --qos normal echo \"Launched at $(date)\" echo \"Job ID: ${SLURM_JOBID}\" echo \"Node list: ${SLURM_NODELIST}\" echo \"Submit dir.: ${SLURM_SUBMIT_DIR}\" echo \"Numb.
of cores: ${SLURM_CPUS_PER_TASK}\" micromamba activate R-project export OMP_NUM_THREADS=1 srun Rscript --no-save --no-restore script.R micromamba deactivate Useful scripting resources Formatting submission scripts for R (and other systems) Cleaning up package data \u00b6 The Conda environment managers download and store a sizable amount of data to provide packages to the various environments. Even though the package data are shared between the various environments, they still consume space in your or your project's account. There are limits on the storage space and number of files that are available to projects and users in the cluster. Since Conda packages are self managed, you need to clean unused data yourself . There are two main sources of unused data: the compressed archives of the packages that Conda stores in its cache when downloading a package, and the data of removed packages. All unused data in Micromamba can be removed with the command micromamba clean --all , which opens up an interactive dialogue with details about the operations performed. You can follow the default option, unless you have manually edited any files in your package data directory (default location ${HOME}/micromamba ). Updating environments to remove old package versions As we create new environments, we often install the latest version of each package. However, if the environments are not updated regularly, we may end up with different versions of the same package across multiple environments. If we have the same version of a package installed in all environments, we can save space by removing unused older versions. To update a package across all environments, use the command for e in $( micromamba env list | awk 'FNR>2 {print $1}' ) ; do micromamba update --name $e ; done and to update all packages across all environments for e in $( micromamba env list | awk 'FNR>2 {print $1}' ) ; do micromamba update --name $e --all ; done where FNR>2 removes the headers in the output of micromamba env list , and is thus sensitive to changes in the user interface of Micromamba. After updating packages, the clean command can be called to remove the data of unused older package versions. Sources Official Conda clean documentation Understanding Conda clean Combining Conda with other package and environment management tools \u00b6 It may be desirable to use Conda to manage environments but a different tool to manage packages, such as pip . Or subenvironments may need to be used inside a Conda environment, as for instance with tools for creating and managing isolated Python installations, such as virtualenv , or with tools for integrating managed Python installations and packages in project directories, such as Pipenv and Poetry . Conda integrates well with any such tool. Some of the most frequent cases are described below. Managing packages with external tools \u00b6 Quite often a package that is required in an environment is not available through a Conda channel, but it is available through some other distribution channel, such as the Python Package Index (PyPI) . In these cases the only solution is to create a Conda environment and install the required packages with pip from the Python Package Index. Using an external packaging tool is possible because of the method that Conda uses to install packages. Conda installs package versions in a central directory (e.g.
~/micromamba/envs/R-project for the R-project environment) of any environment that requires them. When using an external package tool, package components are installed in the same directory where Conda would install the corresponding link. Thus, external package management tools integrate seamlessly with Conda, with a couple of caveats: each package must be managed by one tool, otherwise package components will get overwritten, and packages installed by the package tool are specific to an environment and cannot be shared as with Conda, since components are installed directly and not with links. Prefer Conda over external package managers Installing the same package in multiple environments with an external package tool consumes quotas in terms of storage space and number of files , so prefer Conda when possible. This is particularly important for the inode limit, since some packages install a large number of files, and the hard links used by Conda do not consume inodes or disk space . Pip \u00b6 In this example pip is used to manage MkDocs-related packages in a Conda environment. To install the packages, create an environment micromamba env create --name mkdocs activate the environment, micromamba activate mkdocs and install pip micromamba install --channel conda-forge pip which will be used to install the remaining packages. pip will be the only package managed with Conda. For instance, to update pip activate the environment, micromamba activate mkdocs and run micromamba update --all to update all installed packages (only pip in our case). All other packages are managed by pip . For instance, assume that a mkdocs project requires the following packages: mkdocs mkdocs-minify-plugin The package mkdocs-minify-plugin is less popular and thus is not available through a Conda channel, but it is available in PyPI. To install it, activate the mkdocs environment micromamba activate mkdocs and install the required packages with pip pip install --upgrade mkdocs mkdocs-minify-plugin inside the environment. The packages will be installed inside a directory that micromamba created for the Conda environment, for instance ${HOME}/micromamba/envs/mkdocs alongside packages installed by micromamba . As a result, 'system-wide' installations with pip inside a Conda environment do not interfere with system packages. Do not install packages in Conda environments with pip as a user User installed packages (e.g. pip install --user --upgrade mkdocs-minify-plugin ) are installed in the same directory for all environments, typically in ~/.local/ , and can interfere with other versions of the same package installed from other Conda environments. Pkg \u00b6 The Julia programming language provides its own package and environment manager, Pkg. The package manager of Julia provides many useful capabilities and it is recommended for use with Julia projects. Details about the use of Pkg can be found in the official documentation . The Pkg package manager comes packaged with Julia. Start by creating an environment, micromamba env create --name julia activate the environment, micromamba activate julia and install Julia, micromamba install --channel conda-forge julia to start using Pkg. In order to install a Julia package, activate the Julia environment, and start an interactive REPL session, $ julia julia> by just calling julia without any input files. Enter the Pkg package manager by pressing ] .
Exit the package manager by clearing all the input from the line with backspace, and then pressing backspace one more time. In the package manager you can see the status of the current environment, ( @julia ) pkg > status Status `~/micromamba/envs/julia/share/julia/environments/julia/Project.toml` ( empty project ) add or remove packages, ( @julia ) pkg > add Example ( @julia ) pkg > remove Example update the packages in the environment, ( @julia ) pkg > update and perform many other operations, such as exporting and importing environments from plain text files which describe the environment setup, and pinning packages to specific versions. The Pkg package manager maintains a global environment, but also supports the creation and use of local environments that are used within a project directory. The use of local environments is highly recommended; please read the documentation for more information. After installing the Julia language in a Conda environment, the language distribution itself should be managed with micromamba and all packages in global or local environments with the Pkg package manager. To update Julia, activate the Conda environment where Julia is stored and call micromamba update julia , whereas to update packages installed with Pkg use the update command of Pkg. The packages for local and global environments are stored in the Julia installation directory, typically ${HOME}/micromamba/envs/julia/share if the default location for the Micromamba environment directory is used. Advanced management of package data Julia packages will consume storage and number of files quotas . Pkg uses automatic garbage collection to clean up packages that are no longer in use. In general you don't need to manage the package data; simply remove the package and its data will be deleted automatically after some time. However, when you exceed your quota you need to delete files immediately. The immediate removal of the data of uninstalled packages can be forced with the command: using Pkg using Dates Pkg . gc (; collect_delay = Dates . Day ( 0 )) Make sure that the packages have been removed from all the environments that use them Sources : Immediate package data clean up Useful resources Pkg documentation Combining Conda with external environment management tools \u00b6 Quite often it is required to create isolated environments using external tools. For instance, tools such as virtualenv can install and manage a Python distribution in a given directory and export and import environment descriptions from text files. This functionality allows, for instance, the shipping of a description of the Python environment as part of a project. Higher level tools such as pipenv automate the process by managing the Python environment as part of a project directory. The description of the environment is stored in version controlled files, and the Python packages are stored in a non-tracked directory within the project directory. Some holistic project management tools, such as poetry , further integrate the management of the Python environment within the project management workflow. Installing and using tools that create isolated environments in Conda environments is relatively straightforward. Create an environment where only the required tool is installed, and manage any subenvironments using the installed tool. Create a different environment for each tool While this is not a requirement, it is a good practice.
For instance, pipenv and poetry used to and may still have conflicting dependencies; Conda detects the dependency and aborts the conflicting installation. Pipenv \u00b6 To demonstrate the usage of pipenv , create a Conda environment, micromamba enc create --name pipenv activate it micromamba activate pipenv and install the pipenv package micromamba install --channel conda-forge pipenv as the only package in this environment. Now the pipenv is managed with Conda, for instance to update pipenv activate the environment micromamba activate pipenv and call micromamba update --all to update the single installed package. Inside the environment use pipenv as usual to create and manage project environments.","title":"Conda"},{"location":"environment/conda/#self-management-of-work-environments-in-ul-hpc-with-conda","text":"Packages provided through the standard channels of modules and containers are optimized for the ULHPC clusters to ensure their performance and stability. However, many packages where performance is not critical and are used by few users are not provided through the standard channels. These packages can still be installed locally by the users through an environment management system such as Conda. Contact the ULHPC before installing any software with Conda Prefer binaries provided through modules or containers . Conda installs generic binaries that may be suboptimal for the configuration of the ULHPC clusters. Furthermore, installing packages locally with Conda consumes quotas in your or your project's account in terms of storage space and number of files . Contact the ULHPC High Level Support Team in the service portal [Home > Research > HPC > Software environment > Request expertise] to discuss possible options before installing any software. Conda is an open source environment and package management system. With Conda you can create independent environments, where you can install applications such as python and R, together with any packages which will be used by these applications. The environments are independent, with the Conda package manager managing the binaries, resolving dependencies, and ensuring that package used in multiple environments are stored only once. In a typical setting, each user has their own installation of a Conda and a set of personal environments. TL;DR: install and use the Micromamba package manager .","title":"Self management of work environments in UL HPC with Conda"},{"location":"environment/conda/#a-brief-introduction-to-conda","text":"A few concepts are necessary to start working with Conda. In brief, these are package managers which are the programs used to create and manage environments, channels which are the repositories that contain the packages from which environments are composed, and distributions which are methods for shipping package managers.","title":"A brief introduction to Conda"},{"location":"environment/conda/#package-managers","text":"Package managers are the programs that install and manage the Conda environments. There are multiple package managers, such as conda , mamba , and micromamba . The UL HPC centre supports the use of micromamba for the creation and management of personal Conda environments.","title":"Package managers"},{"location":"environment/conda/#channels","text":"Conda channels are the locations where packages are stored. 
There are also multiple channels, with some important channels being: defaults , the default channel, anaconda , a mirror of the default channel, bioconda , a distribution of bioinformatics software, and conda-forge , a community-led collection of recipes, build infrastructure, and distributions for the conda package manager. The most useful channel that comes pre-installed in all distributions, is Conda-Forge. Channels are usually hosted in the official Anaconda page , but in some rare occasions custom channels may be used. For instance the default channel is hosted independently from the official Anaconda page. Many channels also maintain web pages with documentation both for their usage and for packages they distribute: Default Conda channel Bioconda Conda-Forge","title":"Channels"},{"location":"environment/conda/#distributions","text":"Quite often, the package manager is not distributed on its own, but with a set of packages that are required for the package manager to work, or even with some additional packages that required for most applications. For instance, the conda package manager is distributed with the Miniconda and Anaconda distributions. Miniconda contains the bare minimum packages for the conda package manager to work, and Anaconda contains multiple commonly used packages and a graphical user interface. The relation between these distributions and the package manager is depicted in the following diagram. The situation is similar for Mamba distributions. These distributions are supported by Conda-Forge , and their default installation options set-up conda-forge as the default and only channel during installation. The defaults or its mirror anaconda must be explicitly added if required. The distribution using the Mamba package manager was originally distributed as Mambaforge and was recently renamed to Miniforge. Miniforge comes with a minimal set of python packages required by the Mamba package manager. The distribution using the Micromamba package manager ships no accompanying packages, as Micromamba is a standalone executable with no dependencies. Micromamba is using libmamba , a C++ library implementing the Conda API.","title":"Distributions"},{"location":"environment/conda/#the-micromamba-package-manager","text":"The Micromaba package manager is a minimal yet fairly complete implementation of the Conda interface in C++, that is shipped as a standalone executable. The package manager operates strictly on the user-space and thus it requires no special permissions are required to install packages. It maintains all its files in a couple of places, so uninstalling the package manager itself is also easy. Finally, the package manager is also lightweight and fast. UL HPC provides support only for the Micromamba package manager.","title":"The Micromamba package manager"},{"location":"environment/conda/#installation","text":"A complete guide regarding Micromamba installation can be found in the official documentation . To install micromamaba in the HPC clusters, log in to Aion or Iris. Working on a login node, run the installation script, \" ${ SHELL } \" < ( curl -L micro.mamba.pm/install.sh ) which will install the executable and setup the environment. There are 4 options to select during the installation of Micromamba: The directory for the installation of the binary file: Micromamba binary folder? [~/.local/bin] Leave empty and press enter to select the default displayed within brackets. Your .bashrc script should include ~/.local/bin in the $PATH by default. 
The option to add to the environment autocomplete options for micromamba : Init shell (bash)? [Y/n] Press enter to select the default option Y . This will append a clearly marked section in the .bashrc shell. Do not forget to remove this section when uninstalling Micromamba. The option to configure the channels by adding conda-forge: Configure conda-forge? [Y/n] Press enter to select the default option Y . This will setup the ~/.condarc file with conda-forge as the default channel. Note that Mamba and Micromamba will not use the defaults channel if it is not present in ~/.condarc like conda . The option to select the directory where environment information and packages will be stored: Prefix location? [~/micromamba] Press enter to select the default option displayed within brackets. To setup the environment log-out and log-in again. Now you can use micromamba , including the auto-completion feature.","title":"Installation"},{"location":"environment/conda/#managing-environments","text":"As an example, the creation and use of an environment for R jobs is presented. The command, micromamba create --name R-project creates an environment named R-project . The environment is activated with the command micromamba activate R-project anywhere in the file system. Next, install the base R environment package that contains the R program, and any R packages required by the project. To install packages, first ensure that the R-project environment is active, and then install any package with the command micromamba install all the required packages. Quite often, the channel name must also be specified: micromamba install --chanell Packages can be found by searching the conda-forge channel . For instance, the basic functionality of the R software environment is contained in the r-base package. Calling micromamba install --channel conda-forge r-base will install all the components required to run standalone R scripts. More involved scripts use functionality defined in various packages. The R packages are prepended with a prefix 'r-'. Thus, plm becomes r-plm and so on. After all the required packages have been installed, the environment is ready for use. Packages in the conda-forge channel come with instructions for their installation. Quite often the channel is specified in the installation instructions, -c conda-forge or --channel conda-forge . While the Micromamba installer sets-up conda-forge as the default channel, latter modification in ~/.condarc may change the channel priority. Thus it is a good practice to explicitly specify the source channel when installing a package. After work in an environment is complete, deactivate the environment, micromamba deactivate to ensure that it does not interfere with any other operations. In contrast to modules , Conda is designed to operate with a single environment active at a time. Create one environment for each project, and Conda will ensure that any package that is shared between multiple environments is installed once. Micromamba supports almost all the subcommands of Conda. For more details see the official documentation .","title":"Managing environments"},{"location":"environment/conda/#using-environments-in-submission-scripts","text":"Since all computationally heavy operations must be performed in compute nodes, Conda environments are also used in jobs submitted to the queuing system . 
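Assuming the launcher shown below is saved as conda_r_job.sh (an illustrative name) and, as recommended in the shell environment section of this documentation, starts with the #!/bin/bash -l sha-bang so that the ULHPC profile is sourced, it is submitted and monitored with the usual Slurm commands, for example:
sbatch conda_r_job.sh                       # submit; Slurm prints the job ID
squeue -u $USER                             # follow the state of your pending/running jobs
sacct -X -j 123456 -o JobID,State,Elapsed   # replace 123456 with the job ID printed by sbatch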
Returning to the R example, a submission script running a single core R job can use the R-project_name environment as follows: #SBATCH --job-name R-test-job #SBATCH --nodes 1 #SBATCH --ntasks-per-node 1 #SBATCH --cpus-per-task 1 #SBATCH --time=0-02:00:00 #SBATCH --partition batch #SBATCH --qos normal echo \"Launched at $(date)\" echo \"Job ID: ${SLURM_JOBID}\" echo \"Node list: ${SLURM_NODELIST}\" echo \"Submit dir.: ${SLURM_SUBMIT_DIR}\" echo \"Numb. of cores: ${SLURM_CPUS_PER_TASK}\" micromamba activate R-project export OMP_NUM_THREADS=1 srun Rscript --no-save --no-restore script.R micromamba deactivate Useful scripting resources Formatting submission scripts for R (and other systems)","title":"Using environments in submission scripts"},{"location":"environment/conda/#cleaning-up-package-data","text":"The Conda environment managers download and store a sizable amount of data to provided packages to the various environments. Even though the package data are shared between the various environments, they still consume space in your or your project's account. There are limits in the storage space and number of files that are available to projects and users in the cluster. Since Conda packages are self managed, you need to clean unused data yourself . There are two main sources of unused data, the compressed archives of the packages that Conda stores in its cache when downloading a package, and the data of removed packages. All unused data in Micromoamba can be removed with the command micromamba clean --all that opens up an interactive dialogue with details about the operations performed. You can follow the default option, unless you have manually edited any files in you package data directory (default location ${HOME}/micromamba ). Updating environments to remove old package versions As we create new environments, we often install the latest version of each package. However, if the environments are not updated regularly, we may end up with different versions of the same package across multiple environments. If we have the same version of a package installed in all environments, we can save space by removing unused older versions. To update a package across all environments, use the command for e in $( micromamba env list | awk 'FNR>2 {print $1}' ) ; do micromamba update --name $e ; done and to update all packages across all environments for e in $( micromamba env list | awk 'FNR>2 {print $1}' ) ; do micromamba update --name $e --all ; done where FNR>2 removes the headers in the output of micromamba env list , and is thus sensitive to changes in the user interface of Micromamba. After updating packages, the clean command can be called to removed the data of unused older package versions. Sources Oficial Conda clean documentation Understanding Conda clean","title":"Cleaning up package data"},{"location":"environment/conda/#combining-conda-with-other-package-and-environment-management-tools","text":"It may be desirable to use Conda to manage environments but a different tool to manage packages, such as pip . Or subenvironments may need to be used inside a Conda environment, as for instance with tools for creating and managing isolated Python installation, such as virtualenv , or with tools for integrating managed Python installations and packages in project directories, such as Pipenv and Poetry . Conda integrates well with any such tool. 
Some of the most frequent cases are described bellow.","title":"Combining Conda with other package and environment management tools"},{"location":"environment/conda/#managing-packages-with-external-tools","text":"Quite often a package that is required in an environment is not available through a Conda channel, but it is available through some other distribution channel, such as the Python Package Index (PyPI) . In these cases the only solution is to create a Conda environment and install the required packages with pip from the Python Package Index. Using an external packaging tool is possible because of the method that Conda uses to install packages. Conda installs package versions in a central directory (e.g. ~/micromamba/pkgs ). Any environment that requires a package links to the central directory with hard links . Links are added to the home directory (e.g. ~/micromamba/envs/R-project for the R-project environment) of any environment that requires them. When using an external package tool, package components are installed in the same directory where Conda would install the corresponding link. Thus, external package management tools integrate seamlessly with Conda, with a couple of caveats: each package must be managed by one tool, otherwise package components will get overwritten, and packages installed by the package tool are specific to an environment and cannot be shared as with Conda, since components are installed directly and not with links. Prefer Conda over external package managers Installing the same package in multiple environments with an external package tool consumes quotas in terms of storage space and number of files , so prefer Conda when possible. This is particularly important for the inode limit, since some packages install a large number of files, and the hard links used by Conda do not consume inodes or disk space .","title":"Managing packages with external tools"},{"location":"environment/conda/#pip","text":"In this example pip is used to manage packages in a Conda environment with MkDocs related packages. To install the packages, create an environment micromamba env create --name mkdocs activate the environment, micromamba activate mkdocs and install pip micromamba install --channel conda-forge pip which will be used to install the remaining packages. The pip will be the only package that will be managed with Conda. For instance, to update Pip activate the environment, micromamba activate mkdocs and run micromaba update --all to update all installed packaged (only pip in our case). All other packages are managed by pip . For instance, assume that a mkdocs project requires the following packages: mkdocs mkdocs-minify-plugin The package mkdocs-minify-plugin is less popular and thus is is not available though a Conda channel, but it is available in PyPI. To install it, activate the mkdocs environment micromamba activate mkdocs and install the required packages with pip pip install --upgrade mkdocs mkdocs-minify-plugin inside the environment. The packages will be installed inside a directory that micromamba created for the Conda environment, for instance ${HOME}/micromamba/envs/mkdocs along side packages installed by micromamba . As a results, 'system-wide' installations with pip inside a Conda environment do not interfere with system packages. Do not install packages in Conda environments with pip as a user User installed packages (e.g. 
pip install --user --upgrade mkdocs-minify-plugin ) are installed in the same directory for all environments, typically in ~/.local/ , and can interfere with other versions of the same package installed from other Conda environments.","title":"Pip"},{"location":"environment/conda/#pkg","text":"The Julia programming language provides its own package and environment manager, Pkg. The package manager of Julia provides many useful capabilities and it is recommended that it is used with Julia projects. Details about the use of Pkg can be found in the official documentation . The Pkg package manager comes packages with Julia. Start by creating an environment, mocromamba env create --name julia activate the environment, micromamba activate julia and install Julia, micromamba install --channel conda-forge julia to start using Pkg. In order to install a Julia package, activate the Julia environment, and start an interactive REPL session, $ julia julia> by just calling julia without any input files. Enter the Pkg package manager by pressing ] . Exit the package manager by clearing all the input from the line with backspace, and then pressing backspace one more time. In the package manager you can see the status of the current environment, ( @julia ) pkg > status Status `~/micromamba/envs/julia/share/julia/environments/julia/Project.toml` ( empty project ) add or remove packages, ( @julia ) pkg > add Example ( @julia ) pkg > remove Example update the packages in the environment, ( @julia ) pkg > update and perform many other operations, such as exporting and importing environments from plain text files which describe the environment setup, and pinning packages to specific versions. The Pkg package manager maintains a global environment, but also supports the creation and use of local environments that are used within a project directory. The use of local environments is highly recommended, please read the documentation for more information. After installing the Julia language in a Conda environment, the language distribution itself should be managed with micromamba and all packages in global or local environments with the Pkg package manager. To update Julia activate the Conda environment where Julia is stored and call micromamba update julia where as to update packages installed with Pgk use the update command of Pkg. The packages for local and global environments are stored in the Julia installation directory, typically ${HOME}/micromamba/envs/julia/share if the default location for the Micromamba environment directory is used. Advanced management of package data Julia packages will consume storage and number of files quota . Pkg uses automatic garbage collection to cleanup packages that are no longer is use. In general you don't need to manage then package data, simply remove the package and its data will be deleted automatically after some time. However, when you exceed your quota you need to delete files immediately. The immediate removal of the data of uninstalled packages can be forced with the command: using Pkg using Dates Pkg . gc (; collect_delay = Dates . Day ( 0 )) Make sure that the packages have been removed from all the environments that use them Sources : Immediate package data clean up Useful resources Pkg documentation","title":"Pkg"},{"location":"environment/conda/#combining-conda-with-external-environment-management-tools","text":"Quite often it is required to create isolated environments using external tools. 
For instance, tools such as virtualenv can install and manage a Python distribution in a given directory and export and import environment descriptions from text files. This functionalities allows for instance the shipping of a description of the Python environment as part of a project. Higher level tools such as pipenv automate the process by managing the Python environment as part of a project directory. The description of the environment is stored in version controlled files, and the Python packages are stored in a non-tracked directory within the project directory. Some wholistic project management tools, such as poetry , further integrate the management of the Python environment withing the project management workflow. Installing and using in Conda environments tools that create isolated environments is relatively straight forward. Create an environment where only the required that tool is installed, and manage any subenvironments using the installed tool. Create a different environment for each tool While this is not a requirement it is a good practice. For instance, pipenv and poetry used to and may still have conflicting dependencies; Conda detects the dependency and aborts the conflicting installation.","title":"Combining Conda with external environment management tools"},{"location":"environment/conda/#pipenv","text":"To demonstrate the usage of pipenv , create a Conda environment, micromamba enc create --name pipenv activate it micromamba activate pipenv and install the pipenv package micromamba install --channel conda-forge pipenv as the only package in this environment. Now the pipenv is managed with Conda, for instance to update pipenv activate the environment micromamba activate pipenv and call micromamba update --all to update the single installed package. Inside the environment use pipenv as usual to create and manage project environments.","title":"Pipenv"},{"location":"environment/easybuild/","text":"Easybuild \u00b6 EasyBuild (EB for short) is a software build and installation framework that allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way. A large number of scientific software are supported ( at least 2175 supported software packages since the 4.3.2 release) - see also What is EasyBuild? . For several years now, Easybuild is used to manage the ULHPC User Software Set and generate automatically the module files available to you on our computational resources in either prod (default) or devel (early development/testing) environment -- see ULHPC Toolchains and Software Set Versioning . This enables users to easily extend the global Software Set with their own local software builds, either performed within their global home directory or ( better ) in a shared project directory though Easybuild , which generate automatically module files compliant with the ULHPC module setup . Why using an automatic building tool on HPC environment like Easybuild or Spack ? Well that may seem obvious to some of you, but scientific software is often difficult to build. Not all rely on standard building tools like Autotools/Automake (and the famous configure; make; make install ) or CMake. And even in that case, parsing the available option to ensure matching the hardware configuration of the computing resources used for the execution is time consuming and error-prone. Most of the time unfortunately, scientific software embed hardcoded parameters and/or poor/outdated documentation with incomplete build procedures. 
In this context, software build and installation frameworks like Easybuild or Spack helps to facilitate the building task in a consistent and automatic way, while generating also the LMod modulefiles. We select Easybuild as primary building tool to ensure the best optimized builds. Some HPC sites use both -- see this talk from William Lucas at EPCC for instance. It does not prevent from maintaining your own build instructions notes . Easybuild Concepts and terminology \u00b6 Official Easybuild Tutorial EasyBuild relies on two main concepts: Toolchains and EasyConfig files . A toolchain corresponds to a compiler and a set of libraries which are commonly used to build a software. The two main toolchains frequently used on the UL HPC platform are the foss (\" Free and Open Source Software \") and the intel one. foss , based on the GCC compiler and on open-source libraries (OpenMPI, OpenBLAS, etc.). intel , based on the Intel compiler suit ([])and on Intel libraries (Intel MPI, Intel Math Kernel Library, etc.). An EasyConfig file is a simple text file that describes the build process of a software. For most software that uses standard procedures (like configure , make and make install ), this file is very simple. Many EasyConfig files are already provided with EasyBuild. ULHPC Easybuild Configuration \u00b6 To build software with Easybuild compliant with the configuration in place on the ULHPC facility, you need to be aware of the following setup: Modules tool ( $EASYBUILD_MODULES_TOOL ): Lmod (see docs ) Module Naming Scheme ( EASYBUILD_MODULE_NAMING_SCHEME ): we use a special hierarchical organization where the software are classified/ categorized under a pre-defined class. These variables are defined at the global profile level, under /etc/profile.d/ulhpc_resif.sh on the compute nodes as follows: export EASYBUILD_MODULES_TOOL = Lmod export EASYBUILD_MODULE_NAMING_SCHEME = CategorizedModuleNamingScheme All builds and installations are performed at user level, so you don't need the admin (i.e. root ) rights. Another very important configuration variable is the Overall Easybuild prefix path $EASYBUILD_PREFIX which affects the default value of several configuration options: built software are placed under ${EASYBUILD_PREFIX}/software/ modules install path: ${EASYBUILD_PREFIX}/modules/all (determined via Overall prefix path (--prefix), --subdir-modules and --suffix-modules-path) You can thus extend the ULHPC Software set with your own local builds by setting appropriately the variable $EASYBUILD_PREFIX : For installation in your home directory: export EASYBUILD_PREFIX=$HOME/.local/easybuild For installation in a shared project directory : export EASYBUILD_PREFIX=$PROJECTHOME//easybuild Adapting you custom build to cluster, the toolchain version and the architecture Just like the ULHPC software set ( installed in EASYBUILD_PREFIX=/opt/apps/resif/// ), you may want to isolate your local builds to take into account the cluster $ULHPC_CLUSTER (\"iris\" or \"aion\"), the toolchain version (Ex: 2019b, 2020b etc.) you build upon and eventually the architecture . In that case, you can use the following helper scripts: resif-load-home-swset-prod which is roughly equivalent to the following code: # EASYBUILD_PREFIX: [basedir]/// # Ex: Default EASYBUILD_PREFIX in your home - Adapt to project directory if needed _EB_PREFIX = $HOME /.local/easybuild # ... eventually complemented with cluster [ -n \" ${ ULHPC_CLUSTER } \" ] && _EB_PREFIX = \" ${ _EB_PREFIX } / ${ ULHPC_CLUSTER } \" # ... 
eventually complemented with software set version _EB_PREFIX = \" ${ _EB_PREFIX } / ${ RESIF_VERSION_PROD } \" # ... eventually complemented with arch [ -n \" ${ RESIF_ARCH } \" ] && _EB_PREFIX = \" ${ _EB_PREFIX } / ${ RESIF_ARCH } \" export EASYBUILD_PREFIX = \" ${ _EB_PREFIX } \" export LOCAL_MODULES = ${ EASYBUILD_PREFIX } /modules/all For a shared project directory located under $PROJECTHOME/ , you can use the following following helper scripts: resif-load-project-swset-prod $PROJECTHOME / ACM PEARC'21: RESIF 3.0 For more details on the way we setup and deploy the User Software Environment on ULHPC systems through the RESIF 3 framework, see the ACM PEARC'21 conference paper presented on July 22, 2021. ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides | Github : Sebastien Varrette, Emmanuel Kieffer, Frederic Pinel, Ezhilmathi Krishnasamy, Sarah Peter, Hyacinthe Cartiaux, and Xavier Besseron. 2021. RESIF 3.0: Toward a Flexible & Automated Management of User Software Environment on HPC facility. In Practice and Experience in Advanced Research Computing (PEARC '21) . Association for Computing Machinery (ACM), New York, NY, USA, Article 33, 1\u20134. https://doi.org/10.1145/3437359.3465600 Installation / Update local Easybuild \u00b6 You can of course use the default Easubuild that comes with the ULHPC software set with module load tools/EasyBuild . But as soon as you want to install your local builds, you have interest to install the up-to-date release of EasyBuild in your local $EASYBUILD_PREFIX . For this purpose, you can follow the official instructions .","title":"Easybuild"},{"location":"environment/easybuild/#easybuild","text":"EasyBuild (EB for short) is a software build and installation framework that allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way. A large number of scientific software are supported ( at least 2175 supported software packages since the 4.3.2 release) - see also What is EasyBuild? . For several years now, Easybuild is used to manage the ULHPC User Software Set and generate automatically the module files available to you on our computational resources in either prod (default) or devel (early development/testing) environment -- see ULHPC Toolchains and Software Set Versioning . This enables users to easily extend the global Software Set with their own local software builds, either performed within their global home directory or ( better ) in a shared project directory though Easybuild , which generate automatically module files compliant with the ULHPC module setup . Why using an automatic building tool on HPC environment like Easybuild or Spack ? Well that may seem obvious to some of you, but scientific software is often difficult to build. Not all rely on standard building tools like Autotools/Automake (and the famous configure; make; make install ) or CMake. And even in that case, parsing the available option to ensure matching the hardware configuration of the computing resources used for the execution is time consuming and error-prone. Most of the time unfortunately, scientific software embed hardcoded parameters and/or poor/outdated documentation with incomplete build procedures. In this context, software build and installation frameworks like Easybuild or Spack helps to facilitate the building task in a consistent and automatic way, while generating also the LMod modulefiles. We select Easybuild as primary building tool to ensure the best optimized builds. 
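As a hedged sketch of what a local Easybuild build typically looks like in practice (the zlib-1.2.11.eb easyconfig name is purely illustrative, and the prefix follows the home-directory layout described above):
module load tools/EasyBuild                      # EasyBuild provided with the ULHPC software set
export EASYBUILD_PREFIX=$HOME/.local/easybuild   # local builds and generated modules land here
eb zlib-1.2.11.eb --robot                        # build the easyconfig and any missing dependencies
module use $EASYBUILD_PREFIX/modules/all         # expose the freshly generated module files
module avail zlib                                # the new module should now be listed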
Some HPC sites use both -- see this talk from William Lucas at EPCC for instance. It does not prevent from maintaining your own build instructions notes .","title":"Easybuild"},{"location":"environment/easybuild/#easybuild-concepts-and-terminology","text":"Official Easybuild Tutorial EasyBuild relies on two main concepts: Toolchains and EasyConfig files . A toolchain corresponds to a compiler and a set of libraries which are commonly used to build a software. The two main toolchains frequently used on the UL HPC platform are the foss (\" Free and Open Source Software \") and the intel one. foss , based on the GCC compiler and on open-source libraries (OpenMPI, OpenBLAS, etc.). intel , based on the Intel compiler suit ([])and on Intel libraries (Intel MPI, Intel Math Kernel Library, etc.). An EasyConfig file is a simple text file that describes the build process of a software. For most software that uses standard procedures (like configure , make and make install ), this file is very simple. Many EasyConfig files are already provided with EasyBuild.","title":"Easybuild Concepts and terminology"},{"location":"environment/easybuild/#ulhpc-easybuild-configuration","text":"To build software with Easybuild compliant with the configuration in place on the ULHPC facility, you need to be aware of the following setup: Modules tool ( $EASYBUILD_MODULES_TOOL ): Lmod (see docs ) Module Naming Scheme ( EASYBUILD_MODULE_NAMING_SCHEME ): we use a special hierarchical organization where the software are classified/ categorized under a pre-defined class. These variables are defined at the global profile level, under /etc/profile.d/ulhpc_resif.sh on the compute nodes as follows: export EASYBUILD_MODULES_TOOL = Lmod export EASYBUILD_MODULE_NAMING_SCHEME = CategorizedModuleNamingScheme All builds and installations are performed at user level, so you don't need the admin (i.e. root ) rights. Another very important configuration variable is the Overall Easybuild prefix path $EASYBUILD_PREFIX which affects the default value of several configuration options: built software are placed under ${EASYBUILD_PREFIX}/software/ modules install path: ${EASYBUILD_PREFIX}/modules/all (determined via Overall prefix path (--prefix), --subdir-modules and --suffix-modules-path) You can thus extend the ULHPC Software set with your own local builds by setting appropriately the variable $EASYBUILD_PREFIX : For installation in your home directory: export EASYBUILD_PREFIX=$HOME/.local/easybuild For installation in a shared project directory : export EASYBUILD_PREFIX=$PROJECTHOME//easybuild Adapting you custom build to cluster, the toolchain version and the architecture Just like the ULHPC software set ( installed in EASYBUILD_PREFIX=/opt/apps/resif/// ), you may want to isolate your local builds to take into account the cluster $ULHPC_CLUSTER (\"iris\" or \"aion\"), the toolchain version (Ex: 2019b, 2020b etc.) you build upon and eventually the architecture . In that case, you can use the following helper scripts: resif-load-home-swset-prod which is roughly equivalent to the following code: # EASYBUILD_PREFIX: [basedir]/// # Ex: Default EASYBUILD_PREFIX in your home - Adapt to project directory if needed _EB_PREFIX = $HOME /.local/easybuild # ... eventually complemented with cluster [ -n \" ${ ULHPC_CLUSTER } \" ] && _EB_PREFIX = \" ${ _EB_PREFIX } / ${ ULHPC_CLUSTER } \" # ... eventually complemented with software set version _EB_PREFIX = \" ${ _EB_PREFIX } / ${ RESIF_VERSION_PROD } \" # ... 
eventually complemented with arch [ -n \" ${ RESIF_ARCH } \" ] && _EB_PREFIX = \" ${ _EB_PREFIX } / ${ RESIF_ARCH } \" export EASYBUILD_PREFIX = \" ${ _EB_PREFIX } \" export LOCAL_MODULES = ${ EASYBUILD_PREFIX } /modules/all For a shared project directory located under $PROJECTHOME/ , you can use the following following helper scripts: resif-load-project-swset-prod $PROJECTHOME / ACM PEARC'21: RESIF 3.0 For more details on the way we setup and deploy the User Software Environment on ULHPC systems through the RESIF 3 framework, see the ACM PEARC'21 conference paper presented on July 22, 2021. ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides | Github : Sebastien Varrette, Emmanuel Kieffer, Frederic Pinel, Ezhilmathi Krishnasamy, Sarah Peter, Hyacinthe Cartiaux, and Xavier Besseron. 2021. RESIF 3.0: Toward a Flexible & Automated Management of User Software Environment on HPC facility. In Practice and Experience in Advanced Research Computing (PEARC '21) . Association for Computing Machinery (ACM), New York, NY, USA, Article 33, 1\u20134. https://doi.org/10.1145/3437359.3465600","title":"ULHPC Easybuild Configuration"},{"location":"environment/easybuild/#installation-update-local-easybuild","text":"You can of course use the default Easubuild that comes with the ULHPC software set with module load tools/EasyBuild . But as soon as you want to install your local builds, you have interest to install the up-to-date release of EasyBuild in your local $EASYBUILD_PREFIX . For this purpose, you can follow the official instructions .","title":"Installation / Update local Easybuild"},{"location":"environment/modules/","text":"ULHPC Software/Modules Environment \u00b6 The UL HPC facility provides a large variety of scientific applications to its user community, either domain-specific codes and general purpose development tools which enable research and innovation excellence across a wide set of computational fields. -- see software list . We use the Environment Modules / LMod framework which provided the module utility on Compute nodes to manage nearly all software. There are two main advantages of the module approach: ULHPC can provide many different versions and/or installations of a single software package on a given machine, including a default version as well as several older and newer version. Users can easily switch to different versions or installations without having to explicitly specify different paths. With modules, the MANPATH and related environment variables are automatically managed. ULHPC modules are in practice automatically generated by Easybuild . EasyBuild (EB for short) is a software build and installation framework that allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way. A large number of scientific software are supported ( at least 2175 supported software packages since the 4.3.2 release) - see also What is EasyBuild? . For several years now, Easybuild is used to manage the ULHPC User Software Set and generate automatically the module files available to you on our computational resources in either prod (default) or devel (early development/testing) environment -- see ULHPC Toolchains and Software Set Versioning . 
This enables users to easily extend the global Software Set with their own local software builds, either performed within their global home directory or ( better ) in a shared project directory though Easybuild , which generate automatically module files compliant with the ULHPC module setup . Environment modules and LMod \u00b6 Environment Modules are a standard and well-established technology across HPC sites, to permit developing and using complex software and libraries build with dependencies, allowing multiple versions of software stacks and combinations thereof to co-exist. It brings the module command which is used to manage environment variables such as PATH , LD_LIBRARY_PATH and MANPATH , enabling the easy loading and unloading of application/library profiles and their dependencies. Why do you need [Environment] Modules? When users login to a Linux system, they get a login shell and the shell uses Environment variables to run commands and applications. Most common are: PATH : colon-separated list of directories in which your system looks for executable files; MANPATH : colon-separated list of directories in which man searches for the man pages; LD_LIBRARY_PATH : colon-separated list of directories in which your system looks for for ELF / *.so libraries at execution time needed by applications. There are also application specific environment variables such as CPATH , LIBRARY_PATH , JAVA_HOME , LM_LICENSE_FILE , MKLROOT etc. A traditional way to setup these Environment variables is by customizing the shell initialization files : i.e. /etc/profile , .bash_profile , and .bashrc This proves to be very impractical on multi-user systems with various applications and multiple application versions installed as on an HPC facility. To overcome the difficulty of setting and changing the Environment variables, the TCL/C Environment Modules were introduced over 2 decades ago. The Environment Modules package is a tool that simplify shell initialization and lets users easily modify their environment during the session with modulefiles . Each modulefile contains the information needed to configure the shell for an application. Once the Modules package is initialized, the environment can be modified on a per-module basis using the module command which interprets modulefiles. Typically modulefiles instruct the module command to alter or set shell environment variables such as PATH , MANPATH , etc. Modulefiles may be shared by many users on a system (as done on the ULHPC clusters) and users may have their own collection to supplement or replace the shared modulefiles. Modules can be loaded and unloaded dynamically and atomically, in an clean fashion. All popular shells are supported, including bash , ksh , zsh , sh , csh , tcsh , fish , as well as some scripting languages such as perl , ruby , tcl , python , cmake and R . Modules are useful in managing different versions of applications. Modules can also be bundled into metamodules that will load an entire suite of different applications -- this is precisely the way we manage the ULHPC Software Set Tcl/C Environment Modules (Tmod) vs. Tcl Environment Modules vs. 
Lmod There exists several implementation of the module tool: Tcl/C Environment Modules (3.2.10 \\leq \\leq version < 4), also called Tmod : the seminal ( old ) implementation Tcl-only variant of Environment modules (version \\geq \\geq 4), previously called Modules-Tcl ( recommended ) Lmod , a Lua based Environment Module System Lmod (\"L\" stands for Lua ) provides all of the functionality of TCL/C Environment Modules plus more features: support for hierarchical module file structure MODULEPATH is dynamically updated when modules are loaded. makes loaded modules inactive and active to provide sane environment. supports for hidden modules support for optional usage tracking (implemented on ULHPC facilities) In particular, Lmod enforces the following safety features that are not always guaranted with the other tools: The One Name Rule : Users can only have one version active Users can only load one compiler or MPI stack at a time (through the family(...) directive) The ULHPC Facility relies on Lmod -- the associated Modulefiles being automatically generated by Easybuild . The ULHPC Facility relies on Lmod , a Lua-based Environment module system that easily handles the MODULEPATH Hierarchical problem. In this context, the module command supports the following subcommands: Command Description module avail Lists all the modules which are available to be loaded module spider Search for among available modules (Lmod only) module load [mod2...] Load a module module unload Unload a module module list List loaded modules module purge Unload all modules (purge) module display Display what a module does module use Prepend the directory to the MODULEPATH environment variable module unuse Remove the directory from the MODULEPATH environment variable What is module ? module is a shell function that modifies user shell upon load of a modulefile. It is defined as follows $ type module module is a function module () { eval $($LMOD_CMD bash \"$@\") && eval $(${LMOD_SETTARG_CMD:-:} -s sh) } In particular, module is NOT a program At the heart of environment modules interaction resides the following components: the MODULEPATH environment variable, which defines a colon-separated list of directories to search for modulefiles modulefile (see an example ) associated to each available software. Example of ULHPC toolchain/foss (auto-generated) Modulefile $ module show toolchain/foss ------------------------------------------------------------------------------- /opt/apps/resif/iris/2019b/broadwell/modules/all/toolchain/foss/2019b.lua: ------------------------------------------------------------------------------- help([[ Description =========== GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. 
More information ================ - Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain ]]) whatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.\") whatis(\"Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain\") whatis(\"URL: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain\") conflict(\"toolchain/foss\") load(\"compiler/GCC/8.3.0\") load(\"mpi/OpenMPI/3.1.4-GCC-8.3.0\") load(\"numlib/OpenBLAS/0.3.7-GCC-8.3.0\") load(\"numlib/FFTW/3.3.8-gompi-2019b\") load(\"numlib/ScaLAPACK/2.0.2-gompi-2019b\") setenv(\"EBROOTFOSS\",\"/opt/apps/resif/iris/2019b/broadwell/software/foss/2019b\") setenv(\"EBVERSIONFOSS\",\"2019b\") setenv(\"EBDEVELFOSS\",\"/opt/apps/resif/iris/2019b/broadwell/software/foss/2019b/easybuild/toolchain-foss-2019b-easybuild-devel\") ( reminder ): the module command is ONLY available on the compute nodes, NOT on the access front-ends. In particular, you need to be within a job to load ULHPC or private modules. ULHPC $MODULEPATH \u00b6 By default, the MODULEPATH environment variable holds a single searched directory holding the optimized builds prepared for you by the ULHPC Team. The general format of this directory is as follows: /opt/apps/resif////modules/all where: depicts the name of the cluster ( iris or aion ). Stored as $ULHPC_CLUSTER . corresponds to the ULHPC Software set release (aligned with Easybuid toolchains release ), i.e. 2019b , 2020a etc. Stored as $RESIF_VERSION_{PROD,DEVEL,LEGACY} depending on the Production / development / legacy ULHPC software set version is a lower-case strings that categorize the CPU architecture of the build host, and permits to easyli identify optimized target architecture. It is stored as $RESIF_ARCH . On Intel nodes: broadwell ( default ), skylake On AMD nodes: epyc On GPU nodes: gpu Cluster Arch. $RESIF_ARCH $MODULEPATH Environment variable Iris broadwell (default) /opt/apps/resif/iris//broadwell/modules/all Iris skylake /opt/apps/resif/iris//skylake/modules/all Iris gpu /opt/apps/resif/iris//gpu/modules/all Aion epyc (default) /opt/apps/resif/aion//{epyc}/modules/all On skylake nodes, you may want to use the optimized modules for skylake On GPU nodes, you may want to use the CPU-optimized builds for skylake (in addition to the gpu -enabled softwares) ACM PEARC'21: RESIF 3.0 If you are interested to know more on the wey we setup and deploy the User Software Environment on ULHPC systems through the RESIF 3 framework, you can refer to the below article presented during the ACM PEARC'21 conference, on July 22, 2021. ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides | Github : Sebastien Varrette, Emmanuel Kieffer, Frederic Pinel, Ezhilmathi Krishnasamy, Sarah Peter, Hyacinthe Cartiaux, and Xavier Besseron. 2021. RESIF 3.0: Toward a Flexible & Automated Management of User Software Environment on HPC facility. In Practice and Experience in Advanced Research Computing (PEARC '21) . Association for Computing Machinery (ACM), New York, NY, USA, Article 33, 1\u20134. https://doi.org/10.1145/3437359.3465600 Module Naming Schemes \u00b6 What is a Module Naming Scheme? The full software and module install paths for a particular software package are determined by the active module naming scheme along with the general software and modules install paths specified by the EasyBuild configuration. 
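To see what this naming gives in practice on a compute node, you can inspect the relevant environment variables and query a module through its categorized name (the package below is only an example):

```bash
echo $ULHPC_CLUSTER $RESIF_VERSION_PROD $RESIF_ARCH   # cluster, software set release and build architecture
echo $MODULEPATH                                      # the resulting searched module directory
module spider OpenBLAS                                # find every provided version, whatever its category
module load numlib/OpenBLAS                           # load it via its <category>/<software> name
```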
You can list the supported module naming schemes of Easybuild using: $ eb --avail-module-naming-schemes List of supported module naming schemes: EasyBuildMNS CategorizedHMNS MigrateFromEBToHMNS HierarchicalMNS CategorizedModuleNamingScheme See Flat vs. Hierarchical module naming scheme for an illustrated explaination of the difference between two extreme cases: flat or 3-level hierarchical. On ULHPC systems, we selected an intermediate scheme called CategorizedModuleNamingScheme . Module Naming Schemes on ULHPC system ULHPC modules are organised through the Categorized Naming Scheme Format: //- This means that the typical module hierarchy has as prefix a category level, taken out from one of the supported software category or module class : $ eb --show-default-moduleclasses Default available module classes: base: Default module class astro: Astronomy, Astrophysics and Cosmology bio: Bioinformatics, biology and biomedical cae: Computer Aided Engineering (incl. CFD) chem: Chemistry, Computational Chemistry and Quantum Chemistry compiler: Compilers data: Data management & processing tools debugger: Debuggers devel: Development tools geo: Earth Sciences ide: Integrated Development Environments (e.g. editors) lang: Languages and programming aids lib: General purpose libraries math: High-level mathematical software mpi: MPI stacks numlib: Numerical Libraries perf: Performance tools quantum: Quantum Computing phys: Physics and physical systems simulations system: System utilities (e.g. highly depending on system OS and hardware) toolchain: EasyBuild toolchains tools: General purpose tools vis: Visualization, plotting, documentation and typesetting It follows that the ULHPC software modules are structured according to the organization depicted below ( click to enlarge ). ULHPC Toolchains and Software Set Versioning \u00b6 We offer a YEARLY release of the ULHPC Software Set based on Easybuid release of toolchains -- see Component versions ( fixed per release ) in the foss and intel toolchains. However , count at least 6 months of validation/import after EB release before ULHPC release An overview of the currently available component versions is depicted below: Name Type 2019b ( legacy ) 2020a 2020b ( prod ) 2021a 2021b ( devel ) GCCCore compiler 8.3.0 9.3.0 10.2.0 10.3.0 11.2.0 foss toolchain 2019b 2020a 2020b 2021a 2021b intel toolchain 2019b 2020a 2020b 2021a 2021b binutils 2.32 2.34 2.35 2.36 2.37 Python 3.7.4 (and 2.7.16) 3.8.2 (and 2.7.18) 3.8.6 3.9.2 3.9.6 LLVM compiler 9.0.1 10.0.1 11.0.0 11.1.0 12.0.1 OpenMPI MPI 3.1.4 4.0.3 4.0.5 4.1.1 4.1.2 Once on a node, the current version of the ULHPC Software Set in production is stored in $RESIF_VERSION_PROD . You can use the variables $MODULEPATH_{LEGACY,PROD,DEVEL} to access or set the MODULEPATH command with the appropriate value. Yet we have define utility scripts to facilitate your quick reset of the module environment, i.e., resif-load-swset-{legacy,prod,devel} and resif-reset-swset For instance, if you want to use the legacy software set, proceed as follows in your launcher scripts: resif-load-swset-legacy # Eq. of export MODULEPATH=$MODULEPATH_LEGACY # [...] # Restore production settings resif-load-swset-prod # Eq. of export MODULEPATH=$MODULEPATH_PROD If on the contrary you want to test the (new) development software set, i.e., the devel version, stored in $RESIF_VERSION_DEVEL : resif-load-swset-devel # Eq. of export MODULEPATH=$MODULEPATH_DEVEL # [...] 
# Restore production settings resif-reset-swset # As resif-load-swset-prod (iris only) Skylake Optimized builds Skylake optimized build can be loaded on regular nodes using resif-load-swset-skylake # Eq. of export MODULEPATH=$MODULEPATH_PROD_SKYLAKE You MUST obviously be on a Skylake node ( sbatch -C skylake [...] ) to take benefit from it. Note that this action is not required on GPU nodes. GPU Optimized builds vs. CPU software set on GPU nodes On GPU nodes, be aware that the default MODULEPATH holds two directories: GPU Optimized builds ( i.e. typically against the {foss,intel}cuda toolchains) stored under /opt/apps/resif///gpu/modules/all CPU Optimized builds (ex: skylake on Iris )) stored under /opt/apps/resif///skylake/modules/all You may want to exclude CPU builds to ensure you take the most out of the GPU accelerators. In that case, you may want to run: # /!\\ ADAPT accordingly module unuse /opt/apps/resif/ ${ ULHPC_CLUSTER } / ${ RESIF_VERSION_PROD } /skylake/modules/all Using Easybuild to Create Custom Modules \u00b6 Just like we do, you probably want to use Easybuild to complete the existing software set with your own modules and software builds. See Building Custom (or missing) software documentation for more details. Creating a Custom Module Environment \u00b6 You can modify your environment so that certain modules are loaded whenever you log in. Use module save [] and module restore [] for that purpose -- see Lmod documentation on User collections You can also create and install your own modules for your convenience or for sharing software among collaborators. See the modulefile documentation for details of the required format and available commands. These custom modulefiles can be made visible to the module command by module use /path/to/the/custom/modulefiles Warning Make sure the UNIX file permissions grant access to all users who want to use the software. Do not give write permissions to your home directory to anyone else. Note The module use command adds new directories before other module search paths (defined as $MODULEPATH ), so modules defined in a custom directory will have precedence if there are other modules with the same name in the module search paths. If you prefer to have the new directory added at the end of $MODULEPATH , use module use -a instead of module use . Module FAQ \u00b6 Is there an environment variable that captures loaded modules? Yes, active modules can be retrieved via $LOADEDMODULES , this environment variable is automatically changed to reflect active loaded modules that is reflected via module list . If you want to access modulefile path for loaded modules you can retrieve via $_LM_FILES","title":"Modules"},{"location":"environment/modules/#ulhpc-softwaremodules-environment","text":"The UL HPC facility provides a large variety of scientific applications to its user community, either domain-specific codes and general purpose development tools which enable research and innovation excellence across a wide set of computational fields. -- see software list . We use the Environment Modules / LMod framework which provided the module utility on Compute nodes to manage nearly all software. There are two main advantages of the module approach: ULHPC can provide many different versions and/or installations of a single software package on a given machine, including a default version as well as several older and newer version. Users can easily switch to different versions or installations without having to explicitly specify different paths. 
With modules, the MANPATH and related environment variables are automatically managed. ULHPC modules are in practice automatically generated by Easybuild . EasyBuild (EB for short) is a software build and installation framework that allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way. A large number of scientific software are supported ( at least 2175 supported software packages since the 4.3.2 release) - see also What is EasyBuild? . For several years now, Easybuild is used to manage the ULHPC User Software Set and generate automatically the module files available to you on our computational resources in either prod (default) or devel (early development/testing) environment -- see ULHPC Toolchains and Software Set Versioning . This enables users to easily extend the global Software Set with their own local software builds, either performed within their global home directory or ( better ) in a shared project directory though Easybuild , which generate automatically module files compliant with the ULHPC module setup .","title":"ULHPC Software/Modules Environment"},{"location":"environment/modules/#environment-modules-and-lmod","text":"Environment Modules are a standard and well-established technology across HPC sites, to permit developing and using complex software and libraries build with dependencies, allowing multiple versions of software stacks and combinations thereof to co-exist. It brings the module command which is used to manage environment variables such as PATH , LD_LIBRARY_PATH and MANPATH , enabling the easy loading and unloading of application/library profiles and their dependencies. Why do you need [Environment] Modules? When users login to a Linux system, they get a login shell and the shell uses Environment variables to run commands and applications. Most common are: PATH : colon-separated list of directories in which your system looks for executable files; MANPATH : colon-separated list of directories in which man searches for the man pages; LD_LIBRARY_PATH : colon-separated list of directories in which your system looks for for ELF / *.so libraries at execution time needed by applications. There are also application specific environment variables such as CPATH , LIBRARY_PATH , JAVA_HOME , LM_LICENSE_FILE , MKLROOT etc. A traditional way to setup these Environment variables is by customizing the shell initialization files : i.e. /etc/profile , .bash_profile , and .bashrc This proves to be very impractical on multi-user systems with various applications and multiple application versions installed as on an HPC facility. To overcome the difficulty of setting and changing the Environment variables, the TCL/C Environment Modules were introduced over 2 decades ago. The Environment Modules package is a tool that simplify shell initialization and lets users easily modify their environment during the session with modulefiles . Each modulefile contains the information needed to configure the shell for an application. Once the Modules package is initialized, the environment can be modified on a per-module basis using the module command which interprets modulefiles. Typically modulefiles instruct the module command to alter or set shell environment variables such as PATH , MANPATH , etc. Modulefiles may be shared by many users on a system (as done on the ULHPC clusters) and users may have their own collection to supplement or replace the shared modulefiles. 
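In day-to-day use this boils down to a handful of commands, for instance (module names are illustrative):

```bash
module avail                  # list the modules visible through the current MODULEPATH
module load toolchain/foss    # load a toolchain together with its dependencies
module list                   # verify what is currently loaded
module purge                  # unload everything and start again from a clean environment
```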
Modules can be loaded and unloaded dynamically and atomically, in an clean fashion. All popular shells are supported, including bash , ksh , zsh , sh , csh , tcsh , fish , as well as some scripting languages such as perl , ruby , tcl , python , cmake and R . Modules are useful in managing different versions of applications. Modules can also be bundled into metamodules that will load an entire suite of different applications -- this is precisely the way we manage the ULHPC Software Set Tcl/C Environment Modules (Tmod) vs. Tcl Environment Modules vs. Lmod There exists several implementation of the module tool: Tcl/C Environment Modules (3.2.10 \\leq \\leq version < 4), also called Tmod : the seminal ( old ) implementation Tcl-only variant of Environment modules (version \\geq \\geq 4), previously called Modules-Tcl ( recommended ) Lmod , a Lua based Environment Module System Lmod (\"L\" stands for Lua ) provides all of the functionality of TCL/C Environment Modules plus more features: support for hierarchical module file structure MODULEPATH is dynamically updated when modules are loaded. makes loaded modules inactive and active to provide sane environment. supports for hidden modules support for optional usage tracking (implemented on ULHPC facilities) In particular, Lmod enforces the following safety features that are not always guaranted with the other tools: The One Name Rule : Users can only have one version active Users can only load one compiler or MPI stack at a time (through the family(...) directive) The ULHPC Facility relies on Lmod -- the associated Modulefiles being automatically generated by Easybuild . The ULHPC Facility relies on Lmod , a Lua-based Environment module system that easily handles the MODULEPATH Hierarchical problem. In this context, the module command supports the following subcommands: Command Description module avail Lists all the modules which are available to be loaded module spider Search for among available modules (Lmod only) module load [mod2...] Load a module module unload Unload a module module list List loaded modules module purge Unload all modules (purge) module display Display what a module does module use Prepend the directory to the MODULEPATH environment variable module unuse Remove the directory from the MODULEPATH environment variable What is module ? module is a shell function that modifies user shell upon load of a modulefile. It is defined as follows $ type module module is a function module () { eval $($LMOD_CMD bash \"$@\") && eval $(${LMOD_SETTARG_CMD:-:} -s sh) } In particular, module is NOT a program At the heart of environment modules interaction resides the following components: the MODULEPATH environment variable, which defines a colon-separated list of directories to search for modulefiles modulefile (see an example ) associated to each available software. Example of ULHPC toolchain/foss (auto-generated) Modulefile $ module show toolchain/foss ------------------------------------------------------------------------------- /opt/apps/resif/iris/2019b/broadwell/modules/all/toolchain/foss/2019b.lua: ------------------------------------------------------------------------------- help([[ Description =========== GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. 
More information ================ - Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain ]]) whatis(\"Description: GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.\") whatis(\"Homepage: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain\") whatis(\"URL: https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain\") conflict(\"toolchain/foss\") load(\"compiler/GCC/8.3.0\") load(\"mpi/OpenMPI/3.1.4-GCC-8.3.0\") load(\"numlib/OpenBLAS/0.3.7-GCC-8.3.0\") load(\"numlib/FFTW/3.3.8-gompi-2019b\") load(\"numlib/ScaLAPACK/2.0.2-gompi-2019b\") setenv(\"EBROOTFOSS\",\"/opt/apps/resif/iris/2019b/broadwell/software/foss/2019b\") setenv(\"EBVERSIONFOSS\",\"2019b\") setenv(\"EBDEVELFOSS\",\"/opt/apps/resif/iris/2019b/broadwell/software/foss/2019b/easybuild/toolchain-foss-2019b-easybuild-devel\") ( reminder ): the module command is ONLY available on the compute nodes, NOT on the access front-ends. In particular, you need to be within a job to load ULHPC or private modules.","title":"Environment modules and LMod"},{"location":"environment/modules/#ulhpc-modulepath","text":"By default, the MODULEPATH environment variable holds a single searched directory holding the optimized builds prepared for you by the ULHPC Team. The general format of this directory is as follows: /opt/apps/resif////modules/all where: depicts the name of the cluster ( iris or aion ). Stored as $ULHPC_CLUSTER . corresponds to the ULHPC Software set release (aligned with Easybuid toolchains release ), i.e. 2019b , 2020a etc. Stored as $RESIF_VERSION_{PROD,DEVEL,LEGACY} depending on the Production / development / legacy ULHPC software set version is a lower-case strings that categorize the CPU architecture of the build host, and permits to easyli identify optimized target architecture. It is stored as $RESIF_ARCH . On Intel nodes: broadwell ( default ), skylake On AMD nodes: epyc On GPU nodes: gpu Cluster Arch. $RESIF_ARCH $MODULEPATH Environment variable Iris broadwell (default) /opt/apps/resif/iris//broadwell/modules/all Iris skylake /opt/apps/resif/iris//skylake/modules/all Iris gpu /opt/apps/resif/iris//gpu/modules/all Aion epyc (default) /opt/apps/resif/aion//{epyc}/modules/all On skylake nodes, you may want to use the optimized modules for skylake On GPU nodes, you may want to use the CPU-optimized builds for skylake (in addition to the gpu -enabled softwares) ACM PEARC'21: RESIF 3.0 If you are interested to know more on the wey we setup and deploy the User Software Environment on ULHPC systems through the RESIF 3 framework, you can refer to the below article presented during the ACM PEARC'21 conference, on July 22, 2021. ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides | Github : Sebastien Varrette, Emmanuel Kieffer, Frederic Pinel, Ezhilmathi Krishnasamy, Sarah Peter, Hyacinthe Cartiaux, and Xavier Besseron. 2021. RESIF 3.0: Toward a Flexible & Automated Management of User Software Environment on HPC facility. In Practice and Experience in Advanced Research Computing (PEARC '21) . Association for Computing Machinery (ACM), New York, NY, USA, Article 33, 1\u20134. https://doi.org/10.1145/3437359.3465600","title":"ULHPC $MODULEPATH"},{"location":"environment/modules/#module-naming-schemes","text":"What is a Module Naming Scheme? 
The full software and module install paths for a particular software package are determined by the active module naming scheme along with the general software and modules install paths specified by the EasyBuild configuration. You can list the supported module naming schemes of Easybuild using: $ eb --avail-module-naming-schemes List of supported module naming schemes: EasyBuildMNS CategorizedHMNS MigrateFromEBToHMNS HierarchicalMNS CategorizedModuleNamingScheme See Flat vs. Hierarchical module naming scheme for an illustrated explaination of the difference between two extreme cases: flat or 3-level hierarchical. On ULHPC systems, we selected an intermediate scheme called CategorizedModuleNamingScheme . Module Naming Schemes on ULHPC system ULHPC modules are organised through the Categorized Naming Scheme Format: //- This means that the typical module hierarchy has as prefix a category level, taken out from one of the supported software category or module class : $ eb --show-default-moduleclasses Default available module classes: base: Default module class astro: Astronomy, Astrophysics and Cosmology bio: Bioinformatics, biology and biomedical cae: Computer Aided Engineering (incl. CFD) chem: Chemistry, Computational Chemistry and Quantum Chemistry compiler: Compilers data: Data management & processing tools debugger: Debuggers devel: Development tools geo: Earth Sciences ide: Integrated Development Environments (e.g. editors) lang: Languages and programming aids lib: General purpose libraries math: High-level mathematical software mpi: MPI stacks numlib: Numerical Libraries perf: Performance tools quantum: Quantum Computing phys: Physics and physical systems simulations system: System utilities (e.g. highly depending on system OS and hardware) toolchain: EasyBuild toolchains tools: General purpose tools vis: Visualization, plotting, documentation and typesetting It follows that the ULHPC software modules are structured according to the organization depicted below ( click to enlarge ).","title":"Module Naming Schemes"},{"location":"environment/modules/#ulhpc-toolchains-and-software-set-versioning","text":"We offer a YEARLY release of the ULHPC Software Set based on Easybuid release of toolchains -- see Component versions ( fixed per release ) in the foss and intel toolchains. However , count at least 6 months of validation/import after EB release before ULHPC release An overview of the currently available component versions is depicted below: Name Type 2019b ( legacy ) 2020a 2020b ( prod ) 2021a 2021b ( devel ) GCCCore compiler 8.3.0 9.3.0 10.2.0 10.3.0 11.2.0 foss toolchain 2019b 2020a 2020b 2021a 2021b intel toolchain 2019b 2020a 2020b 2021a 2021b binutils 2.32 2.34 2.35 2.36 2.37 Python 3.7.4 (and 2.7.16) 3.8.2 (and 2.7.18) 3.8.6 3.9.2 3.9.6 LLVM compiler 9.0.1 10.0.1 11.0.0 11.1.0 12.0.1 OpenMPI MPI 3.1.4 4.0.3 4.0.5 4.1.1 4.1.2 Once on a node, the current version of the ULHPC Software Set in production is stored in $RESIF_VERSION_PROD . You can use the variables $MODULEPATH_{LEGACY,PROD,DEVEL} to access or set the MODULEPATH command with the appropriate value. Yet we have define utility scripts to facilitate your quick reset of the module environment, i.e., resif-load-swset-{legacy,prod,devel} and resif-reset-swset For instance, if you want to use the legacy software set, proceed as follows in your launcher scripts: resif-load-swset-legacy # Eq. of export MODULEPATH=$MODULEPATH_LEGACY # [...] # Restore production settings resif-load-swset-prod # Eq. 
of export MODULEPATH=$MODULEPATH_PROD If on the contrary you want to test the (new) development software set, i.e., the devel version, stored in $RESIF_VERSION_DEVEL : resif-load-swset-devel # Eq. of export MODULEPATH=$MODULEPATH_DEVEL # [...] # Restore production settings resif-reset-swset # As resif-load-swset-prod (iris only) Skylake Optimized builds Skylake optimized build can be loaded on regular nodes using resif-load-swset-skylake # Eq. of export MODULEPATH=$MODULEPATH_PROD_SKYLAKE You MUST obviously be on a Skylake node ( sbatch -C skylake [...] ) to take benefit from it. Note that this action is not required on GPU nodes. GPU Optimized builds vs. CPU software set on GPU nodes On GPU nodes, be aware that the default MODULEPATH holds two directories: GPU Optimized builds ( i.e. typically against the {foss,intel}cuda toolchains) stored under /opt/apps/resif///gpu/modules/all CPU Optimized builds (ex: skylake on Iris )) stored under /opt/apps/resif///skylake/modules/all You may want to exclude CPU builds to ensure you take the most out of the GPU accelerators. In that case, you may want to run: # /!\\ ADAPT accordingly module unuse /opt/apps/resif/ ${ ULHPC_CLUSTER } / ${ RESIF_VERSION_PROD } /skylake/modules/all","title":"ULHPC Toolchains and Software Set Versioning"},{"location":"environment/modules/#using-easybuild-to-create-custom-modules","text":"Just like we do, you probably want to use Easybuild to complete the existing software set with your own modules and software builds. See Building Custom (or missing) software documentation for more details.","title":"Using Easybuild to Create Custom Modules"},{"location":"environment/modules/#creating-a-custom-module-environment","text":"You can modify your environment so that certain modules are loaded whenever you log in. Use module save [] and module restore [] for that purpose -- see Lmod documentation on User collections You can also create and install your own modules for your convenience or for sharing software among collaborators. See the modulefile documentation for details of the required format and available commands. These custom modulefiles can be made visible to the module command by module use /path/to/the/custom/modulefiles Warning Make sure the UNIX file permissions grant access to all users who want to use the software. Do not give write permissions to your home directory to anyone else. Note The module use command adds new directories before other module search paths (defined as $MODULEPATH ), so modules defined in a custom directory will have precedence if there are other modules with the same name in the module search paths. If you prefer to have the new directory added at the end of $MODULEPATH , use module use -a instead of module use .","title":"Creating a Custom Module Environment"},{"location":"environment/modules/#module-faq","text":"Is there an environment variable that captures loaded modules? Yes, active modules can be retrieved via $LOADEDMODULES , this environment variable is automatically changed to reflect active loaded modules that is reflected via module list . If you want to access modulefile path for loaded modules you can retrieve via $_LM_FILES","title":"Module FAQ"},{"location":"environment/workflow/","text":"ULHPC Workflow \u00b6 Your typical journey on the ULHPC facility is illustrated in the below figure. 
Typical workflow on UL HPC resources Your daily interaction with the ULHPC facility includes the following actions: Preliminary setup Connect to the access/login servers This can be done either by ssh ( recommended ) or via the ULHPC OOD portal ( advanced users ) at this point, you probably want to create (or reattach to) a screen or tmux session Synchronize your code and/or transfer your input data using rsync/svn/git typically recall that the different storage filesystems are shared (via a high-speed interconnect network ) among the computational resources of the ULHPC facilities. In particular, it is sufficient to exchange data with the access servers to make them available on the clusters Reserve a few interactive resources with salloc -p interactive [...] recall that the module command (used to load the ULHPC User software ) is only available on the compute nodes ( eventually ) build your program, typically using gcc/icc/mpicc/nvcc... Test your workflow / HPC analysis on a small-size problem ( srun/python/sh... ) Prepare a launcher script .{sh|py} Then you can proceed with your Real Experiments : Reserve passive resources : sbatch [...] Grab the results and (eventually) transfer back your output results using rsync/svn/git","title":"Workflow"},{"location":"environment/workflow/#ulhpc-workflow","text":"Your typical journey on the ULHPC facility is illustrated in the below figure. Typical workflow on UL HPC resources Your daily interaction with the ULHPC facility includes the following actions: Preliminary setup Connect to the access/login servers This can be done either by ssh ( recommended ) or via the ULHPC OOD portal ( advanced users ) at this point, you probably want to create (or reattach to) a screen or tmux session Synchronize your code and/or transfer your input data using rsync/svn/git typically recall that the different storage filesystems are shared (via a high-speed interconnect network ) among the computational resources of the ULHPC facilities. In particular, it is sufficient to exchange data with the access servers to make them available on the clusters Reserve a few interactive resources with salloc -p interactive [...] recall that the module command (used to load the ULHPC User software ) is only available on the compute nodes ( eventually ) build your program, typically using gcc/icc/mpicc/nvcc... Test your workflow / HPC analysis on a small-size problem ( srun/python/sh... ) Prepare a launcher script .{sh|py} Then you can proceed with your Real Experiments : Reserve passive resources : sbatch [...] Grab the results and (eventually) transfer back your output results using rsync/svn/git","title":"ULHPC Workflow"},{"location":"filesystems/","text":"Your journey on the ULHPC facility is illustrated in the below figure. In particular, once connected, you have access to several different File Systems (FS) which are configured for different purposes. What is a File System (FS) ? A File System (FS) is simply the logical way to store, organize & access data. There are different types of file systems available nowadays: (local) Disk FS you find on laptops and servers: FAT32 , NTFS , HFS+ , ext4 , {x,z,btr}fs ... Networked FS , such as NFS , CIFS / SMB , AFP , allowing access to a remote storage system as a NAS (Network Attached Storage) Parallel and Distributed FS : such as SpectrumScale/GPFS or Lustre .
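From within a job, a quick (hedged) way to see which of these filesystems actually backs each of your directories is to query them directly; the exact output differs per cluster:

```bash
df -hT $HOME /work/projects $SCRATCH   # shows the filesystem type (gpfs, lustre, ...) and current usage
lfs df -h                              # Lustre-specific per-MDT/OST view of the scratch filesystem
```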
Those are typical FileSystems you meet on HPC or HTC (High Throughput Computing) facility as they exhibit several unique capabilities: data is spread across multiple storage nodes for redundancy and performance. the global capacity AND the global performance levels are increased with every systems added to the storage infrastructure. Storage Systems Overview \u00b6 Current statistics of the available filesystems are depicted on the side figure. The ULHPC facility relies on 2 types of Distributed/Parallel File Systems to deliver high-performant Data storage at a BigData scale: IBM Spectrum Scale , formerly known as the General Parallel File System ( GPFS ), a global high -performance clustered file system hosting your $HOME and projects data. Lustre , an open-source, parallel file system dedicated to large, local, parallel scratch storage. In addition, the following file-systems complete the ULHPC storage infrastructure: OneFS, A global low -performance Dell/EMC Isilon solution used to host project data, and serve for backup and archival purposes The ULHPC team relies on other filesystems within its internal backup infrastructure, such as xfs , a high-performant disk file-system deployed on storage/backup servers. Summary \u00b6 Several File Systems co-exist on the ULHPC facility and are configured for different purposes. Each servers and computational resources has access to at least three different file systems with different levels of performance, permanence and available space summarized below Directory Env. file system backup /home/users/ $HOME GPFS/Spectrumscale no /work/projects/ - GPFS/Spectrumscale yes (partial, backup subdirectory) /scratch/users/ $SCRATCH Lustre no /mnt/isilon/projects/ - OneFS yes (live sync and snapshots) ULHPC backup policies Quotas ULHPC GPFS/SpectrumScale and Lustre filesystems UL Isilon/OneFS filesystems","title":"Overview"},{"location":"filesystems/#storage-systems-overview","text":"Current statistics of the available filesystems are depicted on the side figure. The ULHPC facility relies on 2 types of Distributed/Parallel File Systems to deliver high-performant Data storage at a BigData scale: IBM Spectrum Scale , formerly known as the General Parallel File System ( GPFS ), a global high -performance clustered file system hosting your $HOME and projects data. Lustre , an open-source, parallel file system dedicated to large, local, parallel scratch storage. In addition, the following file-systems complete the ULHPC storage infrastructure: OneFS, A global low -performance Dell/EMC Isilon solution used to host project data, and serve for backup and archival purposes The ULHPC team relies on other filesystems within its internal backup infrastructure, such as xfs , a high-performant disk file-system deployed on storage/backup servers.","title":"Storage Systems Overview"},{"location":"filesystems/#summary","text":"Several File Systems co-exist on the ULHPC facility and are configured for different purposes. Each servers and computational resources has access to at least three different file systems with different levels of performance, permanence and available space summarized below Directory Env. 
file system backup /home/users/ $HOME GPFS/Spectrumscale no /work/projects/ - GPFS/Spectrumscale yes (partial, backup subdirectory) /scratch/users/ $SCRATCH Lustre no /mnt/isilon/projects/ - OneFS yes (live sync and snapshots) ULHPC backup policies Quotas ULHPC GPFS/SpectrumScale and Lustre filesystems UL Isilon/OneFS filesystems","title":"Summary"},{"location":"filesystems/gpfs/","text":"GPFS/SpectrumScale ( $HOME , project) \u00b6 Introduction \u00b6 IBM Spectrum Scale , formerly known as the General Parallel File System (GPFS), is global high -performance clustered file system available on all ULHPC computational systems through a DDN GridScaler/GS7K system. It allows sharing homedirs and project data between users, systems, and eventually (i.e. if needed) with the \"outside world\". In terms of raw storage capacities, it represents more than 4PB . Live status Global Home directory $HOME \u00b6 Home directories provide a convenient means for a user to have access to files such as dotfiles, source files, input files, configuration files regardless of the platform. Refer to your home directory using the environment variable $HOME whenever possible. The absolute path may change, but the value of $HOME will always be correct. $HOME quotas and backup policies See quotas for detailed information about inode, space quotas, and file system purge policies. Your HOME is backuped weekly, according to the policy detailed in the ULHPC backup policies . Global Project directory $PROJECTHOME=/work/projects/ \u00b6 Project directories are intended for sharing data within a group of researchers, under /work/projects/ Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible. Global Project quotas and backup policies See quotas for detailed information about inode, space quotas, and file system purge policies. Your projects backup directories are backuped weekly, according to the policy detailed in the ULHPC backup policies . Access rights to project directory: Quota for clusterusers group in project directories is 0 !!! When a project is created, a group of the same name ( ) is also created and researchers allowed to collaborate on the project are made members of this group,which grant them access to the project directory. Be aware that your default group as a user is clusterusers which has ( on purpose ) a quota in project directories set to 0 . You thus need to ensure you always write data in your project directory using the group (instead of yoru default one.). This can be achieved by ensuring the setgid bit is set on all folders in the project directories: chmod g+s [...] When using rsync to transfer file toward the project directory /work/projects/ as destination, be aware that rsync will not use the correct permissions when copying files into your project directory. As indicated in the Data transfer section, you also need to: give new files the destination-default permissions with --no-p ( --no-perms ), and use the default group of the destination dir with --no-g ( --no-group ) (eventually) instruct rsync to preserve whatever executable permissions existed on the source file and aren't masked at the destination using --chmod=ug=rwX Your full rsync command becomes (adapt accordingly): rsync -avz {--update | --delete} --no-p --no-g [--chmod=ug=rwX] /work/projects//[...] 
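As a minimal sketch of preparing a shared sub-directory so that new files inherit the project group (the project name below is a placeholder to adapt; this is not an official procedure):

```bash
# /!\ ADAPT the project name accordingly
PROJECT=myproject
cd /work/projects/$PROJECT
mkdir -p shared_data
chgrp -R $PROJECT shared_data                   # the project group (not clusterusers) must own the tree
find shared_data -type d -exec chmod g+s {} +   # setgid on every folder: new files inherit the group
chmod -R g+rwX shared_data                      # grant group members read/write access
```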
For the same reason detailed above, in case you are using a build command or more generally any command meant to write data in your project directory /work/projects/ , you want to use the sg as follows: # /!\\ ADAPT accordingly sg -c \" [...]\" This is particularly important if you are building dedicated software with Easybuild for members of the project - you typically want to do it as follows: # /!\\ ADAPT accordingly sg -c \"eb [...] -r --rebuild -D\" # Dry-run - enforce using the '' group sg -c \"eb [...] -r --rebuild\" # Dry-run - enforce using the '' group Storage System Implementation \u00b6 The way the ULHPC GPFS file system is implemented is depicted on the below figure. It is composed of: Two NAS protocol servers (see below One DDN GridScaler 7K system acquired as part of RFP 160019 deployed in 2017 and later extended, composed of 1x DDN GS7K enclosure (~11GB/s IO throughput) 4x SS8460 disk expansion enclosures 350x HGST disks (7.2K RPM HDD, 6TB, Self Encrypted Disks (SED) configured over 35 RAID6 (8+2) pools 28x Sandisk SSD 400GB disks Another DDN GridScaler 7K system acquired as part of RFP 190027 deployed in 2020 as part of Aion and later extended. 1x DDN GS7990-EDR embedded storage 4x SS9012 disk expansion enclosures 360x NL-SAS HDDs (6TB, Self Encrypted Disks (SED)) configured over 36 RAID6 (8+2) pools 10x 3.2TB SED SAS-SSD for metadata. There is no single point of failure within the storage solution and the setup is fully redundant. The data paths from the storage to the NSD servers are redundant and providing one link from each of the servers to each controller in the storage unit. There are redundant power supplies, redundant fans, redundant storage controller with mirrored cache and battery backup to secure the cache data when power is lost completely. The data paths to the enclosures are redundant so that links can fail, and the system will still be fully operational. Filesystem Performance \u00b6 The performance of the GS7990 storage system via native GPFS and RDMA based data transport for the HPC filesystem is expected to be in the range of at least 20GB/s for large sequential read and writes, using a filesystem block size of 16MB and scatter or cluster allocation. Performance measurement by IOR , a synthetic benchmark for testing the performance of distributed filesystems is planned upon finalization of the installation. The IOR benchmark IOR is a parallel IO benchmark that can be used to test the performance of parallel storage systems using various interfaces and access patterns. It supports a variety of different APIs to simulate IO load and is nowadays considered as a reference Parallel filesystem I/O benchmark. It recently embedded another well-known benchmark suite called MDTest, a synthetic MPI parallel benchmark for testing the metadata performance of filesystems (such as Lustre or Spectrum Scale GPFS) where each thread is operating its own working set (to create directory/files, read files, delete files or directory tree). In complement to IOR, the IO-500 benchmarking suite (see also the white paper \" Establishing the IO-500 Benchmark \") will be performed. IO-500 aims at capturing user-experienced performance with measured performance representative for: applications with well optimised I/O patterns; applications with random-like workloads; workloads involving metadata small/objects. 
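For reference, a basic IOR run against the scratch filesystem could be submitted as in the sketch below; the module name, node counts and IOR flags are illustrative only and should be adapted:

```bash
#!/bin/bash -l
#SBATCH -J ior-bench
#SBATCH -N 4
#SBATCH --ntasks-per-node=8
#SBATCH -p batch
#SBATCH --time=00:30:00

module load tools/IOR        # hypothetical module name -- check 'module spider IOR' first
# File-per-process write/read of 1 GiB per task in 16 MiB transfers on the Lustre scratch
srun ior -a POSIX -t 16m -b 1g -F -e -o $SCRATCH/ior_testfile
```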
NAS/NFS Servers \u00b6 Two NAS protocol servers are available, each connected via 2 x IB EDR links to the IB fabric and exporting the filesystem via NFS and SMB over 2 x 10GE links into the Ethernet network.","title":"GPFS/SpectrumScale"},{"location":"filesystems/gpfs/#gpfsspectrumscale-home-project","text":"","title":"GPFS/SpectrumScale ($HOME, project)"},{"location":"filesystems/gpfs/#introduction","text":"IBM Spectrum Scale , formerly known as the General Parallel File System (GPFS), is global high -performance clustered file system available on all ULHPC computational systems through a DDN GridScaler/GS7K system. It allows sharing homedirs and project data between users, systems, and eventually (i.e. if needed) with the \"outside world\". In terms of raw storage capacities, it represents more than 4PB . Live status","title":"Introduction"},{"location":"filesystems/gpfs/#global-home-directory-home","text":"Home directories provide a convenient means for a user to have access to files such as dotfiles, source files, input files, configuration files regardless of the platform. Refer to your home directory using the environment variable $HOME whenever possible. The absolute path may change, but the value of $HOME will always be correct. $HOME quotas and backup policies See quotas for detailed information about inode, space quotas, and file system purge policies. Your HOME is backuped weekly, according to the policy detailed in the ULHPC backup policies .","title":"Global Home directory $HOME"},{"location":"filesystems/gpfs/#global-project-directory-projecthomeworkprojects","text":"Project directories are intended for sharing data within a group of researchers, under /work/projects/ Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible. Global Project quotas and backup policies See quotas for detailed information about inode, space quotas, and file system purge policies. Your projects backup directories are backuped weekly, according to the policy detailed in the ULHPC backup policies . Access rights to project directory: Quota for clusterusers group in project directories is 0 !!! When a project is created, a group of the same name ( ) is also created and researchers allowed to collaborate on the project are made members of this group,which grant them access to the project directory. Be aware that your default group as a user is clusterusers which has ( on purpose ) a quota in project directories set to 0 . You thus need to ensure you always write data in your project directory using the group (instead of yoru default one.). This can be achieved by ensuring the setgid bit is set on all folders in the project directories: chmod g+s [...] When using rsync to transfer file toward the project directory /work/projects/ as destination, be aware that rsync will not use the correct permissions when copying files into your project directory. As indicated in the Data transfer section, you also need to: give new files the destination-default permissions with --no-p ( --no-perms ), and use the default group of the destination dir with --no-g ( --no-group ) (eventually) instruct rsync to preserve whatever executable permissions existed on the source file and aren't masked at the destination using --chmod=ug=rwX Your full rsync command becomes (adapt accordingly): rsync -avz {--update | --delete} --no-p --no-g [--chmod=ug=rwX] /work/projects//[...] 
For the same reason detailed above, in case you are using a build command or more generally any command meant to write data in your project directory /work/projects/ , you want to use the sg as follows: # /!\\ ADAPT accordingly sg -c \" [...]\" This is particularly important if you are building dedicated software with Easybuild for members of the project - you typically want to do it as follows: # /!\\ ADAPT accordingly sg -c \"eb [...] -r --rebuild -D\" # Dry-run - enforce using the '' group sg -c \"eb [...] -r --rebuild\" # Dry-run - enforce using the '' group","title":"Global Project directory $PROJECTHOME=/work/projects/"},{"location":"filesystems/gpfs/#storage-system-implementation","text":"The way the ULHPC GPFS file system is implemented is depicted on the below figure. It is composed of: Two NAS protocol servers (see below One DDN GridScaler 7K system acquired as part of RFP 160019 deployed in 2017 and later extended, composed of 1x DDN GS7K enclosure (~11GB/s IO throughput) 4x SS8460 disk expansion enclosures 350x HGST disks (7.2K RPM HDD, 6TB, Self Encrypted Disks (SED) configured over 35 RAID6 (8+2) pools 28x Sandisk SSD 400GB disks Another DDN GridScaler 7K system acquired as part of RFP 190027 deployed in 2020 as part of Aion and later extended. 1x DDN GS7990-EDR embedded storage 4x SS9012 disk expansion enclosures 360x NL-SAS HDDs (6TB, Self Encrypted Disks (SED)) configured over 36 RAID6 (8+2) pools 10x 3.2TB SED SAS-SSD for metadata. There is no single point of failure within the storage solution and the setup is fully redundant. The data paths from the storage to the NSD servers are redundant and providing one link from each of the servers to each controller in the storage unit. There are redundant power supplies, redundant fans, redundant storage controller with mirrored cache and battery backup to secure the cache data when power is lost completely. The data paths to the enclosures are redundant so that links can fail, and the system will still be fully operational.","title":"Storage System Implementation"},{"location":"filesystems/gpfs/#filesystem-performance","text":"The performance of the GS7990 storage system via native GPFS and RDMA based data transport for the HPC filesystem is expected to be in the range of at least 20GB/s for large sequential read and writes, using a filesystem block size of 16MB and scatter or cluster allocation. Performance measurement by IOR , a synthetic benchmark for testing the performance of distributed filesystems is planned upon finalization of the installation. The IOR benchmark IOR is a parallel IO benchmark that can be used to test the performance of parallel storage systems using various interfaces and access patterns. It supports a variety of different APIs to simulate IO load and is nowadays considered as a reference Parallel filesystem I/O benchmark. It recently embedded another well-known benchmark suite called MDTest, a synthetic MPI parallel benchmark for testing the metadata performance of filesystems (such as Lustre or Spectrum Scale GPFS) where each thread is operating its own working set (to create directory/files, read files, delete files or directory tree). In complement to IOR, the IO-500 benchmarking suite (see also the white paper \" Establishing the IO-500 Benchmark \") will be performed. 
IO-500 aims at capturing user-experienced performance with measured performance representative for: applications with well optimised I/O patterns; applications with random-like workloads; workloads involving metadata small/objects.","title":"Filesystem Performance"},{"location":"filesystems/gpfs/#nasnfs-servers","text":"Two NAS protocol servers are available, each connected via 2 x IB EDR links to the IB fabric and exporting the filesystem via NFS and SMB over 2 x 10GE links into the Ethernet network.","title":"NAS/NFS Servers"},{"location":"filesystems/home/","text":"Global Home directory $HOME \u00b6 Home directories provide a convenient means for a user to have access to files such as dotfiles, source files, input files, configuration files regardless of the platform. Refer to your home directory using the environment variable $HOME whenever possible. The absolute path may change, but the value of $HOME will always be correct.","title":"Home"},{"location":"filesystems/home/#global-home-directory-home","text":"Home directories provide a convenient means for a user to have access to files such as dotfiles, source files, input files, configuration files regardless of the platform. Refer to your home directory using the environment variable $HOME whenever possible. The absolute path may change, but the value of $HOME will always be correct.","title":"Global Home directory $HOME"},{"location":"filesystems/isilon/","text":"Dell EMC Isilon (Archives and cold project data) \u00b6 OneFS, A global low -performance Dell/EMC Isilon solution is used to host project data, and serve for backup and archival purposes. You will find them mounted under /mnt/isilon/projects . In 2014, the IT Department of the University , the UL HPC and the LCSB join their forces (and their funding) to acquire a scalable and modular NAS solution able to sustain the need for an internal big data storage, i.e. provides space for centralized data and backups of all devices used by the UL staff and all research-related data, including the one proceed on the UL HPC platform. At the end of a public call for tender released in 2014, the EMC Isilon system was finally selected with an effective deployment in 2015. It is physically hosted in the new CDC (Centre de Calcul) server room in the Maison du Savoir . Composed by a large number of disk enclosures featuring the OneFS File System, it currently offers an effective capacity of 3.360 PB. A secondary Isilon cluster, acquired in 2020 and deployed in 2021 is duplicating this setup in a redundant way.","title":"OneFS Isilon"},{"location":"filesystems/isilon/#dell-emc-isilon-archives-and-cold-project-data","text":"OneFS, A global low -performance Dell/EMC Isilon solution is used to host project data, and serve for backup and archival purposes. You will find them mounted under /mnt/isilon/projects . In 2014, the IT Department of the University , the UL HPC and the LCSB join their forces (and their funding) to acquire a scalable and modular NAS solution able to sustain the need for an internal big data storage, i.e. provides space for centralized data and backups of all devices used by the UL staff and all research-related data, including the one proceed on the UL HPC platform. At the end of a public call for tender released in 2014, the EMC Isilon system was finally selected with an effective deployment in 2015. It is physically hosted in the new CDC (Centre de Calcul) server room in the Maison du Savoir . 
Composed by a large number of disk enclosures featuring the OneFS File System, it currently offers an effective capacity of 3.360 PB. A secondary Isilon cluster, acquired in 2020 and deployed in 2021 is duplicating this setup in a redundant way.","title":"Dell EMC Isilon (Archives and cold project data)"},{"location":"filesystems/lfs/","text":"Understanding Lustre I/O \u00b6 When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the file's stripes. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval. If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency, so that all clients see consistent results. Discover MDTs and OSTs \u00b6 ULHPC's Lustre file systems look and act like a single logical storage, but a large files on Lustre can be divided into multiple chunks ( stripes ) and stored across over OSTs. This technique is called file striping . The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing. It is thus important to know the number of OST on your running system. As mentioned in the Lustre implementation section , the ULHPC Lustre infrastructure is composed of 2 MDS servers (2 MDT), 2 OSS servers and 16 OSTs . You can list the MDTs and OSTs with the command lfs df : $ cds # OR: cd $SCRATCH $ lfs df -h UUID bytes Used Available Use% Mounted on lscratch-MDT0000_UUID 3 .2T 15 .4G 3 .1T 1 % /mnt/lscratch [ MDT:0 ] lscratch-MDT0001_UUID 3 .2T 3 .8G 3 .2T 1 % /mnt/lscratch [ MDT:1 ] lscratch-OST0000_UUID 57 .4T 16 .7T 40 .2T 30 % /mnt/lscratch [ OST:0 ] lscratch-OST0001_UUID 57 .4T 18 .8T 38 .0T 34 % /mnt/lscratch [ OST:1 ] lscratch-OST0002_UUID 57 .4T 17 .6T 39 .3T 31 % /mnt/lscratch [ OST:2 ] lscratch-OST0003_UUID 57 .4T 16 .6T 40 .3T 30 % /mnt/lscratch [ OST:3 ] lscratch-OST0004_UUID 57 .4T 16 .5T 40 .3T 30 % /mnt/lscratch [ OST:4 ] lscratch-OST0005_UUID 57 .4T 16 .5T 40 .3T 30 % /mnt/lscratch [ OST:5 ] lscratch-OST0006_UUID 57 .4T 16 .3T 40 .6T 29 % /mnt/lscratch [ OST:6 ] lscratch-OST0007_UUID 57 .4T 17 .0T 39 .9T 30 % /mnt/lscratch [ OST:7 ] lscratch-OST0008_UUID 57 .4T 16 .8T 40 .0T 30 % /mnt/lscratch [ OST:8 ] lscratch-OST0009_UUID 57 .4T 13 .2T 43 .6T 24 % /mnt/lscratch [ OST:9 ] lscratch-OST000a_UUID 57 .4T 13 .2T 43 .7T 24 % /mnt/lscratch [ OST:10 ] lscratch-OST000b_UUID 57 .4T 13 .3T 43 .6T 24 % /mnt/lscratch [ OST:11 ] lscratch-OST000c_UUID 57 .4T 14 .0T 42 .8T 25 % /mnt/lscratch [ OST:12 ] lscratch-OST000d_UUID 57 .4T 13 .9T 43 .0T 25 % /mnt/lscratch [ OST:13 ] lscratch-OST000e_UUID 57 .4T 14 .4T 42 .5T 26 % /mnt/lscratch [ OST:14 ] lscratch-OST000f_UUID 57 .4T 12 .9T 43 .9T 23 % /mnt/lscratch [ OST:15 ] filesystem_summary: 919 .0T 247 .8T 662 .0T 28 % /mnt/lscratch File striping \u00b6 File striping permits to increase the throughput of operations by taking advantage of several OSSs and OSTs, by allowing one or more clients to read/write different parts of the same file in parallel. On the other hand, striping small files can decrease the performance. 
File striping allows file sizes larger than a single OST, large files MUST be striped over several OSTs in order to avoid filling a single OST and harming the performance for all users. There is default stripe configuration for ULHPC Lustre filesystems (see below). However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance. You can tune file striping using 3 properties: Property Effect Default Accepted values Advised values stripe_size Size of the file stripes in bytes 1048576 (1m) > 0 > 0 stripe_count Number of OST to stripe across 1 -1 (use all the OSTs), 1-16 -1 stripe_offset Index of the OST where the first stripe of files will be written -1 (automatic) -1 , 0-15 -1 Note : With regards stripe_offset (the index of the OST where the first stripe is to be placed); the default is -1 which results in random selection and using a non-default value is NOT recommended . Note Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance. Use the lfs getstripe command for getting the stripe parameters. Use lfs setstripe for setting the stripe parameters to get optimal I/O performance. The correct stripe setting depends on your needs and file access patterns. Newly created files and directories will inherit these parameters from their parent directory. However, the parameters cannot be changed on an existing file. $ lfs getstripe dir | filename $ lfs setstripe -s -c -o dir | filename usage: lfs setstripe -d (to delete default striping from an existing directory) usage: lfs setstripe [--stripe-count|-c ] [--stripe-index|-i ] [--stripe-size|-S ] Example: $ lfs getstripe $SCRATCH /scratch/users// stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 [...] $ lfs setstripe -c -1 $SCRATCH $ lfs getstripe $SCRATCH /scratch/users// stripe_count: -1 stripe_size: 1048576 pattern: raid0 stripe_offset: -1 In this example, we view the current stripe setting of the $SCRATCH directory. The stripe count is changed to all OSTs and verified. All files written to this directory will be striped over the maximum number of OSTs (16). Use lfs check osts to see the number and status of active OSTs for each filesystem on the cluster. Learn more by reading the man page: $ lfs check osts $ man lfs File stripping Examples \u00b6 Set the striping parameters for a directory containing only small files (< 20MB) $ cd $SCRATCH $ mkdir test_small_files $ lfs getstripe test_small_files test_small_files stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 pool: $ lfs setstripe --stripe-size 1M --stripe-count 1 test_small_files $ lfs getstripe test_small_files test_small_files stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 Set the striping parameters for a directory containing only large files between 100MB and 1GB $ mkdir test_large_files $ lfs setstripe --stripe-size 2M --stripe-count 2 test_large_files $ lfs getstripe test_large_files test_large_files stripe_count: 2 stripe_size: 2097152 stripe_offset: -1 Set the striping parameters for a directory containing files larger than 1GB $ mkdir test_larger_files $ lfs setstripe --stripe-size 4M --stripe-count 6 test_larger_files $ lfs getstripe test_larger_files test_larger_files stripe_count: 6 stripe_size: 4194304 stripe_offset: -1 Big Data files management on Lustre Using a large stripe size can improve performance when accessing very large files Large stripe size allows each client to have exclusive access to its own part of a file. 
However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file. Note that these are simple examples, the optimal settings defer depending on the application (concurrent threads accessing the same file, size of each write operation, etc). Lustre Best practices \u00b6 Parallel I/O on the same file Increase the stripe_count for parallel I/O to the same file. When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs to which the file will be written. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file. Another good practice is to make the stripe count be an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes. For more details, you can read the following external resources: Reference Documentation: Managing File Layout (Striping) and Free Space Lustre Wiki Lustre Best Practices - Nasa HECC I/O and Lustre Usage - NISC","title":"Scratch Data Management"},{"location":"filesystems/lfs/#understanding-lustre-io","text":"When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the file's stripes. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval. If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency, so that all clients see consistent results.","title":"Understanding Lustre I/O"},{"location":"filesystems/lfs/#discover-mdts-and-osts","text":"ULHPC's Lustre file systems look and act like a single logical storage, but a large files on Lustre can be divided into multiple chunks ( stripes ) and stored across over OSTs. This technique is called file striping . The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing. It is thus important to know the number of OST on your running system. As mentioned in the Lustre implementation section , the ULHPC Lustre infrastructure is composed of 2 MDS servers (2 MDT), 2 OSS servers and 16 OSTs . 
You can list the MDTs and OSTs with the command lfs df : $ cds # OR: cd $SCRATCH $ lfs df -h UUID bytes Used Available Use% Mounted on lscratch-MDT0000_UUID 3 .2T 15 .4G 3 .1T 1 % /mnt/lscratch [ MDT:0 ] lscratch-MDT0001_UUID 3 .2T 3 .8G 3 .2T 1 % /mnt/lscratch [ MDT:1 ] lscratch-OST0000_UUID 57 .4T 16 .7T 40 .2T 30 % /mnt/lscratch [ OST:0 ] lscratch-OST0001_UUID 57 .4T 18 .8T 38 .0T 34 % /mnt/lscratch [ OST:1 ] lscratch-OST0002_UUID 57 .4T 17 .6T 39 .3T 31 % /mnt/lscratch [ OST:2 ] lscratch-OST0003_UUID 57 .4T 16 .6T 40 .3T 30 % /mnt/lscratch [ OST:3 ] lscratch-OST0004_UUID 57 .4T 16 .5T 40 .3T 30 % /mnt/lscratch [ OST:4 ] lscratch-OST0005_UUID 57 .4T 16 .5T 40 .3T 30 % /mnt/lscratch [ OST:5 ] lscratch-OST0006_UUID 57 .4T 16 .3T 40 .6T 29 % /mnt/lscratch [ OST:6 ] lscratch-OST0007_UUID 57 .4T 17 .0T 39 .9T 30 % /mnt/lscratch [ OST:7 ] lscratch-OST0008_UUID 57 .4T 16 .8T 40 .0T 30 % /mnt/lscratch [ OST:8 ] lscratch-OST0009_UUID 57 .4T 13 .2T 43 .6T 24 % /mnt/lscratch [ OST:9 ] lscratch-OST000a_UUID 57 .4T 13 .2T 43 .7T 24 % /mnt/lscratch [ OST:10 ] lscratch-OST000b_UUID 57 .4T 13 .3T 43 .6T 24 % /mnt/lscratch [ OST:11 ] lscratch-OST000c_UUID 57 .4T 14 .0T 42 .8T 25 % /mnt/lscratch [ OST:12 ] lscratch-OST000d_UUID 57 .4T 13 .9T 43 .0T 25 % /mnt/lscratch [ OST:13 ] lscratch-OST000e_UUID 57 .4T 14 .4T 42 .5T 26 % /mnt/lscratch [ OST:14 ] lscratch-OST000f_UUID 57 .4T 12 .9T 43 .9T 23 % /mnt/lscratch [ OST:15 ] filesystem_summary: 919 .0T 247 .8T 662 .0T 28 % /mnt/lscratch","title":"Discover MDTs and OSTs"},{"location":"filesystems/lfs/#file-striping","text":"File striping permits to increase the throughput of operations by taking advantage of several OSSs and OSTs, by allowing one or more clients to read/write different parts of the same file in parallel. On the other hand, striping small files can decrease the performance. File striping allows file sizes larger than a single OST, large files MUST be striped over several OSTs in order to avoid filling a single OST and harming the performance for all users. There is default stripe configuration for ULHPC Lustre filesystems (see below). However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance. You can tune file striping using 3 properties: Property Effect Default Accepted values Advised values stripe_size Size of the file stripes in bytes 1048576 (1m) > 0 > 0 stripe_count Number of OST to stripe across 1 -1 (use all the OSTs), 1-16 -1 stripe_offset Index of the OST where the first stripe of files will be written -1 (automatic) -1 , 0-15 -1 Note : With regards stripe_offset (the index of the OST where the first stripe is to be placed); the default is -1 which results in random selection and using a non-default value is NOT recommended . Note Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance. Use the lfs getstripe command for getting the stripe parameters. Use lfs setstripe for setting the stripe parameters to get optimal I/O performance. The correct stripe setting depends on your needs and file access patterns. Newly created files and directories will inherit these parameters from their parent directory. However, the parameters cannot be changed on an existing file. 
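Note that while the layout of an existing file cannot be changed with lfs setstripe, an existing file can still be re-striped by rewriting it. The following is a minimal sketch under the assumption that your Lustre client provides the lfs migrate sub-command (otherwise, copy the file into a directory that already has the desired default layout); the file name is hypothetical.
$ lfs getstripe $SCRATCH/bigfile.dat      # inspect the current layout (hypothetical file name)
$ lfs migrate -c 8 $SCRATCH/bigfile.dat   # rewrite the file over 8 OSTs, if lfs migrate is available
# Fallback: create a directory with the desired default layout and copy the file into it
$ mkdir $SCRATCH/restriped && lfs setstripe -c 8 $SCRATCH/restriped
$ cp $SCRATCH/bigfile.dat $SCRATCH/restriped/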
$ lfs getstripe dir | filename $ lfs setstripe -s -c -o dir | filename usage: lfs setstripe -d (to delete default striping from an existing directory) usage: lfs setstripe [--stripe-count|-c ] [--stripe-index|-i ] [--stripe-size|-S ] Example: $ lfs getstripe $SCRATCH /scratch/users// stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 [...] $ lfs setstripe -c -1 $SCRATCH $ lfs getstripe $SCRATCH /scratch/users// stripe_count: -1 stripe_size: 1048576 pattern: raid0 stripe_offset: -1 In this example, we view the current stripe setting of the $SCRATCH directory. The stripe count is changed to all OSTs and verified. All files written to this directory will be striped over the maximum number of OSTs (16). Use lfs check osts to see the number and status of active OSTs for each filesystem on the cluster. Learn more by reading the man page: $ lfs check osts $ man lfs","title":"File striping"},{"location":"filesystems/lfs/#file-stripping-examples","text":"Set the striping parameters for a directory containing only small files (< 20MB) $ cd $SCRATCH $ mkdir test_small_files $ lfs getstripe test_small_files test_small_files stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 pool: $ lfs setstripe --stripe-size 1M --stripe-count 1 test_small_files $ lfs getstripe test_small_files test_small_files stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 Set the striping parameters for a directory containing only large files between 100MB and 1GB $ mkdir test_large_files $ lfs setstripe --stripe-size 2M --stripe-count 2 test_large_files $ lfs getstripe test_large_files test_large_files stripe_count: 2 stripe_size: 2097152 stripe_offset: -1 Set the striping parameters for a directory containing files larger than 1GB $ mkdir test_larger_files $ lfs setstripe --stripe-size 4M --stripe-count 6 test_larger_files $ lfs getstripe test_larger_files test_larger_files stripe_count: 6 stripe_size: 4194304 stripe_offset: -1 Big Data files management on Lustre Using a large stripe size can improve performance when accessing very large files Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file. Note that these are simple examples, the optimal settings defer depending on the application (concurrent threads accessing the same file, size of each write operation, etc).","title":"File stripping Examples"},{"location":"filesystems/lfs/#lustre-best-practices","text":"Parallel I/O on the same file Increase the stripe_count for parallel I/O to the same file. When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs to which the file will be written. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file. Another good practice is to make the stripe count be an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. 
For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes. For more details, you can read the following external resources: Reference Documentation: Managing File Layout (Striping) and Free Space Lustre Wiki Lustre Best Practices - Nasa HECC I/O and Lustre Usage - NISC","title":"Lustre Best practices"},{"location":"filesystems/lustre/","text":"Lustre ( $SCRATCH ) \u00b6 Introduction \u00b6 The Lustre file system is an open-source, parallel file system that supports many requirements of leadership class HPC simulation environments. It is available as a global high -performance file system on all ULHPC computational systems through a DDN ExaScaler system. It is meant to host temporary scratch data within your jobs. In terms of raw storage capacities, it represents more than 1.6PB . Live status Global Scratch directory $SCRATCH \u00b6 The scratch area is a Lustre -based file system designed for high performance temporary storage of large files. It is thus intended to support large I/O for jobs that are being actively computed on the ULHPC systems. We recommend that you run your jobs, especially data intensive ones, from the ULHPC scratch file system. Refer to your scratch directory using the environment variable $SCRATCH whenever possible (which expands to /scratch/users/$(whoami) ). The scratch file system is shared via the Infiniband network of the ULHPC facility and is available from all nodes while being tuned for high performance. ULHPC $SCRATCH quotas and backup Extended ACLs are provided for sharing data with other users using fine-grained control. See quotas for detailed information about inode, space quotas, and file system policies. In particular, your SCRATCH directory is NOT backuped according to the policy detailed in the ULHPC backup policies . A short history of Lustre Lustre was initiated & funded by the U.S. Department of Energy Office of Science & National Nuclear Security Administration laboratories in mid 2000s. Developments continue through the Cluster File Systems (ClusterFS) company founded in 2001. Sun Microsystems acquired ClusterFS in 2007 with the intent to bring Lustre technologies to Sun's ZFS file system and the Solaris operating system. In 2010, Oracle bought Sun and began to manage and release Lustre, however the company was not known for HPC. In December 2010, Oracle announced that they would cease Lustre 2.x development and place Lustre 1.8 into maintenance-only support, creating uncertainty around the future development of the file system. Following this announcement, several new organizations sprang up to provide support and development in an open community development model, including Whamcloud , Open Scalable File Systems ( OpenSFS , a nonprofit organization promoting the Lustre file system to ensure Lustre remains vendor-neutral, open, and free), Xyratex or DDN. By the end of 2010, most Lustre developers had left Oracle. WhamCloud was bought by Intel in 2011 and Xyratex took over the Lustre trade mark, logo, related assets (support) from Oracle. In June 2018, the Lustre team and assets were acquired from Intel by DDN. DDN organized the new acquisition as an independent division, reviving the Whamcloud name for the new division. 
General Architecture \u00b6 A Lustre file system has three major functional units: One or more MetaData Servers (MDS) nodes (here two) that have one or more MetaData Target (MDT) devices per Lustre filesystem that stores namespace metadata, such as filenames, directories, access permissions, and file layout. The MDT data is stored in a local disk filesystem. However, unlike block-based distributed filesystems, such as GPFS/SpectrumScale and PanFS, where the metadata server controls all of the block allocation, the Lustre metadata server is only involved in pathname and permission checks, and is not involved in any file I/O operations, avoiding I/O scalability bottlenecks on the metadata server. One or more Object Storage Server (OSS) nodes that store file data on one or more Object Storage Target (OST) devices. The capacity of a Lustre file system is the sum of the capacities provided by the OSTs. OSSs do most of the work and thus require as much RAM as possible Rule of thumb: ~2 GB base memory + 1 GB / OST Failover configurations: ~2 GB / OST OSSs should have as much CPUs as possible, but it is not as much critical as on MDS Client(s) that access and use the data. Lustre presents all clients with a unified namespace for all of the files and data in the filesystem, using standard POSIX semantics, and allows concurrent and coherent read and write access to the files in the filesystem. Lustre general features and numbers Lustre brings a modern architecture within an Object based file system with the following features: Adaptable : supports wide range of networks and storage hardware Scalable : Distributed file object handling for 100.000 clients and more Stability : production-quality stability and failover Modular : interfaces for easy adaption Highly Available : no single point of failure when configured with HA software BIG and exapandable : allow for multiple PB in one namespace Open-source and community driven. Lustre provides a POSIX compliant layer supported on most Linux flavours. In terms of raw number capabilities for the Lustre: Max system size: about 64PB Max number of OSTs: 8150 Max number of MDTs: multiple per filesystem supported since Lustre 2.4 Files per directory: 25 Millions (**don't run ls -al ) Max stripes: 2000 since Lustre 2.2 Stripe size: Min 64kB -- Max 2TB Max object size: 16TB( ldiskfs ) 256PB (ZFS) Max file size: 31.35PB ( ldiskfs ) 8EB (ZFS) When to use Lustre? Lustre is optimized for : Large files Sequential throughput Parallel applications writing to different parts of a file Lustre will not perform well for Lots of small files High number of meta data requests, improved on new versions Waste of space on the OSTs Understanding the Lustre Filesystems Storage System Implementation \u00b6 The way the ULHPC Lustre file system is implemented is depicted on the below figure. 
Acquired as part of RFP 170035 , the ULHPC configuration is based upon: a set of 2x EXAScaler Lustre building blocks that each consist of: 1x DDN SS7700 base enclosure and its controller pair with 4x FDR ports 1x DDN SS8460 disk expansion enclosure (84-slot drive enclosures) OSTs: 160x SEAGATE disks (7.2K RPM HDD, 8TB, Self Encrypted Disks (SED)) configured over 16 RAID6 (8+2) pools and extra disks in spare pools MDTs: 18x HGST disks (10K RPM HDD, 1.8TB, Self Encrypted Disks (SED)) configured over 8 RAID1 pools and extra disks in spare pools Two redundant MDS servers Dell R630, 2x Intel Xeon E5-2667v4 @ 3.20GHz [8c], 128GB RAM Two redundant OSS servers Dell R630XL, 2x Intel Xeon E5-2640v4 @ 2.40GHz [10c], 128GB RAM Criteria Value Power (nominal) 6.8 KW Power (idle) 5.5 KW Weight 432 kg Rack Height 22U LNet is configured to be performed with OST based balancing. Filesystem Performance \u00b6 The performance of the ULHPC Lustre filesystem is expected to be in the range of at least 15GB/s for large sequential read and writes. IOR \u00b6 Upon release of the system, performance measurement by IOR , a synthetic benchmark for testing the performance of distributed filesystems, was run for an increasing number of clients as well as with 1kiB, 4kiB, 1MiB and 4MiB transfer sizes. As can be seen, aggregated writes and reads exceed 15 GB/s (depending on the test) which meets the minimum requirement. FIO \u00b6 Random IOPS benchmark was performed using FIO with 20 and 40 GB file size over 8 jobs, leading to the following total size of 160GB and 320 GB 320 GB is > 2 \\times \\times RAM size of the OSS node (128 GB RAM) 160 GB is > 1 \\times \\times RAM size of the OSS node (128 GB RAM) MDTEST \u00b6 Mdtest (based on the 7c0ec41 on September 11 , 2017 (based on v1.9.3)) was used to benchmark the metadata capabilities of the delivered system. HT was turned on to be able to run 32 threads. Mind the logarithmic Y-Axis. Tests on 4 clients with up to 20 threads have been included as well to show the scalability of the system. Lustre Usage \u00b6 Understanding Lustre I/O \u00b6 When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the file's stripes. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval. If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency, so that all clients see consistent results. Discover MDTs and OSTs \u00b6 ULHPC's Lustre file systems look and act like a single logical storage, but a large files on Lustre can be divided into multiple chunks ( stripes ) and stored across over OSTs. This technique is called file striping . The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing. It is thus important to know the number of OST on your running system. As mentioned in the Lustre implementation section , the ULHPC Lustre infrastructure is composed of 2 MDS servers (2 MDT), 2 OSS servers and 16 OSTs . 
You can list the MDTs and OSTs with the command lfs df : $ cds # OR: cd $SCRATCH $ lfs df -h UUID bytes Used Available Use% Mounted on lscratch-MDT0000_UUID 3 .2T 15 .4G 3 .1T 1 % /mnt/lscratch [ MDT:0 ] lscratch-MDT0001_UUID 3 .2T 3 .8G 3 .2T 1 % /mnt/lscratch [ MDT:1 ] lscratch-OST0000_UUID 57 .4T 16 .7T 40 .2T 30 % /mnt/lscratch [ OST:0 ] lscratch-OST0001_UUID 57 .4T 18 .8T 38 .0T 34 % /mnt/lscratch [ OST:1 ] lscratch-OST0002_UUID 57 .4T 17 .6T 39 .3T 31 % /mnt/lscratch [ OST:2 ] lscratch-OST0003_UUID 57 .4T 16 .6T 40 .3T 30 % /mnt/lscratch [ OST:3 ] lscratch-OST0004_UUID 57 .4T 16 .5T 40 .3T 30 % /mnt/lscratch [ OST:4 ] lscratch-OST0005_UUID 57 .4T 16 .5T 40 .3T 30 % /mnt/lscratch [ OST:5 ] lscratch-OST0006_UUID 57 .4T 16 .3T 40 .6T 29 % /mnt/lscratch [ OST:6 ] lscratch-OST0007_UUID 57 .4T 17 .0T 39 .9T 30 % /mnt/lscratch [ OST:7 ] lscratch-OST0008_UUID 57 .4T 16 .8T 40 .0T 30 % /mnt/lscratch [ OST:8 ] lscratch-OST0009_UUID 57 .4T 13 .2T 43 .6T 24 % /mnt/lscratch [ OST:9 ] lscratch-OST000a_UUID 57 .4T 13 .2T 43 .7T 24 % /mnt/lscratch [ OST:10 ] lscratch-OST000b_UUID 57 .4T 13 .3T 43 .6T 24 % /mnt/lscratch [ OST:11 ] lscratch-OST000c_UUID 57 .4T 14 .0T 42 .8T 25 % /mnt/lscratch [ OST:12 ] lscratch-OST000d_UUID 57 .4T 13 .9T 43 .0T 25 % /mnt/lscratch [ OST:13 ] lscratch-OST000e_UUID 57 .4T 14 .4T 42 .5T 26 % /mnt/lscratch [ OST:14 ] lscratch-OST000f_UUID 57 .4T 12 .9T 43 .9T 23 % /mnt/lscratch [ OST:15 ] filesystem_summary: 919 .0T 247 .8T 662 .0T 28 % /mnt/lscratch File striping \u00b6 File striping permits to increase the throughput of operations by taking advantage of several OSSs and OSTs, by allowing one or more clients to read/write different parts of the same file in parallel. On the other hand, striping small files can decrease the performance. File striping allows file sizes larger than a single OST, large files MUST be striped over several OSTs in order to avoid filling a single OST and harming the performance for all users. There is default stripe configuration for ULHPC Lustre filesystems (see below). However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance. You can tune file striping using 3 properties: Property Effect Default Accepted values Advised values stripe_size Size of the file stripes in bytes 1048576 (1m) > 0 > 0 stripe_count Number of OST to stripe across 1 -1 (use all the OSTs), 1-16 -1 stripe_offset Index of the OST where the first stripe of files will be written -1 (automatic) -1 , 0-15 -1 Note : With regards stripe_offset (the index of the OST where the first stripe is to be placed); the default is -1 which results in random selection and using a non-default value is NOT recommended . Note Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance. Use the lfs getstripe command for getting the stripe parameters. Use lfs setstripe for setting the stripe parameters to get optimal I/O performance. The correct stripe setting depends on your needs and file access patterns. Newly created files and directories will inherit these parameters from their parent directory. However, the parameters cannot be changed on an existing file. 
$ lfs getstripe dir | filename $ lfs setstripe -s -c -o dir | filename usage: lfs setstripe -d (to delete default striping from an existing directory) usage: lfs setstripe [--stripe-count|-c ] [--stripe-index|-i ] [--stripe-size|-S ] Example: $ lfs getstripe $SCRATCH /scratch/users// stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 [...] $ lfs setstripe -c -1 $SCRATCH $ lfs getstripe $SCRATCH /scratch/users// stripe_count: -1 stripe_size: 1048576 pattern: raid0 stripe_offset: -1 In this example, we view the current stripe setting of the $SCRATCH directory. The stripe count is changed to all OSTs and verified. All files written to this directory will be striped over the maximum number of OSTs (16). Use lfs check osts to see the number and status of active OSTs for each filesystem on the cluster. Learn more by reading the man page: $ lfs check osts $ man lfs File stripping Examples \u00b6 Set the striping parameters for a directory containing only small files (< 20MB) $ cd $SCRATCH $ mkdir test_small_files $ lfs getstripe test_small_files test_small_files stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 pool: $ lfs setstripe --stripe-size 1M --stripe-count 1 test_small_files $ lfs getstripe test_small_files test_small_files stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 Set the striping parameters for a directory containing only large files between 100MB and 1GB $ mkdir test_large_files $ lfs setstripe --stripe-size 2M --stripe-count 2 test_large_files $ lfs getstripe test_large_files test_large_files stripe_count: 2 stripe_size: 2097152 stripe_offset: -1 Set the striping parameters for a directory containing files larger than 1GB $ mkdir test_larger_files $ lfs setstripe --stripe-size 4M --stripe-count 6 test_larger_files $ lfs getstripe test_larger_files test_larger_files stripe_count: 6 stripe_size: 4194304 stripe_offset: -1 Big Data files management on Lustre Using a large stripe size can improve performance when accessing very large files Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file. Note that these are simple examples, the optimal settings defer depending on the application (concurrent threads accessing the same file, size of each write operation, etc). Lustre Best practices \u00b6 Parallel I/O on the same file Increase the stripe_count for parallel I/O to the same file. When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs to which the file will be written. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file. Another good practice is to make the stripe count be an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes. 
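In practice, this advice usually translates into setting a wide default layout on the job output directory before launching the parallel run, so that the shared output file inherits it. The commands below are a hedged sketch with hypothetical paths and values; adapt the stripe count and stripe size to your process count and file size.
# /!\ ADAPT paths, stripe count and stripe size accordingly (hypothetical example)
$ mkdir -p $SCRATCH/run01/output
$ lfs setstripe -c 16 -S 4M $SCRATCH/run01/output   # stripe the shared output file over 16 OSTs with 4 MiB stripes
$ lfs getstripe $SCRATCH/run01/output               # verify the layout before submitting the job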
For more details, you can read the following external resources: Reference Documentation: Managing File Layout (Striping) and Free Space Lustre Wiki Lustre Best Practices - Nasa HECC I/O and Lustre Usage - NISC","title":"Lustre"},{"location":"filesystems/lustre/#lustre-scratch","text":"","title":"Lustre ($SCRATCH)"},{"location":"filesystems/lustre/#introduction","text":"The Lustre file system is an open-source, parallel file system that supports many requirements of leadership class HPC simulation environments. It is available as a global high -performance file system on all ULHPC computational systems through a DDN ExaScaler system. It is meant to host temporary scratch data within your jobs. In terms of raw storage capacities, it represents more than 1.6PB . Live status","title":"Introduction"},{"location":"filesystems/lustre/#global-scratch-directory-scratch","text":"The scratch area is a Lustre -based file system designed for high performance temporary storage of large files. It is thus intended to support large I/O for jobs that are being actively computed on the ULHPC systems. We recommend that you run your jobs, especially data intensive ones, from the ULHPC scratch file system. Refer to your scratch directory using the environment variable $SCRATCH whenever possible (which expands to /scratch/users/$(whoami) ). The scratch file system is shared via the Infiniband network of the ULHPC facility and is available from all nodes while being tuned for high performance. ULHPC $SCRATCH quotas and backup Extended ACLs are provided for sharing data with other users using fine-grained control. See quotas for detailed information about inode, space quotas, and file system policies. In particular, your SCRATCH directory is NOT backuped according to the policy detailed in the ULHPC backup policies . A short history of Lustre Lustre was initiated & funded by the U.S. Department of Energy Office of Science & National Nuclear Security Administration laboratories in mid 2000s. Developments continue through the Cluster File Systems (ClusterFS) company founded in 2001. Sun Microsystems acquired ClusterFS in 2007 with the intent to bring Lustre technologies to Sun's ZFS file system and the Solaris operating system. In 2010, Oracle bought Sun and began to manage and release Lustre, however the company was not known for HPC. In December 2010, Oracle announced that they would cease Lustre 2.x development and place Lustre 1.8 into maintenance-only support, creating uncertainty around the future development of the file system. Following this announcement, several new organizations sprang up to provide support and development in an open community development model, including Whamcloud , Open Scalable File Systems ( OpenSFS , a nonprofit organization promoting the Lustre file system to ensure Lustre remains vendor-neutral, open, and free), Xyratex or DDN. By the end of 2010, most Lustre developers had left Oracle. WhamCloud was bought by Intel in 2011 and Xyratex took over the Lustre trade mark, logo, related assets (support) from Oracle. In June 2018, the Lustre team and assets were acquired from Intel by DDN. 
DDN organized the new acquisition as an independent division, reviving the Whamcloud name for the new division.","title":"Global Scratch directory $SCRATCH"},{"location":"filesystems/lustre/#general-architecture","text":"A Lustre file system has three major functional units: One or more MetaData Servers (MDS) nodes (here two) that have one or more MetaData Target (MDT) devices per Lustre filesystem that stores namespace metadata, such as filenames, directories, access permissions, and file layout. The MDT data is stored in a local disk filesystem. However, unlike block-based distributed filesystems, such as GPFS/SpectrumScale and PanFS, where the metadata server controls all of the block allocation, the Lustre metadata server is only involved in pathname and permission checks, and is not involved in any file I/O operations, avoiding I/O scalability bottlenecks on the metadata server. One or more Object Storage Server (OSS) nodes that store file data on one or more Object Storage Target (OST) devices. The capacity of a Lustre file system is the sum of the capacities provided by the OSTs. OSSs do most of the work and thus require as much RAM as possible Rule of thumb: ~2 GB base memory + 1 GB / OST Failover configurations: ~2 GB / OST OSSs should have as much CPUs as possible, but it is not as much critical as on MDS Client(s) that access and use the data. Lustre presents all clients with a unified namespace for all of the files and data in the filesystem, using standard POSIX semantics, and allows concurrent and coherent read and write access to the files in the filesystem. Lustre general features and numbers Lustre brings a modern architecture within an Object based file system with the following features: Adaptable : supports wide range of networks and storage hardware Scalable : Distributed file object handling for 100.000 clients and more Stability : production-quality stability and failover Modular : interfaces for easy adaption Highly Available : no single point of failure when configured with HA software BIG and exapandable : allow for multiple PB in one namespace Open-source and community driven. Lustre provides a POSIX compliant layer supported on most Linux flavours. In terms of raw number capabilities for the Lustre: Max system size: about 64PB Max number of OSTs: 8150 Max number of MDTs: multiple per filesystem supported since Lustre 2.4 Files per directory: 25 Millions (**don't run ls -al ) Max stripes: 2000 since Lustre 2.2 Stripe size: Min 64kB -- Max 2TB Max object size: 16TB( ldiskfs ) 256PB (ZFS) Max file size: 31.35PB ( ldiskfs ) 8EB (ZFS) When to use Lustre? Lustre is optimized for : Large files Sequential throughput Parallel applications writing to different parts of a file Lustre will not perform well for Lots of small files High number of meta data requests, improved on new versions Waste of space on the OSTs Understanding the Lustre Filesystems","title":"General Architecture"},{"location":"filesystems/lustre/#storage-system-implementation","text":"The way the ULHPC Lustre file system is implemented is depicted on the below figure. 
Acquired as part of RFP 170035 , the ULHPC configuration is based upon: a set of 2x EXAScaler Lustre building blocks that each consist of: 1x DDN SS7700 base enclosure and its controller pair with 4x FDR ports 1x DDN SS8460 disk expansion enclosure (84-slot drive enclosures) OSTs: 160x SEAGATE disks (7.2K RPM HDD, 8TB, Self Encrypted Disks (SED)) configured over 16 RAID6 (8+2) pools and extra disks in spare pools MDTs: 18x HGST disks (10K RPM HDD, 1.8TB, Self Encrypted Disks (SED)) configured over 8 RAID1 pools and extra disks in spare pools Two redundant MDS servers Dell R630, 2x Intel Xeon E5-2667v4 @ 3.20GHz [8c], 128GB RAM Two redundant OSS servers Dell R630XL, 2x Intel Xeon E5-2640v4 @ 2.40GHz [10c], 128GB RAM Criteria Value Power (nominal) 6.8 KW Power (idle) 5.5 KW Weight 432 kg Rack Height 22U LNet is configured to be performed with OST based balancing.","title":"Storage System Implementation"},{"location":"filesystems/lustre/#filesystem-performance","text":"The performance of the ULHPC Lustre filesystem is expected to be in the range of at least 15GB/s for large sequential read and writes.","title":"Filesystem Performance"},{"location":"filesystems/lustre/#ior","text":"Upon release of the system, performance measurement by IOR , a synthetic benchmark for testing the performance of distributed filesystems, was run for an increasing number of clients as well as with 1kiB, 4kiB, 1MiB and 4MiB transfer sizes. As can be seen, aggregated writes and reads exceed 15 GB/s (depending on the test) which meets the minimum requirement.","title":"IOR"},{"location":"filesystems/lustre/#fio","text":"Random IOPS benchmark was performed using FIO with 20 and 40 GB file size over 8 jobs, leading to the following total size of 160GB and 320 GB 320 GB is > 2 \\times \\times RAM size of the OSS node (128 GB RAM) 160 GB is > 1 \\times \\times RAM size of the OSS node (128 GB RAM)","title":"FIO"},{"location":"filesystems/lustre/#mdtest","text":"Mdtest (based on the 7c0ec41 on September 11 , 2017 (based on v1.9.3)) was used to benchmark the metadata capabilities of the delivered system. HT was turned on to be able to run 32 threads. Mind the logarithmic Y-Axis. Tests on 4 clients with up to 20 threads have been included as well to show the scalability of the system.","title":"MDTEST"},{"location":"filesystems/lustre/#lustre-usage","text":"","title":"Lustre Usage"},{"location":"filesystems/lustre/#understanding-lustre-io","text":"When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the file's stripes. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval. If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency, so that all clients see consistent results.","title":"Understanding Lustre I/O"},{"location":"filesystems/lustre/#discover-mdts-and-osts","text":"ULHPC's Lustre file systems look and act like a single logical storage, but a large files on Lustre can be divided into multiple chunks ( stripes ) and stored across over OSTs. This technique is called file striping . The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing. 
It is thus important to know the number of OST on your running system. As mentioned in the Lustre implementation section , the ULHPC Lustre infrastructure is composed of 2 MDS servers (2 MDT), 2 OSS servers and 16 OSTs . You can list the MDTs and OSTs with the command lfs df : $ cds # OR: cd $SCRATCH $ lfs df -h UUID bytes Used Available Use% Mounted on lscratch-MDT0000_UUID 3 .2T 15 .4G 3 .1T 1 % /mnt/lscratch [ MDT:0 ] lscratch-MDT0001_UUID 3 .2T 3 .8G 3 .2T 1 % /mnt/lscratch [ MDT:1 ] lscratch-OST0000_UUID 57 .4T 16 .7T 40 .2T 30 % /mnt/lscratch [ OST:0 ] lscratch-OST0001_UUID 57 .4T 18 .8T 38 .0T 34 % /mnt/lscratch [ OST:1 ] lscratch-OST0002_UUID 57 .4T 17 .6T 39 .3T 31 % /mnt/lscratch [ OST:2 ] lscratch-OST0003_UUID 57 .4T 16 .6T 40 .3T 30 % /mnt/lscratch [ OST:3 ] lscratch-OST0004_UUID 57 .4T 16 .5T 40 .3T 30 % /mnt/lscratch [ OST:4 ] lscratch-OST0005_UUID 57 .4T 16 .5T 40 .3T 30 % /mnt/lscratch [ OST:5 ] lscratch-OST0006_UUID 57 .4T 16 .3T 40 .6T 29 % /mnt/lscratch [ OST:6 ] lscratch-OST0007_UUID 57 .4T 17 .0T 39 .9T 30 % /mnt/lscratch [ OST:7 ] lscratch-OST0008_UUID 57 .4T 16 .8T 40 .0T 30 % /mnt/lscratch [ OST:8 ] lscratch-OST0009_UUID 57 .4T 13 .2T 43 .6T 24 % /mnt/lscratch [ OST:9 ] lscratch-OST000a_UUID 57 .4T 13 .2T 43 .7T 24 % /mnt/lscratch [ OST:10 ] lscratch-OST000b_UUID 57 .4T 13 .3T 43 .6T 24 % /mnt/lscratch [ OST:11 ] lscratch-OST000c_UUID 57 .4T 14 .0T 42 .8T 25 % /mnt/lscratch [ OST:12 ] lscratch-OST000d_UUID 57 .4T 13 .9T 43 .0T 25 % /mnt/lscratch [ OST:13 ] lscratch-OST000e_UUID 57 .4T 14 .4T 42 .5T 26 % /mnt/lscratch [ OST:14 ] lscratch-OST000f_UUID 57 .4T 12 .9T 43 .9T 23 % /mnt/lscratch [ OST:15 ] filesystem_summary: 919 .0T 247 .8T 662 .0T 28 % /mnt/lscratch","title":"Discover MDTs and OSTs"},{"location":"filesystems/lustre/#file-striping","text":"File striping permits to increase the throughput of operations by taking advantage of several OSSs and OSTs, by allowing one or more clients to read/write different parts of the same file in parallel. On the other hand, striping small files can decrease the performance. File striping allows file sizes larger than a single OST, large files MUST be striped over several OSTs in order to avoid filling a single OST and harming the performance for all users. There is default stripe configuration for ULHPC Lustre filesystems (see below). However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance. You can tune file striping using 3 properties: Property Effect Default Accepted values Advised values stripe_size Size of the file stripes in bytes 1048576 (1m) > 0 > 0 stripe_count Number of OST to stripe across 1 -1 (use all the OSTs), 1-16 -1 stripe_offset Index of the OST where the first stripe of files will be written -1 (automatic) -1 , 0-15 -1 Note : With regards stripe_offset (the index of the OST where the first stripe is to be placed); the default is -1 which results in random selection and using a non-default value is NOT recommended . Note Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance. Use the lfs getstripe command for getting the stripe parameters. Use lfs setstripe for setting the stripe parameters to get optimal I/O performance. The correct stripe setting depends on your needs and file access patterns. Newly created files and directories will inherit these parameters from their parent directory. However, the parameters cannot be changed on an existing file. 
$ lfs getstripe dir | filename $ lfs setstripe -s -c -o dir | filename usage: lfs setstripe -d (to delete default striping from an existing directory) usage: lfs setstripe [--stripe-count|-c ] [--stripe-index|-i ] [--stripe-size|-S ] Example: $ lfs getstripe $SCRATCH /scratch/users// stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 [...] $ lfs setstripe -c -1 $SCRATCH $ lfs getstripe $SCRATCH /scratch/users// stripe_count: -1 stripe_size: 1048576 pattern: raid0 stripe_offset: -1 In this example, we view the current stripe setting of the $SCRATCH directory. The stripe count is changed to all OSTs and verified. All files written to this directory will be striped over the maximum number of OSTs (16). Use lfs check osts to see the number and status of active OSTs for each filesystem on the cluster. Learn more by reading the man page: $ lfs check osts $ man lfs","title":"File striping"},{"location":"filesystems/lustre/#file-stripping-examples","text":"Set the striping parameters for a directory containing only small files (< 20MB) $ cd $SCRATCH $ mkdir test_small_files $ lfs getstripe test_small_files test_small_files stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 pool: $ lfs setstripe --stripe-size 1M --stripe-count 1 test_small_files $ lfs getstripe test_small_files test_small_files stripe_count: 1 stripe_size: 1048576 stripe_offset: -1 Set the striping parameters for a directory containing only large files between 100MB and 1GB $ mkdir test_large_files $ lfs setstripe --stripe-size 2M --stripe-count 2 test_large_files $ lfs getstripe test_large_files test_large_files stripe_count: 2 stripe_size: 2097152 stripe_offset: -1 Set the striping parameters for a directory containing files larger than 1GB $ mkdir test_larger_files $ lfs setstripe --stripe-size 4M --stripe-count 6 test_larger_files $ lfs getstripe test_larger_files test_larger_files stripe_count: 6 stripe_size: 4194304 stripe_offset: -1 Big Data files management on Lustre Using a large stripe size can improve performance when accessing very large files Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file. Note that these are simple examples, the optimal settings defer depending on the application (concurrent threads accessing the same file, size of each write operation, etc).","title":"File stripping Examples"},{"location":"filesystems/lustre/#lustre-best-practices","text":"Parallel I/O on the same file Increase the stripe_count for parallel I/O to the same file. When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs to which the file will be written. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file. Another good practice is to make the stripe count be an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. 
For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes. For more details, you can read the following external resources: Reference Documentation: Managing File Layout (Striping) and Free Space Lustre Wiki Lustre Best Practices - Nasa HECC I/O and Lustre Usage - NISC","title":"Lustre Best practices"},{"location":"filesystems/overview/","text":"ULHPC File Systems Overview \u00b6 Several File Systems co-exist on the ULHPC facility and are configured for different purposes. Each servers and computational resources has access to at least three different file systems with different levels of performance, permanence and available space summarized below Directory Env. file system backup /home/users/ $HOME GPFS/Spectrumscale no /work/projects/ - GPFS/Spectrumscale yes (partial, backup subdirectory) /scratch/users/ $SCRATCH Lustre no /mnt/isilon/projects/ - OneFS yes (live sync and snapshots)","title":"Overview"},{"location":"filesystems/overview/#ulhpc-file-systems-overview","text":"Several File Systems co-exist on the ULHPC facility and are configured for different purposes. Each servers and computational resources has access to at least three different file systems with different levels of performance, permanence and available space summarized below Directory Env. file system backup /home/users/ $HOME GPFS/Spectrumscale no /work/projects/ - GPFS/Spectrumscale yes (partial, backup subdirectory) /scratch/users/ $SCRATCH Lustre no /mnt/isilon/projects/ - OneFS yes (live sync and snapshots)","title":"ULHPC File Systems Overview"},{"location":"filesystems/projecthome/","text":"Global Project directory $PROJECTHOME=/work/projects/ \u00b6 Project directories are intended for sharing data within a group of researchers, under /work/projects/ Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible.","title":"Projecthome"},{"location":"filesystems/projecthome/#global-project-directory-projecthomeworkprojects","text":"Project directories are intended for sharing data within a group of researchers, under /work/projects/ Refer to your project base home directory using the environment variable $PROJECTHOME=/work/projects whenever possible.","title":"Global Project directory $PROJECTHOME=/work/projects/"},{"location":"filesystems/quotas/","text":"Quotas \u00b6 Overview \u00b6 Directory Default space quota Default inode quota $HOME 500 GB 1 M $SCRATCH 10 TB 1 M /work/projects/... 1 TB 1 M /mnt/isilon/projects/... 1.14 PB globally - Quotas \u00b6 Warning When a quota is reached writes to that directory will fail. Note On Isilon everyone shares one global quota and the HPC Platform team sets up project quotas. Unfortunately it is not possible to see the quota status on the cluster. Current usage \u00b6 We provide the df-ulhpc command on the cluster login nodes, which displays current usage, soft quota, hard quota and grace period. Any directories that have exceeded the quota will be highlighted in red. Once you reach the soft quota you can still write data until the grace period expires (7 days) or you reach the hard quota. After you reach the end of the grace period or the hard quota, you have to reduce your usage to below the soft quota to be able to write data again. Check current space quota status: df-ulhpc Check current inode quota status: df-ulhpc -i Check free space on all file systems: df -h Check free space on current file system: df -h . 
To detect the exact source of inode usage, you can use the command du --max-depth=<depth> --human-readable --inodes <directory> where depth : the inode usage for any file at depth and below is summed in the report for the directory at level depth to which the file belongs, and directory : the directory for which the analysis is carried out; leaving it empty performs the analysis in the current working directory. For a more graphical approach, use ncdu , with the c option to display the aggregate inode number for the directories in the current working directory. Increases \u00b6 If your project needs additional space or inodes for a specific project directory you may request it via ServiceNow (HPC \u2192 Storage & projects \u2192 Extend quota). Quotas on the home directory and scratch cannot be increased. Troubleshooting \u00b6 The quotas on project directories are based on the group. Be aware that the quota for the default user group clusterusers is 0. If you get a quota error, but df-ulhpc and df-ulhpc -i confirm that the quota is not exceeded, you are most likely trying to write a file with the group clusterusers instead of the project group. To avoid this issue, check out the newgrp command or set the s mode bit (\"set group ID\") on the directory with chmod g+s . The s bit means that any file or folder created below will inherit the group. To transfer data with rsync into a project directory, please check the data transfer documentation .","title":"Quotas"},{"location":"filesystems/quotas/#quotas","text":"","title":"Quotas"},{"location":"filesystems/quotas/#overview","text":"Directory Default space quota Default inode quota $HOME 500 GB 1 M $SCRATCH 10 TB 1 M /work/projects/... 1 TB 1 M /mnt/isilon/projects/... 1.14 PB globally -","title":"Overview"},{"location":"filesystems/quotas/#quotas_1","text":"Warning When a quota is reached writes to that directory will fail. Note On Isilon everyone shares one global quota and the HPC Platform team sets up project quotas. Unfortunately it is not possible to see the quota status on the cluster.","title":"Quotas"},{"location":"filesystems/quotas/#current-usage","text":"We provide the df-ulhpc command on the cluster login nodes, which displays current usage, soft quota, hard quota and grace period. Any directories that have exceeded the quota will be highlighted in red. Once you reach the soft quota you can still write data until the grace period expires (7 days) or you reach the hard quota. After you reach the end of the grace period or the hard quota, you have to reduce your usage to below the soft quota to be able to write data again. Check current space quota status: df-ulhpc Check current inode quota status: df-ulhpc -i Check free space on all file systems: df -h Check free space on current file system: df -h . To detect the exact source of inode usage, you can use the command du --max-depth=<depth> --human-readable --inodes <directory> where depth : the inode usage for any file at depth and below is summed in the report for the directory at level depth to which the file belongs, and directory : the directory for which the analysis is carried out; leaving it empty performs the analysis in the current working directory. 
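As a concrete illustration of the du invocation described above, the hedged example below reports the inode usage of the first two directory levels of a hypothetical project directory and lists the ten heaviest entries; adapt the depth and path to your own case.
# /!\ ADAPT the path and depth accordingly (hypothetical example)
$ du --inodes --max-depth=2 /work/projects/myproject | sort -n | tail -n 10   # ten largest inode consumers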
For a more graphical approach, use ncdu , with the c option to display the aggregate inode number for the directories in the current working directory.","title":"Current usage"},{"location":"filesystems/quotas/#increases","text":"If your project needs additional space or inodes for a specific project directory you may request it via ServiceNow (HPC \u2192 Storage & projects \u2192 Extend quota). Quotas on the home directory and scratch cannot be increased.","title":"Increases"},{"location":"filesystems/quotas/#troubleshooting","text":"The quotas on project directories are based on the group. Be aware that the quota for the default user group clusterusers is 0. If you get a quota error, but df-ulhpc and df-ulhpc -i confirm that the quota is not expired, you are most likely trying to write a file with the group clusterusers instead of the project group. To avoid this issue, check out the newgrp command or set the s mode bit (\"set group ID\") on the directory with chmod g+s . The s bit means that any file or folder created below will inherit the group. To transfer data with rsync into a project directory, please check the data transfer documentation .","title":"Troubleshooting"},{"location":"filesystems/scratch/","text":"Global Scratch directory $SCRATCH \u00b6 The scratch area is a Lustre -based file system designed for high performance temporary storage of large files. It is thus intended to support large I/O for jobs that are being actively computed on the ULHPC systems. We recommend that you run your jobs, especially data intensive ones, from the ULHPC scratch file system. Refer to your scratch directory using the environment variable $SCRATCH whenever possible (which expands to /scratch/users/$(whoami) ). The scratch file system is shared via the Infiniband network of the ULHPC facility and is available from all nodes while being tuned for high performance.","title":"Scratch"},{"location":"filesystems/scratch/#global-scratch-directory-scratch","text":"The scratch area is a Lustre -based file system designed for high performance temporary storage of large files. It is thus intended to support large I/O for jobs that are being actively computed on the ULHPC systems. We recommend that you run your jobs, especially data intensive ones, from the ULHPC scratch file system. Refer to your scratch directory using the environment variable $SCRATCH whenever possible (which expands to /scratch/users/$(whoami) ). The scratch file system is shared via the Infiniband network of the ULHPC facility and is available from all nodes while being tuned for high performance.","title":"Global Scratch directory $SCRATCH"},{"location":"filesystems/unix-file-permissions/","text":"Unix File Permissions \u00b6 Brief Overview \u00b6 Every file (and directory) has an owner, an associated Unix group, and a set of permission flags that specify separate read, write, and execute permissions for the \"user\" (owner), \"group\", and \"other\". Group permissions apply to all users who belong to the group associated with the file. \"Other\" is also sometimes known as \"world\" permissions, and applies to all users who can login to the system. The command ls -l displays the permissions and associated group for any file. 
Here is an example of the output of this command: drwx------ 2 elvis elvis 2048 Jun 12 2012 private -rw------- 2 elvis elvis 1327 Apr 9 2012 try.f90 -rwx------ 2 elvis elvis 12040 Apr 9 2012 a.out drwxr-x--- 2 elvis bigsci 2048 Oct 17 2011 share drwxr-xr-x 3 elvis bigsci 2048 Nov 13 2011 public From left to right, the fields above represent: set of ten permission flags link count (irrelevant to this topic) owner associated group size date of last modification name of file The permission flags from left to right are: Position Meaning 1 \"d\" if a directory, \"-\" if a normal file 2, 3, 4 read, write, execute permission for user (owner) of file 5, 6, 7 read, write, execute permission for group 8, 9, 10 read, write, execute permission for other (world) and have the following meanings: Value Meaning - Flag is not set. r File is readable. w File is writable. For directories, files may be created or removed. x File is executable. For directories, files may be listed. s Set group ID (sgid). For directories, files created therein will be associated with the same group as the directory, rather than default group of the user. Subdirectories created therein will not only have the same group, but will also inherit the sgid setting. These definitions can be used to interpret the example output of ls -l presented above: drwx------ 2 elvis elvis 2048 Jun 12 2012 private This is a directory named \"private\", owned by user elvis and associated with Unix group elvis. The directory has read, write, and execute permissions for the owner, and no permissions for any other user. -rw------- 2 elvis elvis 1327 Apr 9 2012 try.f90 This is a normal file named \"try.f90\", owned by user elvis and associated with group elvis. It is readable and writable by the owner, but is not accessible to any other user. -rwx------ 2 elvis elvis 12040 Apr 9 2012 a.out This is a normal file named \"a.out\", owned by user elvis and associated with group elvis. It is executable, as well as readable and writable, for the owner only. drwxr-x--- 2 elvis bigsci 2048 Oct 17 2011 share This is a directory named \"share\", owned by user elvis and associated with group bigsci. The owner can read and write the directory; all members of the file group bigsci can list the contents of the directory. Presumably, this directory would contain files that also have \"group read\" permissions. drwxr-xr-x 3 elvis bigsci 2048 Nov 13 2011 public This is a directory named \"public\", owned by user elvis and associated with group bigsci. The owner can read and write the directory; all other users can only read the contents of the directory. A directory such as this would most likely contain files that have \"world read\" permissions. Useful File Permission Commands \u00b6 umask \u00b6 When a file is created, the permission flags are set according to the file mode creation mask, which can be set using the umask command. The file mode creation mask (sometimes referred to as \"the umask\") is a three-digit octal value whose nine bits correspond to fields 2-10 of the permission flags. The resulting permissions are calculated via the bitwise AND of the unary complement of the argument (using bitwise NOT) and the default permissions specified by the shell (typically 666 for files and 777 for directories). 
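As a small worked example of that calculation: with a umask of 022, a new file gets 666 & ~022 = 644 ( -rw-r--r-- ) and a new directory gets 777 & ~022 = 755 ( drwxr-xr-x ), which matches the table of common values that follows.

```bash
# Worked example of the file mode creation mask
umask 022              # clear the write bits for group and other
touch newfile          # files:       666 & ~022 = 644  ->  -rw-r--r--
mkdir newdir           # directories: 777 & ~022 = 755  ->  drwxr-xr-x
ls -ld newfile newdir  # show the resulting permission flags
```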
Common useful values are: umask value File Permissions Directory Permissions 002 -rw-rw-r-- drwxrwxr-x 007 -rw-rw---- drwxrwx--- 022 -rw-r--r-- drwxr-xr-x 027 -rw-r----- drwxr-x--- 077 -rw------- drwx------ Note that at ULHPC, the default umask is left unchanged (022), yet it can be redefined in your ~/.bash_profile configuration file if needed. chmod \u00b6 The chmod (\"change mode\") command is used to change the permission flags on existing files. It can be applied recursively using the \"-R\" option. It can be invoked with either octal values representing the permission flags, or with symbolic representations of the flags. The octal values have the following meaning: Octal Digit Binary Representation ( rwx ) Permission 0 000 none 1 001 execute only 2 010 write only 3 011 write and execute 4 100 read only 5 101 read and execute 6 110 read and write 7 111 read, write, and execute (full permissions) Here is an example of chmod using octal values: $ umask 0022 $ touch foo $ ls -l foo -rw-r--r--. 1 elvis elvis 0 Nov 19 14:49 foo $ chmod 755 foo $ ls -l foo -rwxr-xr-x. 1 elvis elvis 0 Nov 19 14:49 foo In the above example, the umask for user elvis results in a file that is read-write for the user, and read for group and other. The chmod command specifies read-write-execute permissions for the user, and read-execute permissions for group and other. Here is the format of the chmod command when using symbolic values: chmod [-R] [classes][operator][modes] file ... The classes determine to which combination of user/group/other the operation will apply, the operator specifies whether permissions are being added or removed, and the modes specify the permissions to be added or removed. Classes are formed by combining one or more of the following letters: Letter Class Description u user Owner of the file g group Users who are members of the file's group o other Users who are not the owner of the file or members of the file's group a all All of the above (equivalent to ugo ) The following operators are supported: Operator Description + Add the specified modes to the specified classes. - Remove the specified modes from the specified classes. = The specified modes are made the exact modes for the specified classes. The modes specify which permissions are to be added to or removed from the specified classes. There are three primary values which correspond to the basic permissions, and two less frequently-used values that are useful in specific circumstances: Mode Name Description r read Read a file or list a directory's contents. w write Write to a file or directory. x execute Execute a file or traverse a directory. X \"special\" execute This is a slightly more restrictive version of \"x\". It applies execute permissions to directories in all cases, and to files only if at least one execute permission bit is already set. It is typically used with the \"+\" operator and the \"-R\" option, to give group and/or other access to a large directory tree, without setting execute permissions on normal (non-executable) files (e.g., text files). For example, chmod -R go+rx bigdir would set read and execute permissions on every file (including text files) and directory in the bigdir directory, recursively, for group and other. The command chmod -R go+rX bigdir would set read and execute permissions on every directory, and would set group and other read and execute permissions on files that were already executable by the owner. s setgid or sgid This setting is typically applied to directories. 
If set, any file created in that directory will be associated with the directory's group, rather than with the default file group of the owner. This is useful in setting up directories where many users share access. This setting is sometimes referred to as the \"sticky bit\", although that phrase has a historical meaning unrelated to this context. Sets of class/operator/mode may be separated by commas. Using the above definitions, the previous (octal notation) example can be done symbolically: $ umask 0022 $ touch foo $ ls -l foo -rw-r--r--. 1 elvis elvis 0 Nov 19 14:49 foo $ chmod u+x,go+rx foo $ ls -l foo -rwxr-xr-x. 1 elvis elvis 0 Nov 19 14:49 foo Unix File Groups \u00b6 Unix file groups provide a means to control access to shared data on disk and tape. Overview of Unix Groups \u00b6 Every user on a Unix system is a member of one or more Unix groups, including their primary or default group. Every file (or directory) on the system has an owner and an associated group. When a user creates a file, the file's associated group will be the user's default group. The user (owner) has the ability to change the associated group to any of the groups to which the user belongs. Unix groups can be defined that allow users to share data with other users who belong to the same group. Unix Groups at ULHPC \u00b6 Every user's default group is clusterusers . Users usually belong to several other groups, including groups associated with specific research projects. Groups are used to share files between project members, and can be created on request. See the page about Project Data Management for more information. Useful Unix Group Commands \u00b6 Command Description groups username List group membership id username List group membership with group ids ls -l List group associated with file or directory chgrp Change group associated with file or directory newgrp Create new shell with different default group sg Execute command with different default group","title":"Unix File Permissions"},{"location":"filesystems/unix-file-permissions/#unix-file-permissions","text":"","title":"Unix File Permissions"},{"location":"filesystems/unix-file-permissions/#brief-overview","text":"Every file (and directory) has an owner, an associated Unix group, and a set of permission flags that specify separate read, write, and execute permissions for the \"user\" (owner), \"group\", and \"other\". Group permissions apply to all users who belong to the group associated with the file. \"Other\" is also sometimes known as \"world\" permissions, and applies to all users who can log in to the system. The command ls -l displays the permissions and associated group for any file. Here is an example of the output of this command: drwx------ 2 elvis elvis 2048 Jun 12 2012 private -rw------- 2 elvis elvis 1327 Apr 9 2012 try.f90 -rwx------ 2 elvis elvis 12040 Apr 9 2012 a.out drwxr-x--- 2 elvis bigsci 2048 Oct 17 2011 share drwxr-xr-x 3 elvis bigsci 2048 Nov 13 2011 public From left to right, the fields above represent: set of ten permission flags link count (irrelevant to this topic) owner associated group size date of last modification name of file The permission flags from left to right are: Position Meaning 1 \"d\" if a directory, \"-\" if a normal file 2, 3, 4 read, write, execute permission for user (owner) of file 5, 6, 7 read, write, execute permission for group 8, 9, 10 read, write, execute permission for other (world) and have the following meanings: Value Meaning - Flag is not set. r File is readable. w File is writable. 
For directories, files may be created or removed. x File is executable. For directories, files may be listed. s Set group ID (sgid). For directories, files created therein will be associated with the same group as the directory, rather than default group of the user. Subdirectories created therein will not only have the same group, but will also inherit the sgid setting. These definitions can be used to interpret the example output of ls -l presented above: drwx------ 2 elvis elvis 2048 Jun 12 2012 private This is a directory named \"private\", owned by user elvis and associated with Unix group elvis. The directory has read, write, and execute permissions for the owner, and no permissions for any other user. -rw------- 2 elvis elvis 1327 Apr 9 2012 try.f90 This is a normal file named \"try.f90\", owned by user elvis and associated with group elvis. It is readable and writable by the owner, but is not accessible to any other user. -rwx------ 2 elvis elvis 12040 Apr 9 2012 a.out This is a normal file named \"a.out\", owned by user elvis and associated with group elvis. It is executable, as well as readable and writable, for the owner only. drwxr-x--- 2 elvis bigsci 2048 Oct 17 2011 share This is a directory named \"share\", owned by user elvis and associated with group bigsci. The owner can read and write the directory; all members of the file group bigsci can list the contents of the directory. Presumably, this directory would contain files that also have \"group read\" permissions. drwxr-xr-x 3 elvis bigsci 2048 Nov 13 2011 public This is a directory named \"public\", owned by user elvis and associated with group bigsci. The owner can read and write the directory; all other users can only read the contents of the directory. A directory such as this would most likely contain files that have \"world read\" permissions.","title":"Brief Overview"},{"location":"filesystems/unix-file-permissions/#useful-file-permission-commands","text":"","title":"Useful File Permission Commands"},{"location":"filesystems/unix-file-permissions/#umask","text":"When a file is created, the permission flags are set according to the file mode creation mask, which can be set using the umask command. The file mode creation mask (sometimes referred to as \"the umask\") is a three-digit octal value whose nine bits correspond to fields 2-10 of the permission flags. The resulting permissions are calculated via the bitwise AND of the unary complement of the argument (using bitwise NOT) and the default permissions specified by the shell (typically 666 for files and 777 for directories). Common useful values are: umask value File Permissions Directory Permissions 002 -rw-rw-r-- drwxrwxr-x 007 -rw-rw---- drwxrwx--- 022 -rw-r--r-- drwxr-xr-x 027 -rw-r----- drwxr-x--- 077 -rw------- drwx------ Note that at ULHPC, the default umask is left unchanged (022), yet it can be redefined in your ~/.bash_profile configuration file if needed.","title":"umask"},{"location":"filesystems/unix-file-permissions/#chmod","text":"The chmod (\"change mode\") command is used to change the permission flags on existing files. It can be applied recursively using the \"-R\" option. It can be invoked with either octal values representing the permission flags, or with symbolic representations of the flags. 
The octal values have the following meaning: Octal Digit Binary Representation ( rwx ) Permission 0 000 none 1 001 execute only 2 010 write only 3 011 write and execute 4 100 read only 5 101 read and execute 6 110 read and write 7 111 read, write, and execute (full permissions) Here is an example of chmod using octal values: $ umask 0022 $ touch foo $ ls -l foo -rw-r--r--. 1 elvis elvis 0 Nov 19 14:49 foo $ chmod 755 foo $ ls -l foo -rwxr-xr-x. 1 elvis elvis 0 Nov 19 14:49 foo In the above example, the umask for user elvis results in a file that is read-write for the user, and read for group and other. The chmod command specifies read-write-execute permissions for the user, and read-execute permissions for group and other. Here is the format of the chmod command when using symbolic values: chmod [-R] [classes][operator][modes] file ... The classes determine to which combination of user/group/other the operation will apply, the operator specifies whether permissions are being added or removed, and the modes specify the permissions to be added or removed. Classes are formed by combining one or more of the following letters: Letter Class Description u user Owner of the file g group Users who are members of the file's group o other Users who are not the owner of the file or members of the file's group a all All of the above (equivalent to ugo ) The following operators are supported: Operator Description + Add the specified modes to the specified classes. - Remove the specified modes from the specified classes. = The specified modes are made the exact modes for the specified classes. The modes specify which permissions are to be added to or removed from the specified classes. There are three primary values which correspond to the basic permissions, and two less frequently-used values that are useful in specific circumstances: Mode Name Description r read Read a file or list a directory's contents. w write Write to a file or directory. x execute Execute a file or traverse a directory. X \"special\" execute This is a slightly more restrictive version of \"x\". It applies execute permissions to directories in all cases, and to files only if at least one execute permission bit is already set. It is typically used with the \"+\" operator and the \"-R\" option, to give group and/or other access to a large directory tree, without setting execute permissions on normal (non-executable) files (e.g., text files). For example, chmod -R go+rx bigdir would set read and execute permissions on every file (including text files) and directory in the bigdir directory, recursively, for group and other. The command chmod -R go+rX bigdir would set read and execute permissions on every directory, and would set group and other read and execute permissions on files that were already executable by the owner. s setgid or sgid This setting is typically applied to directories. If set, any file created in that directory will be associated with the directory's group, rather than with the default file group of the owner. This is useful in setting up directories where many users share access. This setting is sometimes referred to as the \"sticky bit\", although that phrase has a historical meaning unrelated to this context. Sets of class/operator/mode may separated by commas. Using the above definitions, the previous (octal notation) example can be done symbolically: $ umask 0022 $ touch foo $ ls -l foo -rw-r--r--. 1 elvis elvis 0 Nov 19 14:49 foo $ chmod u+x,go+rx foo $ ls -l foo -rwxr-xr-x. 
1 elvis elvis 0 Nov 19 14:49 foo","title":"chmod"},{"location":"filesystems/unix-file-permissions/#unix-file-groups","text":"Unix file groups provide a means to control access to shared data on disk and tape.","title":"Unix File Groups"},{"location":"filesystems/unix-file-permissions/#overview-of-unix-groups","text":"Every user on a Unix system is a member of one or more Unix groups, including their primary or default group. Every file (or directory) on the system has an owner and an associated group. When a user creates a file, the file's associated group will be the user's default group. The user (owner) has the ability to change the associated group to any of the groups to which the user belongs. Unix groups can be defined that allow users to share data with other users who belong to the same group.","title":"Overview of Unix Groups"},{"location":"filesystems/unix-file-permissions/#unix-groups-at-ulhpc","text":"All user's default group is clusterusers . Users usually belong to several other groups, including groups associated with specific research projects. Groups are used to shared file between project members, and can be created on request. See the page about Project Data Management for more information.","title":"Unix Groups at ULHPC"},{"location":"filesystems/unix-file-permissions/#useful-unix-group-commands","text":"Command Description groups username List group membership id username List group membership with group ids ls -l List group associated with file or directory chgrp Change group associated with file or directory newgrp Create new shell with different default group sg Execute command with different default group","title":"Useful Unix Group Commands"},{"location":"help/","text":"Support \u00b6 ULHPC strives to support in a user friendly way your [super]computing needs. Note however that we are not here to make your PhD at your place ;) Service Now HPC Support Portal FAQ/Troubleshooting \u00b6 Password reset Connection issues File Permissions Access rights to project directory Quotas Read the Friendly Manual \u00b6 We have always maintained an extensive documentation and tutorials available online, which aims at being the most up-to-date and comprehensive. So please, read the documentation first if you have a question of problem -- we probably provide detailed instructions here Help Desk \u00b6 The online help desk Service is the preferred method for contacting ULHPC. Tips Before reporting a problem or and issue, kindly remember that: Your issue is probably documented here on the ULHPC Technical documentation An event may be on-going: check the ULHPC Live status page Planned maintenance are announced at least 2 weeks in advance - -- see Maintenance and Downtime Policy The proper SSH banner is displayed during planned downtime check the state of your nodes and jobs Joining/monitoring running jobs Monitoring post-mortem Job status and efficiency Service Now HPC Support Portal You can make code snippets, shell outputs, etc in your ticket much more readable by inserting a line with: [code]
     before the snippet, and another line with: 
    [/code] after it. For a full list of formatting options, see this ServiceNow article . Be as precise and complete as possible ULHPC team handle thousands of support requests per year. In order to ensure efficient timely resolution of issues, ensure that: you select the appropriate category (left menu) you include as much of the following as possible when making a request: Who? - Name and user id (login), eventually project name When? - When did the problem occur? Where? - Which cluster ? Which node ? Which job ? Really include Job IDs Location of relevant files input/output, job launcher scripts, source code, executables etc. What? - What happened? What exactly were you doing or trying to do ? include Error messages - kindly report system or software messages literally and exactly . output of module list any steps you have tried Steps to reproduce Any part of this technical documentation you checked before opening the ticket Access to the online help system requires logging in with your Uni.lu username, password, and eventually one-time password. If you are an existing user unable to log in, you can send us an email . Availability and Response Time HPC support is provided on a volunteer basis by UL HPC staff and associated UL experts working at normal business hours. We offer no guarantee on response time except with paid support contracts. Email support \u00b6 You can contact us by mail to the ULHPC Team Email ( ONLY if you cannot login/access the HPC Support helpdesk portal : hpc-team@uni.lu You may also ask the help of other ULHPC users using the HPC User community mailing list: (moderated): hpc-users@uni.lu","title":"Support"},{"location":"help/#support","text":"ULHPC strives to support in a user friendly way your [super]computing needs. Note however that we are not here to make your PhD at your place ;) Service Now HPC Support Portal","title":"Support"},{"location":"help/#faqtroubleshooting","text":"Password reset Connection issues File Permissions Access rights to project directory Quotas","title":"FAQ/Troubleshooting"},{"location":"help/#read-the-friendly-manual","text":"We have always maintained an extensive documentation and tutorials available online, which aims at being the most up-to-date and comprehensive. So please, read the documentation first if you have a question of problem -- we probably provide detailed instructions here","title":"Read the Friendly Manual"},{"location":"help/#help-desk","text":"The online help desk Service is the preferred method for contacting ULHPC. Tips Before reporting a problem or and issue, kindly remember that: Your issue is probably documented here on the ULHPC Technical documentation An event may be on-going: check the ULHPC Live status page Planned maintenance are announced at least 2 weeks in advance - -- see Maintenance and Downtime Policy The proper SSH banner is displayed during planned downtime check the state of your nodes and jobs Joining/monitoring running jobs Monitoring post-mortem Job status and efficiency Service Now HPC Support Portal You can make code snippets, shell outputs, etc in your ticket much more readable by inserting a line with: [code]
     before the snippet, and another line with: 
    [/code] after it. For a full list of formatting options, see this ServiceNow article . Be as precise and complete as possible ULHPC team handle thousands of support requests per year. In order to ensure efficient timely resolution of issues, ensure that: you select the appropriate category (left menu) you include as much of the following as possible when making a request: Who? - Name and user id (login), eventually project name When? - When did the problem occur? Where? - Which cluster ? Which node ? Which job ? Really include Job IDs Location of relevant files input/output, job launcher scripts, source code, executables etc. What? - What happened? What exactly were you doing or trying to do ? include Error messages - kindly report system or software messages literally and exactly . output of module list any steps you have tried Steps to reproduce Any part of this technical documentation you checked before opening the ticket Access to the online help system requires logging in with your Uni.lu username, password, and eventually one-time password. If you are an existing user unable to log in, you can send us an email . Availability and Response Time HPC support is provided on a volunteer basis by UL HPC staff and associated UL experts working at normal business hours. We offer no guarantee on response time except with paid support contracts.","title":"Help Desk"},{"location":"help/#email-support","text":"You can contact us by mail to the ULHPC Team Email ( ONLY if you cannot login/access the HPC Support helpdesk portal : hpc-team@uni.lu You may also ask the help of other ULHPC users using the HPC User community mailing list: (moderated): hpc-users@uni.lu","title":"Email support"},{"location":"interconnect/ethernet/","text":"Ethernet Network \u00b6 Having a single high-bandwidth and low-latency network as the local Fast IB interconnect network to support efficient HPC and Big data workloads would not provide the necessary flexibility brought by the Ethernet protocol. Especially applications that are not able to employ the native protocol foreseen for that network and thus forced to use an IP emulation layer will benefit from the flexibility of Ethernet-based networks. An additional, Ethernet-based network offers the robustness and resiliency needed for management tasks inside the system in such cases Outside the Fast IB interconnect network used inside the clusters, we maintain an Ethernet network organized as a 2-layer topology: one upper level ( Gateway Layer ) with routing, switching features, network isolation and filtering (ACL) rules and meant to interconnect only switches. This layer is handled by a redundant set of site routers (ULHPC gateway routers). it allows to interface the University network for both internal (LAN) and external (WAN) communications one bottom level ( Switching Layer ) composed by the [stacked] core switches as well as the TOR (Top-the-rack) switches, meant to interface the HPC servers and compute nodes. An overview of this topology is provided in the below figure. ACM PEARC'22 article If you are interested to get more details on the implemented Ethernet network, you can refer to the following article published to the ACM PEARC'22 conference (Practice and Experience in Advanced Research Computing) in Boston, USA on July 13, 2022. ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides Sebastien Varrette, Hyacinthe Cartiaux, Teddy Valette, and Abatcha Olloh. 2022. 
Aggregating and Consolidating two High Performant Network Topologies: The ULHPC Experience. In Practice and Experience in Advanced Research Computing (PEARC '22) . Association for Computing Machinery, New York, NY, USA, Article 61, 1\u20136. https://doi.org/10.1145/3491418.3535159","title":"Ethernet Interconnect"},{"location":"interconnect/ethernet/#ethernet-network","text":"Having a single high-bandwidth and low-latency network as the local Fast IB interconnect network to support efficient HPC and Big data workloads would not provide the necessary flexibility brought by the Ethernet protocol. Especially applications that are not able to employ the native protocol foreseen for that network and thus forced to use an IP emulation layer will benefit from the flexibility of Ethernet-based networks. An additional, Ethernet-based network offers the robustness and resiliency needed for management tasks inside the system in such cases Outside the Fast IB interconnect network used inside the clusters, we maintain an Ethernet network organized as a 2-layer topology: one upper level ( Gateway Layer ) with routing, switching features, network isolation and filtering (ACL) rules and meant to interconnect only switches. This layer is handled by a redundant set of site routers (ULHPC gateway routers). it allows to interface the University network for both internal (LAN) and external (WAN) communications one bottom level ( Switching Layer ) composed by the [stacked] core switches as well as the TOR (Top-the-rack) switches, meant to interface the HPC servers and compute nodes. An overview of this topology is provided in the below figure. ACM PEARC'22 article If you are interested to get more details on the implemented Ethernet network, you can refer to the following article published to the ACM PEARC'22 conference (Practice and Experience in Advanced Research Computing) in Boston, USA on July 13, 2022. ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides Sebastien Varrette, Hyacinthe Cartiaux, Teddy Valette, and Abatcha Olloh. 2022. Aggregating and Consolidating two High Performant Network Topologies: The ULHPC Experience. In Practice and Experience in Advanced Research Computing (PEARC '22) . Association for Computing Machinery, New York, NY, USA, Article 61, 1\u20136. https://doi.org/10.1145/3491418.3535159","title":"Ethernet Network"},{"location":"interconnect/ib/","text":"Fast Local Interconnect Network \u00b6 High Performance Computing (HPC) encompasses advanced computation over parallel processing, enabling faster execution of highly compute intensive tasks. The execution time of a given simulation depends upon many factors, such as the number of CPU/GPU cores and their utilisation factor and the interconnect performance, efficiency, and scalability. HPC interconnect technologies can be nowadays divided into three categories: Ethernet, InfiniBand, and vendor specific interconnects. While Ethernet is established as the dominant interconnect standard for mainstream commercial computing requirements, the underlying protocol has inherent limitations preventing low-latency deployments expected in real HPC environment. When in need of high-bandwidth and low-latency as required in efficient high performance computing systems, better options have emerged and are considered: InfiniBand technologies 1 . See Introduction to High-Speed InfiniBand Interconnect . 
Vendor specific interconnects, which currently correspond to the technology provided by three main HPC vendors: Cray/HPC Slingshot , Intel's EOL Omni-Path Architecture (OPA) or, to a minor measure, Bull BXI . Within the ULHPC facility, the InfiniBand solution was preferred as the predominant interconnect technology in the HPC market, tested against the largest set of HPC workloads. In practice: Iris relies on a EDR Infiniband (IB) Fabric in a Fat-Tree Topology Aion relies on a HDR100 Infiniband (IB) Fabric in a Fat-Tree Topology ACM PEARC'22 article If you are interested to understand the architecture and the solutions designed upon Aion acquisition to expand and consolidate the previously existing IB networks beyond its seminal capacity limits (while keeping at best their Bisection bandwidth), you can refer to the following article published to the ACM PEARC'22 conference (Practice and Experience in Advanced Research Computing) in Boston, USA on July 13, 2022. ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides Sebastien Varrette, Hyacinthe Cartiaux, Teddy Valette, and Abatcha Olloh. 2022. Aggregating and Consolidating two High Performant Network Topologies: The ULHPC Experience. In Practice and Experience in Advanced Research Computing (PEARC '22) . Association for Computing Machinery, New York, NY, USA, Article 61, 1\u20136. https://doi.org/10.1145/3491418.3535159 ULHPC IB Topology \u00b6 One of the most significant differentiators between HPC systems and lesser performing systems is, apart from the interconnect technology deployed, the supporting topology. There are several topologies commonly used in large-scale HPC deployments ( Fat-Tree , 3D-Torus , Dragonfly+ etc.). Fat-tree remains the widely used topology in HPC clusters due to its versatility, high bisection bandwidth and well understood routing. For this reason, each production clusters of the ULHPC facility rely on Fat-Tree topology. To minimize the number of switches per nodes while keeping a good Bisection bandwidth and allowing to interconnect Iris and Aion IB networks, the following configuration has been implemented: For more details: Iris IB Interconnect Aion IB Interconnect The tight integration of I/O and compute in the ULHPC supercomputer architecture gives a very robust, time critical production systems. The selected routing algorithms also provides a dedicated and fast path to the IO targets, avoiding congestion on the high-speed network and mitigating the risk of runtime \"jitter\" for time critical jobs. IB Fabric Diagnostic Utilities \u00b6 An InfiniBand fabric is composed of switches and channel adapter (HCA/Connect-X cards) devices. To identify devices in a fabric (or even in one switch system), each device is given a GUID (a MAC address equivalent). Since a GUID is a non-user-friendly string of characters, we alias it to a meaningful, user-given name. There are a few IB diagnostic tools (typically installed by the infiniband-diags package) using these names. 
The ULHPC team is using them to diagnose Infiniband Fabric Information 2 -- see also InfiniBand Guide by Atos/Bull (PDF) Tools Description ibnodes Show Infiniband nodes in topology ibhosts Show InfiniBand host nodes in topology ibswitches Show InfiniBand switch nodes in topology ibnetdiscover Discover InfiniBand topology ibdiag Scans the fabric using directed route packets and extracts all the available information (connectivity, devices) perfquery find errors on a particular or number of HCA\u2019s and switch ports sminfo Get InfiniBand Subnet Manager Info Mellanox Equipment FW Update \u00b6 An InfiniBand fabric is composed of switches and channel adapter (HCA/Connect-X cards) devices. Both should be kept up-to-date to mitigate potential security issues. Mellanox ConnectX HCA cards \u00b6 The Mellanox HCA firmware updater tool: mlxup , can be downloaded from mellanox.com . A Typical workflow applied within the ULHPC team to update the firmware of the Connect-X cards : Query specific device or all devices (if no device were supplied) mlxup --query Go to https://support.mellanox.com/s/downloads-center then click on Adapter > ConnectX- > All downloads (select any OS, it will redirect you to the same page) Click on \"Firmware\" tab and enter the PSID number obtained from mlxup --query Download the latest firmware version wget http://content.mellanox.com/firmware/fw-ConnectX[...].bin.zip Unzip the downloaded file: unzip [...] Burn device with latest firmware mlxup -d -i .bin Reboot Mellanox IB Switches \u00b6 Reference documentation You need to download from Mellanox Download Center BEWARE of the processor architecture (X86 vs. PPC) when selecting the images select the switch model and download the proposed images -- Pay attention to the download path Originated in 1999 to specifically address workload requirements that were not adequately addressed by Ethernet and designed for scalability, using a switched fabric network topology together with Remote Direct Memory Access (RDMA) to reduce CPU overhead. Although InfiniBand is backed by a standards organisation ( InfiniBand Trade Association with formal and open multi-vendor processes, the InfiniBand market is currently dominated by a single significant vendor Mellanox recently acquired by NVidia , which also dominates the non-Ethernet market segment across HPC deployments. \u21a9 Most require priviledged (root) right and thus are not available for ULHPC end users. \u21a9","title":"Fast Infiniband Interconnect"},{"location":"interconnect/ib/#fast-local-interconnect-network","text":"High Performance Computing (HPC) encompasses advanced computation over parallel processing, enabling faster execution of highly compute intensive tasks. The execution time of a given simulation depends upon many factors, such as the number of CPU/GPU cores and their utilisation factor and the interconnect performance, efficiency, and scalability. HPC interconnect technologies can be nowadays divided into three categories: Ethernet, InfiniBand, and vendor specific interconnects. While Ethernet is established as the dominant interconnect standard for mainstream commercial computing requirements, the underlying protocol has inherent limitations preventing low-latency deployments expected in real HPC environment. When in need of high-bandwidth and low-latency as required in efficient high performance computing systems, better options have emerged and are considered: InfiniBand technologies 1 . See Introduction to High-Speed InfiniBand Interconnect . 
Vendor specific interconnects, which currently correspond to the technology provided by three main HPC vendors: Cray/HPC Slingshot , Intel's EOL Omni-Path Architecture (OPA) or, to a minor measure, Bull BXI . Within the ULHPC facility, the InfiniBand solution was preferred as the predominant interconnect technology in the HPC market, tested against the largest set of HPC workloads. In practice: Iris relies on a EDR Infiniband (IB) Fabric in a Fat-Tree Topology Aion relies on a HDR100 Infiniband (IB) Fabric in a Fat-Tree Topology ACM PEARC'22 article If you are interested to understand the architecture and the solutions designed upon Aion acquisition to expand and consolidate the previously existing IB networks beyond its seminal capacity limits (while keeping at best their Bisection bandwidth), you can refer to the following article published to the ACM PEARC'22 conference (Practice and Experience in Advanced Research Computing) in Boston, USA on July 13, 2022. ACM Reference Format | ORBilu entry | OpenAccess | ULHPC blog post | slides Sebastien Varrette, Hyacinthe Cartiaux, Teddy Valette, and Abatcha Olloh. 2022. Aggregating and Consolidating two High Performant Network Topologies: The ULHPC Experience. In Practice and Experience in Advanced Research Computing (PEARC '22) . Association for Computing Machinery, New York, NY, USA, Article 61, 1\u20136. https://doi.org/10.1145/3491418.3535159","title":"Fast Local Interconnect Network"},{"location":"interconnect/ib/#ulhpc-ib-topology","text":"One of the most significant differentiators between HPC systems and lesser performing systems is, apart from the interconnect technology deployed, the supporting topology. There are several topologies commonly used in large-scale HPC deployments ( Fat-Tree , 3D-Torus , Dragonfly+ etc.). Fat-tree remains the widely used topology in HPC clusters due to its versatility, high bisection bandwidth and well understood routing. For this reason, each production clusters of the ULHPC facility rely on Fat-Tree topology. To minimize the number of switches per nodes while keeping a good Bisection bandwidth and allowing to interconnect Iris and Aion IB networks, the following configuration has been implemented: For more details: Iris IB Interconnect Aion IB Interconnect The tight integration of I/O and compute in the ULHPC supercomputer architecture gives a very robust, time critical production systems. The selected routing algorithms also provides a dedicated and fast path to the IO targets, avoiding congestion on the high-speed network and mitigating the risk of runtime \"jitter\" for time critical jobs.","title":"ULHPC IB Topology"},{"location":"interconnect/ib/#ib-fabric-diagnostic-utilities","text":"An InfiniBand fabric is composed of switches and channel adapter (HCA/Connect-X cards) devices. To identify devices in a fabric (or even in one switch system), each device is given a GUID (a MAC address equivalent). Since a GUID is a non-user-friendly string of characters, we alias it to a meaningful, user-given name. There are a few IB diagnostic tools (typically installed by the infiniband-diags package) using these names. 
The ULHPC team is using them to diagnose Infiniband Fabric Information 2 -- see also InfiniBand Guide by Atos/Bull (PDF) Tools Description ibnodes Show Infiniband nodes in topology ibhosts Show InfiniBand host nodes in topology ibswitches Show InfiniBand switch nodes in topology ibnetdiscover Discover InfiniBand topology ibdiag Scans the fabric using directed route packets and extracts all the available information (connectivity, devices) perfquery find errors on a particular or number of HCA\u2019s and switch ports sminfo Get InfiniBand Subnet Manager Info","title":"IB Fabric Diagnostic Utilities"},{"location":"interconnect/ib/#mellanox-equipment-fw-update","text":"An InfiniBand fabric is composed of switches and channel adapter (HCA/Connect-X cards) devices. Both should be kept up-to-date to mitigate potential security issues.","title":"Mellanox Equipment FW Update"},{"location":"interconnect/ib/#mellanox-connectx-hca-cards","text":"The Mellanox HCA firmware updater tool: mlxup , can be downloaded from mellanox.com . A Typical workflow applied within the ULHPC team to update the firmware of the Connect-X cards : Query specific device or all devices (if no device were supplied) mlxup --query Go to https://support.mellanox.com/s/downloads-center then click on Adapter > ConnectX- > All downloads (select any OS, it will redirect you to the same page) Click on \"Firmware\" tab and enter the PSID number obtained from mlxup --query Download the latest firmware version wget http://content.mellanox.com/firmware/fw-ConnectX[...].bin.zip Unzip the downloaded file: unzip [...] Burn device with latest firmware mlxup -d -i .bin Reboot","title":"Mellanox ConnectX HCA cards"},{"location":"interconnect/ib/#mellanox-ib-switches","text":"Reference documentation You need to download from Mellanox Download Center BEWARE of the processor architecture (X86 vs. PPC) when selecting the images select the switch model and download the proposed images -- Pay attention to the download path Originated in 1999 to specifically address workload requirements that were not adequately addressed by Ethernet and designed for scalability, using a switched fabric network topology together with Remote Direct Memory Access (RDMA) to reduce CPU overhead. Although InfiniBand is backed by a standards organisation ( InfiniBand Trade Association with formal and open multi-vendor processes, the InfiniBand market is currently dominated by a single significant vendor Mellanox recently acquired by NVidia , which also dominates the non-Ethernet market segment across HPC deployments. \u21a9 Most require priviledged (root) right and thus are not available for ULHPC end users. \u21a9","title":"Mellanox IB Switches"},{"location":"jobs/best-effort/","text":"Best-effort Jobs \u00b6 Node Type Slurm command regular sbatch [-A ] -p batch --qos besteffort [-C {broadwell,skylake}] [...] gpu sbatch [-A ] -p gpu --qos besteffort [-C volta[32]] -G 1 [...] bigmem sbatch [-A ] -p bigmem --qos besteffort [...] Best-effort (preemptible) jobs allow an efficient usage of the platform by filling available computing nodes until regular jobs are submitted. sbatch -p {batch | gpu | bigmem} --qos besteffort [...] What means job preemption? Job preemption is the the act of \"stopping\" one or more \"low-priority\" jobs to let a \"high-priority\" job run. Job preemption is implemented as a variation of Slurm's Gang Scheduling logic. 
When a non-best-effort job is allocated resources that are already allocated to one or more best-effort jobs, the preemptable job(s) (thus on QOS besteffort ) are preempted. On ULHPC facilities, the preempted job(s) can be requeued (if possible) or cancelled. For jobs to be requeued, they MUST have the \" --requeue \" sbatch option set. The besteffort QOS has fewer constraints than the other QOS (for instance, you can submit more jobs, etc.) As a general rule, users should ensure that they track successful completion of best-effort jobs (which may be interrupted by other jobs at any time) and use them in combination with mechanisms such as Checkpoint-Restart that allow applications to stop and resume safely.","title":"Best-effort Jobs"},{"location":"jobs/best-effort/#best-effort-jobs","text":"Node Type Slurm command regular sbatch [-A ] -p batch --qos besteffort [-C {broadwell,skylake}] [...] gpu sbatch [-A ] -p gpu --qos besteffort [-C volta[32]] -G 1 [...] bigmem sbatch [-A ] -p bigmem --qos besteffort [...] Best-effort (preemptible) jobs allow an efficient usage of the platform by filling available computing nodes until regular jobs are submitted. sbatch -p {batch | gpu | bigmem} --qos besteffort [...] What does job preemption mean? Job preemption is the act of \"stopping\" one or more \"low-priority\" jobs to let a \"high-priority\" job run. Job preemption is implemented as a variation of Slurm's Gang Scheduling logic. When a non-best-effort job is allocated resources that are already allocated to one or more best-effort jobs, the preemptable job(s) (thus on QOS besteffort ) are preempted. On ULHPC facilities, the preempted job(s) can be requeued (if possible) or cancelled. For jobs to be requeued, they MUST have the \" --requeue \" sbatch option set. The besteffort QOS has fewer constraints than the other QOS (for instance, you can submit more jobs, etc.) As a general rule, users should ensure that they track successful completion of best-effort jobs (which may be interrupted by other jobs at any time) and use them in combination with mechanisms such as Checkpoint-Restart that allow applications to stop and resume safely.","title":"Best-effort Jobs"},{"location":"jobs/billing/","text":"Job Accounting and Billing \u00b6 Usage Charging Policy ULHPC Resource Allocation Policy (PDF) Billing rates \u00b6 Trackable RESources (TRES) Billing Weights \u00b6 The above policy is in practice implemented through the Slurm Trackable RESources (TRES) and remains an important factor for the Fairsharing score calculation. 
As explained in the ULHPC Usage Charging Policy , we set TRES for CPU, GPU, and Memory usage according to weights defined as follows: Weight Description \\alpha_{cpu} Normalized relative performance of CPU processor core (ref.: skylake 73.6 GFlops/core) \\alpha_{mem} Inverse of the average available memory size per core \\alpha_{GPU} Weight per GPU accelerator Each partition has its own weights (combined into TRESBillingWeight ), which you can check with # /!\\ ADAPT accordingly scontrol show partition <partition>","title":"Job Accounting and Billing"},{"location":"jobs/billing/#job-accounting-and-billing","text":"Usage Charging Policy ULHPC Resource Allocation Policy (PDF)","title":"Job Accounting and Billing"},{"location":"jobs/billing/#billing-rates","text":"","title":"Billing rates"},{"location":"jobs/billing/#trackable-resources-tres-billing-weights","text":"The above policy is in practice implemented through the Slurm Trackable RESources (TRES) and remains an important factor for the Fairsharing score calculation. As explained in the ULHPC Usage Charging Policy , we set TRES for CPU, GPU, and Memory usage according to weights defined as follows: Weight Description \\alpha_{cpu} Normalized relative performance of CPU processor core (ref.: skylake 73.6 GFlops/core) \\alpha_{mem} Inverse of the average available memory size per core \\alpha_{GPU} Weight per GPU accelerator Each partition has its own weights (combined into TRESBillingWeight ), which you can check with # /!\\ ADAPT accordingly scontrol show partition <partition>","title":"Trackable RESources (TRES) Billing Weights"},{"location":"jobs/gpu/","text":"ULHPC GPU Nodes \u00b6 Each GPU node provided as part of the gpu partition features 4x Nvidia V100 SXM2 (with either 16G or 32G memory) interconnected by the NVLink 2.0 architecture. NVLink was designed as an alternative solution to PCI Express with higher bandwidth and additional features (e.g., shared memory) specifically designed to be compatible with Nvidia's own GPU ISA for multi-GPU systems -- see wikichip article . 
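As a quick sanity check (a sketch assuming the standard NVIDIA driver utilities are available on the GPU node), you can inspect how the four V100 cards and their NVLink connectivity are seen from inside a job:

```bash
# From within a GPU allocation: list the GPUs visible to the job
nvidia-smi

# Show the GPU-to-GPU connectivity matrix; NVLink links appear as NV1/NV2/... entries
nvidia-smi topo -m
```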
Because of the hardware organization, you MUST follow the below recommendations: Do not run jobs on GPU nodes if you have no use of GPU accelerators! , i.e. if you are not using any of the software compiled against the {foss,intel}cuda toolchain. Avoid using more than 4 GPUs, ideally within the same node Dedicated \u00bc of the available CPU cores for the management of each GPU card reserved. Thus your typical GPU launcher would match the AI/DL launcher example: #!/bin/bash -l ### Request one GPU tasks for 4 hours - dedicate 1/4 of available cores for its management #SBATCH -N 1 #SBATCH --ntasks-per-node=1 #SBATCH -c 7 #SBATCH -G 1 #SBATCH --time=04:00:00 #SBATCH -p gpu print_error_and_exit () { echo \"***ERROR*** $* \" ; exit 1 ; } module purge || print_error_and_exit \"No 'module' command\" module load numlib/cuDNN # Example with cuDNN [ ... ] You can quickly access a GPU node for interactive jobs using si-gpu .","title":"ULHPC GPU Nodes"},{"location":"jobs/interactive/","text":"Interactive Jobs \u00b6 The interactive ( floating ) partition (exclusively associated to the debug QOS ) is to be used for code development, testing, and debugging . Important Production runs are not permitted in interactive jobs . User accounts are subject to suspension if they are determined to be using the interactive partition and the debug QOS for production computing. In particular, interactive job \"chaining\" is not allowed. Chaining is defined as using a batch script to submit another batch script. You can access the different node classes available using the -C flag (see also List of Slurm features on ULHPC nodes ), or ( better ) through the custom helper functions defined for each category of nodes, i.e. si , si-gpu or si-bigmem : Regular Dual-CPU node ### Quick interative job for the default time $ si # salloc -p interactive --qos debug -C batch ### Explicitly ask for a skylake node $ si -C skylake # salloc -p interactive --qos debug -C batch -C skylake ### Use 1 full node for 28 tasks $ si --ntasks-per-node 28 # salloc -p interactive --qos debug -C batch --ntasks-per-node 28 ### interactive job for 2 hours $ si -t 02 :00:00 # salloc -p interactive --qos debug -C batch -t 02:00:00 ### interactive job on 2 nodes, 1 multithreaded tasks per node $ si -N 2 --ntasks-per-node 1 -c 4 si -N 2 --ntasks-per-node 1 -c 4 # salloc -p interactive --qos debug -C batch -N 2 --ntasks-per-node 1 -c 4 GPU node ### Quick interative job for the default time $ si-gpu # /!\\ WARNING: append -G 1 to really reserve a GPU # salloc -p interactive --qos debug -C gpu -G 1 ### (Better) Allocate 1/4 of available CPU cores per GPU to manage $ si-gpu -G 1 -c 7 $ si-gpu -G 2 -c 14 $ si-gpu -G 4 -c 28 Large-Memory node ### Quick interative job for the default time $ si-bigmem # salloc -p interactive --qos debug -C bigmem ### interactive job with 1 multithreaded task per socket available (4 in total) $ si-bigmem --ntasks-per-node 4 --ntasks-per-socket 1 -c 28 # salloc -p interactive --qos debug -C bigmem --ntasks-per-node 4 --ntasks-per-socket 1 -c 4 ### interactive job for 1 task but 512G of memory $ si-bigmem --mem 512G # salloc -p interactive --qos debug -C bigmem --mem 512G If you prefer to rely on the regular srun , the below table proposes the equivalent commands run by the helper scripts si* : Node Type Slurm command regular si [...] salloc -p interactive --qos debug -C batch [...] salloc -p interactive --qos debug -C batch,broadwell [...] salloc -p interactive --qos debug -C batch,skylake [...] gpu si-gpu [...] 
salloc -p interactive --qos debug -C gpu [-C volta[32]] -G 1 [...] bigmem si-bigmem [...] salloc -p interactive --qos debug -C bigmem [...] Impact of Interactive jobs implementation over a floating partition We have recently changed the way interactive jobs are served. Since the interactive partition is no longer dedicated but floating above the other partitions, there is NO guarantee to have an interactive job running if the surrounding partition ( batch , gpu or bigmem ) is full. However , the backfill scheduling in place together with the partition priority set ensure that interactive jobs will be first served upon resource release.","title":"Interactive Job"},{"location":"jobs/interactive/#interactive-jobs","text":"The interactive ( floating ) partition (exclusively associated to the debug QOS ) is to be used for code development, testing, and debugging . Important Production runs are not permitted in interactive jobs . User accounts are subject to suspension if they are determined to be using the interactive partition and the debug QOS for production computing. In particular, interactive job \"chaining\" is not allowed. Chaining is defined as using a batch script to submit another batch script. You can access the different node classes available using the -C flag (see also List of Slurm features on ULHPC nodes ), or ( better ) through the custom helper functions defined for each category of nodes, i.e. si , si-gpu or si-bigmem : Regular Dual-CPU node ### Quick interative job for the default time $ si # salloc -p interactive --qos debug -C batch ### Explicitly ask for a skylake node $ si -C skylake # salloc -p interactive --qos debug -C batch -C skylake ### Use 1 full node for 28 tasks $ si --ntasks-per-node 28 # salloc -p interactive --qos debug -C batch --ntasks-per-node 28 ### interactive job for 2 hours $ si -t 02 :00:00 # salloc -p interactive --qos debug -C batch -t 02:00:00 ### interactive job on 2 nodes, 1 multithreaded tasks per node $ si -N 2 --ntasks-per-node 1 -c 4 si -N 2 --ntasks-per-node 1 -c 4 # salloc -p interactive --qos debug -C batch -N 2 --ntasks-per-node 1 -c 4 GPU node ### Quick interative job for the default time $ si-gpu # /!\\ WARNING: append -G 1 to really reserve a GPU # salloc -p interactive --qos debug -C gpu -G 1 ### (Better) Allocate 1/4 of available CPU cores per GPU to manage $ si-gpu -G 1 -c 7 $ si-gpu -G 2 -c 14 $ si-gpu -G 4 -c 28 Large-Memory node ### Quick interative job for the default time $ si-bigmem # salloc -p interactive --qos debug -C bigmem ### interactive job with 1 multithreaded task per socket available (4 in total) $ si-bigmem --ntasks-per-node 4 --ntasks-per-socket 1 -c 28 # salloc -p interactive --qos debug -C bigmem --ntasks-per-node 4 --ntasks-per-socket 1 -c 4 ### interactive job for 1 task but 512G of memory $ si-bigmem --mem 512G # salloc -p interactive --qos debug -C bigmem --mem 512G If you prefer to rely on the regular srun , the below table proposes the equivalent commands run by the helper scripts si* : Node Type Slurm command regular si [...] salloc -p interactive --qos debug -C batch [...] salloc -p interactive --qos debug -C batch,broadwell [...] salloc -p interactive --qos debug -C batch,skylake [...] gpu si-gpu [...] salloc -p interactive --qos debug -C gpu [-C volta[32]] -G 1 [...] bigmem si-bigmem [...] salloc -p interactive --qos debug -C bigmem [...] Impact of Interactive jobs implementation over a floating partition We have recently changed the way interactive jobs are served. 
Since the interactive partition is no longer dedicated but floating above the other partitions, there is NO guarantee to have an interactive job running if the surrounding partition ( batch , gpu or bigmem ) is full. However , the backfill scheduling in place together with the partition priority set ensures that interactive jobs will be served first upon resource release.","title":"Interactive Jobs"},{"location":"jobs/long/","text":"Long Jobs \u00b6 If you are confident that your jobs will last more than 2 days while efficiently using the allocated resources , you can use the --qos long QOS. sbatch -p {batch | gpu | bigmem} --qos long [...] Following EuroHPC/PRACE Recommendations, the long QOS allows for an extended Max walltime ( MaxWall ) set to 14 days . Node Type Slurm command regular sbatch [-A ] -p batch --qos long [-C {broadwell,skylake}] [...] gpu sbatch [-A ] -p gpu --qos long [-C volta[32]] -G 1 [...] bigmem sbatch [-A ] -p bigmem --qos long [...] Important Be aware however that special restrictions apply to this kind of job. There is a limit to the maximum number of concurrent nodes involved in long jobs (see sqos for details). No more than 4 long jobs per User ( MaxJobsPU ) are allowed, using no more than 2 nodes per job.","title":"Long Jobs"},{"location":"jobs/long/#long-jobs","text":"If you are confident that your jobs will last more than 2 days while efficiently using the allocated resources , you can use the --qos long QOS. sbatch -p {batch | gpu | bigmem} --qos long [...] Following EuroHPC/PRACE Recommendations, the long QOS allows for an extended Max walltime ( MaxWall ) set to 14 days . Node Type Slurm command regular sbatch [-A ] -p batch --qos long [-C {broadwell,skylake}] [...] gpu sbatch [-A ] -p gpu --qos long [-C volta[32]] -G 1 [...] bigmem sbatch [-A ] -p bigmem --qos long [...] Important Be aware however that special restrictions apply to this kind of job. There is a limit to the maximum number of concurrent nodes involved in long jobs (see sqos for details). No more than 4 long jobs per User ( MaxJobsPU ) are allowed, using no more than 2 nodes per job.","title":"Long Jobs"},{"location":"jobs/priority/","text":"ULHPC Job Prioritization Factors \u00b6 The ULHPC Slurm configuration relies on the Multifactor Priority Plugin and the Fair tree algorithm to perform Fairsharing among the users 1 Priority Factors \u00b6 There are several factors enabled on ULHPC supercomputers that influence job priority: Age : length of time a job has been waiting (PD state) in the queue Fairshare : difference between the portion of the computing resource that has been promised and the amount of resources that has been consumed - see Fairsharing . Partition : factor associated with each node partition , for instance to privilege interactive over batch partitions QOS : factor associated with each Quality Of Service ( low \longrightarrow urgent ) The job's priority at any given time will be a weighted sum of all the factors that have been enabled. Job priority can be expressed as: Job_priority = PriorityWeightAge * age_factor + PriorityWeightFairshare * fair-share_factor + PriorityWeightPartition * partition_factor + PriorityWeightQOS * QOS_factor - nice_factor All of the factors in this formula are floating point numbers that range from 0.0 to 1.0.
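For illustration only -- the weights and factor values below are hypothetical and NOT the ones configured on the ULHPC clusters -- a pending job with age_factor=0.5, fair-share_factor=0.2, partition_factor=1.0 and QOS_factor=0.0, under PriorityWeightAge=1000, PriorityWeightFairshare=10000, PriorityWeightPartition=8000 and PriorityWeightQOS=5000, would obtain: Job_priority = 1000*0.5 + 10000*0.2 + 8000*1.0 + 5000*0.0 = 500 + 2000 + 8000 + 0 = 10500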
The weights are unsigned, 32 bit integers, you can get with: $ sprio -w # OR, from slurm.conf $ scontrol show config | grep -i PriorityWeight You can use the sprio to view the factors that comprise a job's scheduling priority and were your (pending) jobs stands in the priority queue. sprio Utility usage Show current weights sprio -w List pending jobs, sorted by jobid sprio [ -n ] # OR: sp List pending jobs, sorted by priority sprio [-n] -S+Y sprio [-n] | sort -k 3 -n sprio [-n] -l | sort -k 4 -n Getting the priority given to a job can be done either with squeue : # /!\\ ADAPT accordingly squeue -o %Q -j Backfill Scheduling \u00b6 Backfill is a mechanism by which lower priority jobs can start earlier to fill the idle slots provided they are finished before the next high priority jobs is expected to start based on resource availability. If your job is sufficiently small, it can be backfilled and scheduled in the shadow of a larger, higher-priority job For more details, see official Slurm documentation All users from a higher priority account receive a higher fair share factor than all users from a lower priority account \u21a9","title":"Job Priority and Backfilling"},{"location":"jobs/priority/#ulhpc-job-prioritization-factors","text":"The ULHPC Slurm configuration rely on the Multifactor Priority Plugin and the Fair tree algorithm to preform Fairsharing among the users 1","title":"ULHPC Job Prioritization Factors"},{"location":"jobs/priority/#priority-factors","text":"There are several factors enabled on ULHPC supercomputers that influence job priority: Age : length of time a job has been waiting (PD state) in the queue Fairshare : difference between the portion of the computing resource that has been promised and the amount of resources that has been consumed - see Fairsharing . Partition : factor associated with each node partition , for instance to privilege interactive over batch partitions QOS A factor associated with each Quality Of Service ( low \\longrightarrow \\longrightarrow urgent ) The job's priority at any given time will be a weighted sum of all the factors that have been enabled. Job priority can be expressed as: Job_priority = PriorityWeightAge * age_factor + PriorityWeightFairshare * fair-share_factor+ PriorityWeightPartition * partition_factor + PriorityWeightQOS * QOS_factor + - nice_factor All of the factors in this formula are floating point numbers that range from 0.0 to 1.0. The weights are unsigned, 32 bit integers, you can get with: $ sprio -w # OR, from slurm.conf $ scontrol show config | grep -i PriorityWeight You can use the sprio to view the factors that comprise a job's scheduling priority and were your (pending) jobs stands in the priority queue. sprio Utility usage Show current weights sprio -w List pending jobs, sorted by jobid sprio [ -n ] # OR: sp List pending jobs, sorted by priority sprio [-n] -S+Y sprio [-n] | sort -k 3 -n sprio [-n] -l | sort -k 4 -n Getting the priority given to a job can be done either with squeue : # /!\\ ADAPT accordingly squeue -o %Q -j ","title":"Priority Factors"},{"location":"jobs/priority/#backfill-scheduling","text":"Backfill is a mechanism by which lower priority jobs can start earlier to fill the idle slots provided they are finished before the next high priority jobs is expected to start based on resource availability. 
If your job is sufficiently small, it can be backfilled and scheduled in the shadow of a larger, higher-priority job. For more details, see the official Slurm documentation. All users from a higher priority account receive a higher fair share factor than all users from a lower priority account \u21a9","title":"Backfill Scheduling"},{"location":"jobs/reason-codes/","text":"Job Status and Reason Codes \u00b6 The squeue command details a variety of information on an active job\u2019s status with state and reason codes. Job state codes describe a job\u2019s current state in the queue (e.g. pending, completed). Job reason codes describe the reason why the job is in its current state. The following tables outline a variety of job state and reason codes you may encounter when using squeue to check on your jobs. Job State Codes \u00b6 Status Code Explanation CANCELLED CA The job was explicitly cancelled by the user or system administrator. COMPLETED CD The job has completed successfully. COMPLETING CG The job is finishing but some processes are still active. DEADLINE DL The job terminated on deadline. FAILED F The job terminated with a non-zero exit code and failed to execute. NODE_FAIL NF The job terminated due to failure of one or more allocated nodes. OUT_OF_MEMORY OOM The job experienced an out-of-memory error. PENDING PD The job is waiting for resource allocation. It will eventually run. PREEMPTED PR The job was terminated because of preemption by another job. RUNNING R The job is currently allocated to a node and is running. SUSPENDED S A running job has been stopped with its cores released to other jobs. STOPPED ST A running job has been stopped with its cores retained. TIMEOUT TO The job terminated upon reaching its time limit. A full list of these Job State codes can be found in the squeue documentation or the sacct documentation . Job Reason Codes \u00b6 Reason Code Explanation Priority One or more higher priority jobs are queued ahead of yours. Your job will eventually run. Dependency This job is waiting for a dependent job to complete and will run afterwards. Resources The job is waiting for resources to become available and will eventually run. InvalidAccount The job\u2019s account is invalid. Cancel the job and rerun with the correct account. InvalidQoS The job\u2019s QoS is invalid. Cancel the job and rerun with the correct QoS. QOSGrpCpuLimit All CPUs assigned to your job\u2019s specified QoS are in use; job will run eventually. QOSGrpMaxJobsLimit The maximum number of jobs for your job\u2019s QoS has been met; job will run eventually. QOSGrpNodeLimit All nodes assigned to your job\u2019s specified QoS are in use; job will run eventually. PartitionCpuLimit All CPUs assigned to your job\u2019s specified partition are in use; job will run eventually. PartitionMaxJobsLimit The maximum number of jobs for your job\u2019s partition has been met; job will run eventually. PartitionNodeLimit All nodes assigned to your job\u2019s specified partition are in use; job will run eventually. AssociationCpuLimit All CPUs assigned to your job\u2019s specified association are in use; job will run eventually. AssociationMaxJobsLimit The maximum number of jobs for your job\u2019s association has been met; job will run eventually. AssociationNodeLimit All nodes assigned to your job\u2019s specified association are in use; job will run eventually. A full list of these Job Reason Codes can be found in Slurm\u2019s documentation.
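To see which of these state and reason codes currently apply to your own jobs, you can for instance ask squeue to print the state and reason columns explicitly (the output format string below is only one possible choice and can be adapted):
# List your pending jobs with their state and the reason they are still waiting
squeue -u $USER -t PD -o \"%.18i %.9P %.8j %.8T %.10M %.6D %R\"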
Running Job Statistics Metrics \u00b6 The sstat command allows users to easily pull up status information about their currently running jobs. This includes information about CPU usage , task information , node information , resident set size (RSS) , and virtual memory (VM) . We can invoke the sstat command as such: # /!\\ ADAPT accordingly $ sstat --jobs = By default, sstat will pull up significantly more information than what would be needed in the commands default output. To remedy this, we can use the --format flag to choose what we want in our output. A chart of some these variables are listed in the table below: Variable Description avecpu Average CPU time of all tasks in job. averss Average resident set size of all tasks. avevmsize Average virtual memory of all tasks in a job. jobid The id of the Job. maxrss Maximum number of bytes read by all tasks in the job. maxvsize Maximum number of bytes written by all tasks in the job. ntasks Number of tasks in a job. For an example, let's print out a job's average job id, cpu time, max rss, and number of tasks. We can do this by typing out the command: # /!\\ ADAPT accordingly sstat --jobs = --format = jobid,cputime,maxrss,ntasks A full list of variables that specify data handled by sstat can be found with the --helpformat flag or by visiting the slurm page on sstat . Past Job Statistics Metrics \u00b6 You can use the custom susage function in /etc/profile.d/slurm.sh to collect statistics information. $ susage -h Usage: susage [-m] [-Y] [-S YYYY-MM-DD] [-E YYYT-MM-DD] For a specific user (if accounting rights granted): susage [...] -u For a specific account (if accounting rights granted): susage [...] -A Display past job usage summary But by default, you should use the sacct command allows users to pull up status information about past jobs. This command is very similar to sstat , but is used on jobs that have been previously run on the system instead of currently running jobs. # /!\\ ADAPT accordingly $ sacct [ -X ] --jobs = [ --format = metric1,... ] # OR, for a user, eventually between a Start and End date $ sacct [ -X ] -u $USER [ -S YYYY-MM-DD ] [ -E YYYY-MM-DD ] [ --format = metric1,... ] # OR, for an account - ADAPT accordingly $ sacct [ -X ] -A [ --format = metric1,... ] Use -X to aggregate the statistics relevant to the job allocation itself, not taking job steps into consideration. The main metrics code you may be interested to review are listed below. Variable Description account Account the job ran under. avecpu Average CPU time of all tasks in job. averss Average resident set size of all tasks in the job. cputime Formatted (Elapsed time * CPU) count used by a job or step. elapsed Jobs elapsed time formated as DD-HH:MM:SS. exitcode The exit code returned by the job script or salloc. jobid The id of the Job. jobname The name of the Job. maxdiskread Maximum number of bytes read by all tasks in the job. maxdiskwrite Maximum number of bytes written by all tasks in the job. maxrss Maximum resident set size of all tasks in the job. ncpus Amount of allocated CPUs. nnodes The number of nodes used in a job. ntasks Number of tasks in a job. priority Slurm priority. qos Quality of service. reqcpu Required number of CPUs reqmem Required amount of memory for a job. 
reqtres Required Trackable RESources (TRES) user Userna A full list of variables that specify data handled by sacct can be found with the --helpformat flag or by visiting the slurm page on sacct .","title":"Job State and Reason Code"},{"location":"jobs/reason-codes/#job-status-and-reason-codes","text":"The squeue command details a variety of information on an active job\u2019s status with state and reason codes. Job state codes describe a job\u2019s current state in queue (e.g. pending, completed). Job reason codes describe the reason why the job is in its current state. The following tables outline a variety of job state and reason codes you may encounter when using squeue to check on your jobs.","title":"Job Status and Reason Codes"},{"location":"jobs/reason-codes/#job-state-codes","text":"Status Code Explaination CANCELLED CA The job was explicitly cancelled by the user or system administrator. COMPLETED CD The job has completed successfully. COMPLETING CG The job is finishing but some processes are still active. DEADLINE DL The job terminated on deadline FAILED F The job terminated with a non-zero exit code and failed to execute. NODE_FAIL NF The job terminated due to failure of one or more allocated nodes OUT_OF_MEMORY OOM The Job experienced an out of memory error. PENDING PD The job is waiting for resource allocation. It will eventually run. PREEMPTED PR The job was terminated because of preemption by another job. RUNNING R The job currently is allocated to a node and is running. SUSPENDED S A running job has been stopped with its cores released to other jobs. STOPPED ST A running job has been stopped with its cores retained. TIMEOUT TO Job terminated upon reaching its time limit. A full list of these Job State codes can be found in squeue documentation. or sacct documentation .","title":"Job State Codes"},{"location":"jobs/reason-codes/#job-reason-codes","text":"Reason Code Explaination Priority One or more higher priority jobs is in queue for running. Your job will eventually run. Dependency This job is waiting for a dependent job to complete and will run afterwards. Resources The job is waiting for resources to become available and will eventually run. InvalidAccount The job\u2019s account is invalid. Cancel the job and rerun with correct account. InvaldQoS The job\u2019s QoS is invalid. Cancel the job and rerun with correct account. QOSGrpCpuLimit All CPUs assigned to your job\u2019s specified QoS are in use; job will run eventually. QOSGrpMaxJobsLimit Maximum number of jobs for your job\u2019s QoS have been met; job will run eventually. QOSGrpNodeLimit All nodes assigned to your job\u2019s specified QoS are in use; job will run eventually. PartitionCpuLimit All CPUs assigned to your job\u2019s specified partition are in use; job will run eventually. PartitionMaxJobsLimit Maximum number of jobs for your job\u2019s partition have been met; job will run eventually. PartitionNodeLimit All nodes assigned to your job\u2019s specified partition are in use; job will run eventually. AssociationCpuLimit All CPUs assigned to your job\u2019s specified association are in use; job will run eventually. AssociationMaxJobsLimit Maximum number of jobs for your job\u2019s association have been met; job will run eventually. AssociationNodeLimit All nodes assigned to your job\u2019s specified association are in use; job will run eventually. 
A full list of these Job Reason Codes can be found in Slurm\u2019s documentation.","title":"Job Reason Codes"},{"location":"jobs/reason-codes/#running-job-statistics-metrics","text":"The sstat command allows users to easily pull up status information about their currently running jobs. This includes information about CPU usage , task information , node information , resident set size (RSS) , and virtual memory (VM) . We can invoke the sstat command as such: # /!\\ ADAPT accordingly $ sstat --jobs = By default, sstat will pull up significantly more information than what would be needed in the commands default output. To remedy this, we can use the --format flag to choose what we want in our output. A chart of some these variables are listed in the table below: Variable Description avecpu Average CPU time of all tasks in job. averss Average resident set size of all tasks. avevmsize Average virtual memory of all tasks in a job. jobid The id of the Job. maxrss Maximum number of bytes read by all tasks in the job. maxvsize Maximum number of bytes written by all tasks in the job. ntasks Number of tasks in a job. For an example, let's print out a job's average job id, cpu time, max rss, and number of tasks. We can do this by typing out the command: # /!\\ ADAPT accordingly sstat --jobs = --format = jobid,cputime,maxrss,ntasks A full list of variables that specify data handled by sstat can be found with the --helpformat flag or by visiting the slurm page on sstat .","title":"Running Job Statistics Metrics"},{"location":"jobs/reason-codes/#past-job-statistics-metrics","text":"You can use the custom susage function in /etc/profile.d/slurm.sh to collect statistics information. $ susage -h Usage: susage [-m] [-Y] [-S YYYY-MM-DD] [-E YYYT-MM-DD] For a specific user (if accounting rights granted): susage [...] -u For a specific account (if accounting rights granted): susage [...] -A Display past job usage summary But by default, you should use the sacct command allows users to pull up status information about past jobs. This command is very similar to sstat , but is used on jobs that have been previously run on the system instead of currently running jobs. # /!\\ ADAPT accordingly $ sacct [ -X ] --jobs = [ --format = metric1,... ] # OR, for a user, eventually between a Start and End date $ sacct [ -X ] -u $USER [ -S YYYY-MM-DD ] [ -E YYYY-MM-DD ] [ --format = metric1,... ] # OR, for an account - ADAPT accordingly $ sacct [ -X ] -A [ --format = metric1,... ] Use -X to aggregate the statistics relevant to the job allocation itself, not taking job steps into consideration. The main metrics code you may be interested to review are listed below. Variable Description account Account the job ran under. avecpu Average CPU time of all tasks in job. averss Average resident set size of all tasks in the job. cputime Formatted (Elapsed time * CPU) count used by a job or step. elapsed Jobs elapsed time formated as DD-HH:MM:SS. exitcode The exit code returned by the job script or salloc. jobid The id of the Job. jobname The name of the Job. maxdiskread Maximum number of bytes read by all tasks in the job. maxdiskwrite Maximum number of bytes written by all tasks in the job. maxrss Maximum resident set size of all tasks in the job. ncpus Amount of allocated CPUs. nnodes The number of nodes used in a job. ntasks Number of tasks in a job. priority Slurm priority. qos Quality of service. reqcpu Required number of CPUs reqmem Required amount of memory for a job. 
reqtres Required Trackable RESources (TRES) user Userna A full list of variables that specify data handled by sacct can be found with the --helpformat flag or by visiting the slurm page on sacct .","title":"Past Job Statistics Metrics"},{"location":"jobs/submit/","text":"Regular Jobs \u00b6 Node Type Slurm command regular sbatch [-A ] -p batch [--qos {high,urgent}] [-C {broadwell,skylake}] [...] gpu sbatch [-A ] -p gpu [--qos {high,urgent}] [-C volta[32]] -G 1 [...] bigmem sbatch [-A ] -p bigmem [--qos {high,urgent}] [...] Main Slurm commands Resource Allocation guide sbatch [...] /path/to/launcher \u00b6 sbatch is used to submit a batch launcher script for later execution, corresponding to batch/passive submission mode . The script will typically contain one or more srun commands to launch parallel tasks. Upon submission with sbatch , Slurm will: allocate resources (nodes, tasks, partition, constraints, etc.) runs a single copy of the batch script on the first allocated node in particular, if you depend on other scripts, ensure you have refer to them with the complete path toward them. When you submit the job, Slurm responds with the job's ID, which will be used to identify this job in reports from Slurm. # /!\\ ADAPT path to launcher accordingly $ sbatch .sh Submitted batch job 864933 Job Submission Option \u00b6 There are several useful environment variables set be Slurm within an allocated job. The most important ones are detailed in the below table which summarizes the main job submission options offered with {sbatch | srun | salloc} [...] : Command-line option Description Example -N Nodes request -N 2 --ntasks-per-node= Tasks-per-node request --ntasks-per-node=28 --ntasks-per-socket= Tasks-per-socket request --ntasks-per-socket=14 -c Cores-per-task request (multithreading) -c 1 --mem=GB GB memory per node request --mem 0 -t [DD-]HH[:MM:SS]> Walltime request -t 4:00:00 -G GPU(s) request -G 4 -C Feature request ( broadwell,skylake... ) -C skylake -p Specify job partition/queue --qos Specify job qos -A Specify account -J Job name -J MyApp -d Job dependency -d singleton --mail-user= Specify email address --mail-type= Notify user by email when certain event types occur. --mail-type=END,FAIL At a minimum a job submission script must include number of nodes, time, type of partition and nodes (resource allocation constraint and features), and quality of service (QOS). If a script does not specify any of these options then a default may be applied. The full list of directives is documented in the man pages for the sbatch command (see. man sbatch ). Within a job, you aim at running a certain number of tasks , and Slurm allow for a fine-grain control of the resource allocation that must be satisfied for each task. Beware of Slurm terminology in Multicore Architecture ! Slurm Node = Physical node , specified with -N <#nodes> Advice : always explicit number of expected number of tasks per node using --ntasks-per-node . This way you control the node footprint of your job. Slurm Socket = Physical Socket/CPU/Processor Advice : if possible, explicit also the number of expected number of tasks per socket (processor) using --ntasks-per-socket . relations between and must be aligned with the physical NUMA characteristics of the node. For instance on aion nodes, = 8* For instance on iris regular nodes, =2* when on iris bigmem nodes, =4* . ( the most confusing ): Slurm CPU = Physical CORE use -c <#threads> to specify the number of cores reserved per task. 
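As a minimal sketch of how these options combine in a batch launcher -- assuming a regular iris dual-CPU node with 2 sockets of 14 cores each; the application path is hypothetical and the values must be adapted to your workload:
#!/bin/bash -l
#SBATCH -N 1                  # 1 physical node
#SBATCH --ntasks-per-node=4   # 4 tasks on this node...
#SBATCH --ntasks-per-socket=2 # ...spread as 2 tasks per socket (2 sockets)
#SBATCH -c 7                  # 7 cores per task, i.e. 2 x 7 = 14 cores per socket
#SBATCH --time=01:00:00
#SBATCH -p batch
srun /path/to/your/app        # hypothetical application path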
Hyper-Threading (HT) Technology is disabled on all ULHPC compute nodes. In particular: assume #cores = #threads , thus when using -c , you can safely set OMP_NUM_THREADS = ${ SLURM_CPUS_PER_TASK :- 1 } # Default to 1 if SLURM_CPUS_PER_TASK not set to automatically abstract from the job context you have interest to match the physical NUMA characteristics of the compute node you're running at (Ex: target 16 threads per socket on Aion nodes (as there are 8 virtual sockets per nodes, 14 threads per socket on Iris regular nodes). The total number of tasks defined in a given job is stored in the $SLURM_NTASKS environment variable. The --cpus-per-task option of srun in Slurm 23.11 and later In the latest versions of Slurm srun inherits the --cpus-per-task value requested by salloc or sbatch by reading the value of SLURM_CPUS_PER_TASK , as for any other option. This behavior may differ from some older versions where special handling was required to propagate the --cpus-per-task option to srun . In case you would like to launch multiple programs in a single allocation/batch script, divide the resources accordingly by requesting resources with srun when launching the process, for instance: srun --cpus-per-task --ntasks [ ... ] We encourage you to always explicitly specify upon resource allocation the number of tasks you want per node/socket ( --ntasks-per-node --ntasks-per-socket ), to easily scale on multiple nodes with -N . Adapt the number of threads and the settings to match the physical NUMA characteristics of the nodes Aion 16 cores per socket and 8 (virtual) sockets (CPUs) per aion node. {sbatch|srun|salloc|si} [-N ] --ntasks-per-node <8n> --ntasks-per-socket -c Total : \\times 8\\times \\times 8\\times tasks, each on threads Ensure \\times \\times = 16 Ex: -N 2 --ntasks-per-node 32 --ntasks-per-socket 4 -c 4 ( Total : 64 tasks) Iris (default Dual-CPU) 14 cores per socket and 2 sockets (physical CPUs) per regular iris . {sbatch|srun|salloc|si} [-N ] --ntasks-per-node <2n> --ntasks-per-socket -c Total : \\times 2\\times \\times 2\\times tasks, each on threads Ensure \\times \\times = 14 Ex: -N 2 --ntasks-per-node 4 --ntasks-per-socket 2 -c 7 ( Total : 8 tasks) Iris (Bigmem) 28 cores per socket and 4 sockets (physical CPUs) per bigmem iris {sbatch|srun|salloc|si} [-N ] --ntasks-per-node <4n> --ntasks-per-socket -c Total : \\times 4\\times \\times 4\\times tasks, each on threads Ensure \\times \\times = 28 Ex: -N 2 --ntasks-per-node 8 --ntasks-per-socket 2 -c 14 ( Total : 16 tasks) Careful Monitoring of your Jobs \u00b6 Bug DON'T LEAVE your jobs running WITHOUT monitoring them and ensure they are not abusing of the computational resources allocated for you!!! ULHPC Tutorial / Getting Started You will find below several ways to monitor the effective usage of the resources allocated (for running jobs) as well as the general efficiency (Average Walltime Accuracy, CPU/Memory efficiency etc.) for past jobs. Joining/monitoring running jobs \u00b6 sjoin \u00b6 At any moment of time, you can join a running job using the custom helper functions sjoin in another terminal (or another screen/tmux tab/window). The format is as follows: sjoin [ -w ] # Use to automatically complete among your jobs Using sjoin to htop your processes # check your running job ( access ) $> sq # squeue -u $(whoami) JOBID PARTIT QOS NAME USER NODE CPUS ST TIME TIME_LEFT PRIORITY NODELIST ( REASON ) 2171206 [ ... 
] # Connect to your running job, identified by its Job ID ( access ) $> sjoin 2171206 # /!\\ ADAPT accordingly, use to have it autocatically completed # Equivalent of: srun --jobid 2171206 --gres=gpu:0 --pty bash -i ( node ) $> htop # view of all processes # F5: tree view # u : filter by process of # q: quit On the [impossibility] to monitor passive GPU jobs over sjoin If you use sjoin to join a GPU job, you WON'T be able to see the allocated GPU activity with nvidia-smi and all the monitoring tools provided by NVidia. The reason is that currently, there is no way to perform an over-allocation of a Slurm Generic Resource (GRES) as our GPU cards, that means you can't create ( e.g. with sjoin or srun --jobid [...] ) job steps with access to GPUs which are bound to another step. To keep sjoin working with gres job, you MUST add \" --gres=none \" You can use a direct connection with ssh or clush -w @job: for that (see below) but be aware that confined context is NOT maintained that way and that you will see the GPU processes on all 4 GPU cards. ClusterShell \u00b6 Danger Only for VERY Advanced users!!! . You should know what you are doing when using ClusterShell as you can mistakenly generate a huge amount of remote commands across the cluster which, while they will likely fail, still induce an unexpected load that may disturb the system. ClusterShell is a useful Python package for executing arbitrary commands across multiple hosts. On the ULHPC clusters, it provides a relatively simple way for you to run commands on nodes your jobs are running on, and collect the results. Info You can only ssh to, and therefore run clush on, nodes where you have active/running jobs. nodeset \u00b6 The nodeset command enables the easy manipulation of node sets, as well as node groups, at the command line level. It uses sinfo underneath but has slightly different syntax. You can use it to ask about node states and nodes your job is running on. The nice difference is you can ask for folded (e.g. iris-[075,078,091-092] ) or expanded (e.g. iris-075 iris-078 iris-091 iris-092 ) forms of the node lists. Command description nodeset -L[LL] List all groups available nodeset -c [...] show number of nodes in nodeset(s) nodeset -e [...] expand nodeset(s) to separate nodes nodeset -f [...] fold nodeset(s) (or separate nodes) into one nodeset Nodeset expansion and folding nodeset -e (expand) # Get list of nodes with issues $ sinfo -R --noheader -o \"%N\" iris- [ 005 -008,017,161-162 ] # ... and expand that list $ sinfo -R --noheader -o \"%N\" | nodeset -e iris-005 iris-006 iris-007 iris-008 iris-017 iris-161 iris-162 # Actually equivalent of (see below) $ nodeset -e @state:drained nodeset -f (fold) # List nodes in IDLE state $> sinfo -t IDLE --noheader interactive up 4 :00:00 4 idle iris- [ 003 -005,007 ] long up 30 -00:00:0 2 idle iris- [ 015 -016 ] batch* up 5 -00:00:00 1 idle iris-134 gpu up 5 -00:00:00 9 idle iris- [ 170 ,173,175-178,181 ] bigmem up 5 -00:00:00 0 n/a # make out a synthetic list $> sinfo -t IDLE --noheader | awk '{ print $6 }' | nodeset -f iris- [ 003 -005,007,015-016,134,170,173,175-178,181 ] # ... 
actually done when restricting the column to nodelist only $> sinfo -t IDLE --noheader -o \"%N\" iris- [ 003 -005,007,015-016,134,170,173,175-178,181 ] # Actually equivalent of (see below) $ nodeset -f @state:idle Exclusion / intersection of nodeset Option Description -x exclude from working set -i intersection from working set with -X ( --xor ) elements that are in exactly one of the working set and # Exclusion $> nodeset -f iris- [ 001 -010 ] -x iris- [ 003 -005,007,015-016 ] iris- [ 001 -002,006,008-010 ] # Intersection $> nodeset -f iris- [ 001 -010 ] -i iris- [ 003 -005,007,015-016 ] iris- [ 003 -005,007 ] # \"XOR\" (one occurrence only) $> nodeset -f iris- [ 001 -010 ] -x iris-006 -X iris- [ 005 -007 ] iris- [ 001 -004,006,008-010 ] The groups useful to you that we have configured are @user , @job and @state . List available groups $ nodeset -LLL # convenient partition groups @batch iris- [ 001 -168 ] 168 @bigmem iris- [ 187 -190 ] 4 @gpu iris- [ 169 -186,191-196 ] 24 @interactive iris- [ 001 -196 ] 196 # conveniente state groups @state:allocated [ ... ] @state:idle [ ... ] @state:mixed [ ... ] @state:reserved [ ... ] # your individual jobs @job:2252046 iris-076 1 @job:2252050 iris- [ 191 -196 ] 6 # all the jobs under your username @user:svarrette iris- [ 076 ,191-196 ] 7 User group List expanded node names where you have jobs running # Similar to: squeue -h -u $USER -o \"%N\"|nodeset -e $ nodeset -e @user: $USER Job group List folded nodes where your job 1234567 is running (use sq to quickly list your jobs): $ similar to squeue -h -j 1234567 -o \"%N\" nodeset -f @job:1234567 State group List expanded node names that are idle according to slurm # Similar to: sinfo -t IDLE -o \"%N\" nodeset -e @state:idle clush \u00b6 clush can run commands on multiple nodes at once for instance to monitor you jobs. It uses the node grouping syntax from [ nodeset ](( https://clustershell.readthedocs.io/en/latest/tools/nodeset.html ) to allow you to run commands on those nodes. clush uses ssh to connect to each of these nodes. You can use the -b option to gather output from nodes with same output into the same lines. Leaving this out will report on each node separately. Option Description -b gathering output (as when piping to dshbak -c ) -w specify remote hosts, incl. node groups with @group special syntax -g similar to -w @ , restrict commands to the hosts group --diff show differences between common outputs Monitor CPU usage Show %cpu, memory usage, and command for all nodes running any of your jobs. clush -bw @user: $USER ps -u $USER -o%cpu,rss,cmd As above, but only for the nodes reserved with your job clush -bw @job: ps -u $USER -o%cpu,rss,cmd Monitor GPU usage Show what's running on all the GPUs on the nodes associated with your job 654321 . clush -bw @job:654321 bash -l -c 'nvidia-smi --format=csv --query-compute-apps=process_name,used_gpu_memory' As above but for all your jobs (assuming you have only GPU nodes with all GPUs) clush -bw @user: $USER bash -l -c 'nvidia-smi --format=csv --query-compute-apps=process_name,used_gpu_memory' This may be convenient for passive jobs since the sjoin utility does NOT permit to run nvidia-smi (see explaination ). However that way you will see unfortunately ALL processes running on the 4 GPU cards -- including from other users sharing your nodes. It's a known bug, not a feature. 
pestat : CPU/Mem usage report \u00b6 We have deployed the (excellent) Slurm tool pestat (Processor Element status) by Ole Holm Nielsen, which you can use to quickly check the CPU/Memory usage of your jobs. Information deserving investigation (too low/high CPU or Memory usage compared to the allocation) will be flagged in Red or Magenta. pestat [-p ] [-G] [-f] pestat output (official sample output) General Guidelines \u00b6 As mentioned before, always check your node activity with at least htop on all the allocated nodes to ensure you use them as expected. Several cases might apply to your job workflow: Single Node, single core You are dealing with an embarrassingly parallel job campaign; this approach is bad and overloads the scheduler unnecessarily. You will also quickly cross the limits set in terms of maximum number of jobs. You must aggregate multiple tasks within a single job to fully exploit a complete node. In particular, you MUST consider using GNU Parallel and our generic GNU launcher launcher.parallel.sh . ULHPC Tutorial / HPC Management of Embarrassingly Parallel Jobs Single Node, multi-core If you asked for more than a core in your job (> 1 tasks, -c where > 1), there are 3 typical situations you MUST analyse (and pestat or htop are of great help for that): You cannot see the expected activity (only 1 core seems to be active at 100%): then you should review your workflow as you are under-exploiting (and thus probably wasting ) the allocated resources. You have the expected activity on the requested cores (Ex: the 28 cores were requested, and htop reports a significant usage of all cores) BUT the CPU load of the system exceeds the core capacity of the computing node . That means you are forking too many processes and overloading/harming the system. For instance on a regular iris (resp. aion ) node, a CPU load above 28 (resp. 128) is suspect. Note that we use LBNL Node Health Check (NHC) to automatically drain nodes for which the load exceeds twice the core capacity. An analogy for a single core load with the amount of cars possible in a single-lane bridge or tunnel is illustrated below ( source ). Like the bridge/tunnel operator, you'd like your cars/processes to never be waiting, otherwise you are harming the system. Imagine this analogy for the amount of cores available on a computing node to better represent the situation on a single core. You have the expected activity on the requested cores and the load matches your allocation without harming the system: you're good to go! Multi-node If you asked for more than ONE node , ensure that you have considered the following questions. You are running an MPI job : you generally know what you're doing, YET ensure you followed the single-node monitoring checks ( htop etc., yet across all nodes) to review your core activity on ALL nodes (see 3. below) .
You MUST assert that your [slave] processes are really run on the over nodes using # check you running job $ sq # Join **another** node than the first one listed $ sjoin -w $ htop # view of all processes # F5: tree view # u : filter by process of # q: quit Monitoring past jobs efficiency \u00b6 Walltime estimation and Job efficiency By default, none of the regular jobs you submit can exceed a walltime of 2 days ( 2-00:00:00 ). You have a strong interest to estimate accurately the walltime of your jobs. While it is not always possible, or quite hard to guess at the beginning of a given job campaign where you'll probably ask for the maximum walltime possible, you should look back as your historical usage for the past efficiency and elapsed time of your previously completed jobs using seff or susage utilities . Update the time constraint [#SBATCH] -t [...] of your jobs accordingly. There are two immediate benefits for you: Short jobs are scheduled faster, and may even be elligible for backfilling You will be more likely elligible for a raw share upgrade of your user account -- see Fairsharing The below utilities will help you track the CPU/Memory efficiency ( seff ) or the Average Walltime Accuracy ( susage , sacct ) of your past jobs seff \u00b6 Use seff to double check a past job CPU/Memory efficiency. Below examples should be self-speaking: Good CPU Eff. $ seff 2171749 Job ID: 2171749 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 28 CPU Utilized: 41-01:38:14 CPU Efficiency: 99.64% of 41-05:09:44 core-walltime Job Wall-clock time: 1-11:19:38 Memory Utilized: 2.73 GB Memory Efficiency: 2.43% of 112.00 GB Good Memory Eff. $ seff 2117620 Job ID: 2117620 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 16 CPU Utilized: 14:24:49 CPU Efficiency: 23.72% of 2-12:46:24 core-walltime Job Wall-clock time: 03:47:54 Memory Utilized: 193.04 GB Memory Efficiency: 80.43% of 240.00 GB Good CPU and Memory Eff. $ seff 2138087 Job ID: 2138087 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 64 CPU Utilized: 87-16:58:22 CPU Efficiency: 86.58% of 101-07:16:16 core-walltime Job Wall-clock time: 1-13:59:19 Memory Utilized: 1.64 TB Memory Efficiency: 99.29% of 1.65 TB [Very] Bad efficiency This illustrates a very bad job in terms of CPU/memory efficiency (below 4%), which illustrate a case where basically the user wasted 4 hours of computation while mobilizing a full node and its 28 cores. $ seff 2199497 Job ID: 2199497 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 28 CPU Utilized: 00:08:33 CPU Efficiency: 3.55% of 04:00:48 core-walltime Job Wall-clock time: 00:08:36 Memory Utilized: 55.84 MB Memory Efficiency: 0.05% of 112.00 GB This is typical of a single-core task can could be drastically improved via GNU Parallel . Note however that demonstrating a CPU good efficiency with seff may not be enough! You may still induce an abnormal load on the reserved nodes if you spawn more processes than allowed by the Slurm reservation. To avoid that, always try to prefix your executions with srun within your launchers. See also Specific Resource Allocations . susage \u00b6 Use susage to check your past jobs walltime accuracy ( Timelimit vs. Elapsed ) $ susage -h Usage: susage [-m] [-Y] [-S YYYY-MM-DD] [-E YYYT-MM-DD] For a specific user (if accounting rights granted): susage [...] 
-u For a specific account (if accounting rights granted): susage [...] -A Display past job usage summary In all cases, if you are confident that your jobs will last more than 2 days while efficiently using the allocated resources , you can use --qos long QOS. Be aware that special restrictions applies for this kind of jobs.","title":"Passive/Batch Job"},{"location":"jobs/submit/#regular-jobs","text":"Node Type Slurm command regular sbatch [-A ] -p batch [--qos {high,urgent}] [-C {broadwell,skylake}] [...] gpu sbatch [-A ] -p gpu [--qos {high,urgent}] [-C volta[32]] -G 1 [...] bigmem sbatch [-A ] -p bigmem [--qos {high,urgent}] [...] Main Slurm commands Resource Allocation guide","title":"Regular Jobs"},{"location":"jobs/submit/#sbatch-pathtolauncher","text":"sbatch is used to submit a batch launcher script for later execution, corresponding to batch/passive submission mode . The script will typically contain one or more srun commands to launch parallel tasks. Upon submission with sbatch , Slurm will: allocate resources (nodes, tasks, partition, constraints, etc.) runs a single copy of the batch script on the first allocated node in particular, if you depend on other scripts, ensure you have refer to them with the complete path toward them. When you submit the job, Slurm responds with the job's ID, which will be used to identify this job in reports from Slurm. # /!\\ ADAPT path to launcher accordingly $ sbatch .sh Submitted batch job 864933","title":"sbatch [...] /path/to/launcher"},{"location":"jobs/submit/#job-submission-option","text":"There are several useful environment variables set be Slurm within an allocated job. The most important ones are detailed in the below table which summarizes the main job submission options offered with {sbatch | srun | salloc} [...] : Command-line option Description Example -N Nodes request -N 2 --ntasks-per-node= Tasks-per-node request --ntasks-per-node=28 --ntasks-per-socket= Tasks-per-socket request --ntasks-per-socket=14 -c Cores-per-task request (multithreading) -c 1 --mem=GB GB memory per node request --mem 0 -t [DD-]HH[:MM:SS]> Walltime request -t 4:00:00 -G GPU(s) request -G 4 -C Feature request ( broadwell,skylake... ) -C skylake -p Specify job partition/queue --qos Specify job qos -A Specify account -J Job name -J MyApp -d Job dependency -d singleton --mail-user= Specify email address --mail-type= Notify user by email when certain event types occur. --mail-type=END,FAIL At a minimum a job submission script must include number of nodes, time, type of partition and nodes (resource allocation constraint and features), and quality of service (QOS). If a script does not specify any of these options then a default may be applied. The full list of directives is documented in the man pages for the sbatch command (see. man sbatch ). Within a job, you aim at running a certain number of tasks , and Slurm allow for a fine-grain control of the resource allocation that must be satisfied for each task. Beware of Slurm terminology in Multicore Architecture ! Slurm Node = Physical node , specified with -N <#nodes> Advice : always explicit number of expected number of tasks per node using --ntasks-per-node . This way you control the node footprint of your job. Slurm Socket = Physical Socket/CPU/Processor Advice : if possible, explicit also the number of expected number of tasks per socket (processor) using --ntasks-per-socket . relations between and must be aligned with the physical NUMA characteristics of the node. 
For instance on aion nodes, = 8* For instance on iris regular nodes, =2* when on iris bigmem nodes, =4* . ( the most confusing ): Slurm CPU = Physical CORE use -c <#threads> to specify the number of cores reserved per task. Hyper-Threading (HT) Technology is disabled on all ULHPC compute nodes. In particular: assume #cores = #threads , thus when using -c , you can safely set OMP_NUM_THREADS = ${ SLURM_CPUS_PER_TASK :- 1 } # Default to 1 if SLURM_CPUS_PER_TASK not set to automatically abstract from the job context you have interest to match the physical NUMA characteristics of the compute node you're running at (Ex: target 16 threads per socket on Aion nodes (as there are 8 virtual sockets per nodes, 14 threads per socket on Iris regular nodes). The total number of tasks defined in a given job is stored in the $SLURM_NTASKS environment variable. The --cpus-per-task option of srun in Slurm 23.11 and later In the latest versions of Slurm srun inherits the --cpus-per-task value requested by salloc or sbatch by reading the value of SLURM_CPUS_PER_TASK , as for any other option. This behavior may differ from some older versions where special handling was required to propagate the --cpus-per-task option to srun . In case you would like to launch multiple programs in a single allocation/batch script, divide the resources accordingly by requesting resources with srun when launching the process, for instance: srun --cpus-per-task --ntasks [ ... ] We encourage you to always explicitly specify upon resource allocation the number of tasks you want per node/socket ( --ntasks-per-node --ntasks-per-socket ), to easily scale on multiple nodes with -N . Adapt the number of threads and the settings to match the physical NUMA characteristics of the nodes Aion 16 cores per socket and 8 (virtual) sockets (CPUs) per aion node. {sbatch|srun|salloc|si} [-N ] --ntasks-per-node <8n> --ntasks-per-socket -c Total : \\times 8\\times \\times 8\\times tasks, each on threads Ensure \\times \\times = 16 Ex: -N 2 --ntasks-per-node 32 --ntasks-per-socket 4 -c 4 ( Total : 64 tasks) Iris (default Dual-CPU) 14 cores per socket and 2 sockets (physical CPUs) per regular iris . {sbatch|srun|salloc|si} [-N ] --ntasks-per-node <2n> --ntasks-per-socket -c Total : \\times 2\\times \\times 2\\times tasks, each on threads Ensure \\times \\times = 14 Ex: -N 2 --ntasks-per-node 4 --ntasks-per-socket 2 -c 7 ( Total : 8 tasks) Iris (Bigmem) 28 cores per socket and 4 sockets (physical CPUs) per bigmem iris {sbatch|srun|salloc|si} [-N ] --ntasks-per-node <4n> --ntasks-per-socket -c Total : \\times 4\\times \\times 4\\times tasks, each on threads Ensure \\times \\times = 28 Ex: -N 2 --ntasks-per-node 8 --ntasks-per-socket 2 -c 14 ( Total : 16 tasks)","title":"Job Submission Option"},{"location":"jobs/submit/#careful-monitoring-of-your-jobs","text":"Bug DON'T LEAVE your jobs running WITHOUT monitoring them and ensure they are not abusing of the computational resources allocated for you!!! ULHPC Tutorial / Getting Started You will find below several ways to monitor the effective usage of the resources allocated (for running jobs) as well as the general efficiency (Average Walltime Accuracy, CPU/Memory efficiency etc.) 
for past jobs.","title":"Careful Monitoring of your Jobs"},{"location":"jobs/submit/#joiningmonitoring-running-jobs","text":"","title":"Joining/monitoring running jobs"},{"location":"jobs/submit/#sjoin","text":"At any moment of time, you can join a running job using the custom helper functions sjoin in another terminal (or another screen/tmux tab/window). The format is as follows: sjoin [ -w ] # Use to automatically complete among your jobs Using sjoin to htop your processes # check your running job ( access ) $> sq # squeue -u $(whoami) JOBID PARTIT QOS NAME USER NODE CPUS ST TIME TIME_LEFT PRIORITY NODELIST ( REASON ) 2171206 [ ... ] # Connect to your running job, identified by its Job ID ( access ) $> sjoin 2171206 # /!\\ ADAPT accordingly, use to have it autocatically completed # Equivalent of: srun --jobid 2171206 --gres=gpu:0 --pty bash -i ( node ) $> htop # view of all processes # F5: tree view # u : filter by process of # q: quit On the [impossibility] to monitor passive GPU jobs over sjoin If you use sjoin to join a GPU job, you WON'T be able to see the allocated GPU activity with nvidia-smi and all the monitoring tools provided by NVidia. The reason is that currently, there is no way to perform an over-allocation of a Slurm Generic Resource (GRES) as our GPU cards, that means you can't create ( e.g. with sjoin or srun --jobid [...] ) job steps with access to GPUs which are bound to another step. To keep sjoin working with gres job, you MUST add \" --gres=none \" You can use a direct connection with ssh or clush -w @job: for that (see below) but be aware that confined context is NOT maintained that way and that you will see the GPU processes on all 4 GPU cards.","title":"sjoin"},{"location":"jobs/submit/#clustershell","text":"Danger Only for VERY Advanced users!!! . You should know what you are doing when using ClusterShell as you can mistakenly generate a huge amount of remote commands across the cluster which, while they will likely fail, still induce an unexpected load that may disturb the system. ClusterShell is a useful Python package for executing arbitrary commands across multiple hosts. On the ULHPC clusters, it provides a relatively simple way for you to run commands on nodes your jobs are running on, and collect the results. Info You can only ssh to, and therefore run clush on, nodes where you have active/running jobs.","title":"ClusterShell"},{"location":"jobs/submit/#nodeset","text":"The nodeset command enables the easy manipulation of node sets, as well as node groups, at the command line level. It uses sinfo underneath but has slightly different syntax. You can use it to ask about node states and nodes your job is running on. The nice difference is you can ask for folded (e.g. iris-[075,078,091-092] ) or expanded (e.g. iris-075 iris-078 iris-091 iris-092 ) forms of the node lists. Command description nodeset -L[LL] List all groups available nodeset -c [...] show number of nodes in nodeset(s) nodeset -e [...] expand nodeset(s) to separate nodes nodeset -f [...] fold nodeset(s) (or separate nodes) into one nodeset Nodeset expansion and folding nodeset -e (expand) # Get list of nodes with issues $ sinfo -R --noheader -o \"%N\" iris- [ 005 -008,017,161-162 ] # ... 
and expand that list $ sinfo -R --noheader -o \"%N\" | nodeset -e iris-005 iris-006 iris-007 iris-008 iris-017 iris-161 iris-162 # Actually equivalent of (see below) $ nodeset -e @state:drained nodeset -f (fold) # List nodes in IDLE state $> sinfo -t IDLE --noheader interactive up 4 :00:00 4 idle iris- [ 003 -005,007 ] long up 30 -00:00:0 2 idle iris- [ 015 -016 ] batch* up 5 -00:00:00 1 idle iris-134 gpu up 5 -00:00:00 9 idle iris- [ 170 ,173,175-178,181 ] bigmem up 5 -00:00:00 0 n/a # make out a synthetic list $> sinfo -t IDLE --noheader | awk '{ print $6 }' | nodeset -f iris- [ 003 -005,007,015-016,134,170,173,175-178,181 ] # ... actually done when restricting the column to nodelist only $> sinfo -t IDLE --noheader -o \"%N\" iris- [ 003 -005,007,015-016,134,170,173,175-178,181 ] # Actually equivalent of (see below) $ nodeset -f @state:idle Exclusion / intersection of nodeset Option Description -x exclude from working set -i intersection from working set with -X ( --xor ) elements that are in exactly one of the working set and # Exclusion $> nodeset -f iris- [ 001 -010 ] -x iris- [ 003 -005,007,015-016 ] iris- [ 001 -002,006,008-010 ] # Intersection $> nodeset -f iris- [ 001 -010 ] -i iris- [ 003 -005,007,015-016 ] iris- [ 003 -005,007 ] # \"XOR\" (one occurrence only) $> nodeset -f iris- [ 001 -010 ] -x iris-006 -X iris- [ 005 -007 ] iris- [ 001 -004,006,008-010 ] The groups useful to you that we have configured are @user , @job and @state . List available groups $ nodeset -LLL # convenient partition groups @batch iris- [ 001 -168 ] 168 @bigmem iris- [ 187 -190 ] 4 @gpu iris- [ 169 -186,191-196 ] 24 @interactive iris- [ 001 -196 ] 196 # conveniente state groups @state:allocated [ ... ] @state:idle [ ... ] @state:mixed [ ... ] @state:reserved [ ... ] # your individual jobs @job:2252046 iris-076 1 @job:2252050 iris- [ 191 -196 ] 6 # all the jobs under your username @user:svarrette iris- [ 076 ,191-196 ] 7 User group List expanded node names where you have jobs running # Similar to: squeue -h -u $USER -o \"%N\"|nodeset -e $ nodeset -e @user: $USER Job group List folded nodes where your job 1234567 is running (use sq to quickly list your jobs): $ similar to squeue -h -j 1234567 -o \"%N\" nodeset -f @job:1234567 State group List expanded node names that are idle according to slurm # Similar to: sinfo -t IDLE -o \"%N\" nodeset -e @state:idle","title":"nodeset"},{"location":"jobs/submit/#clush","text":"clush can run commands on multiple nodes at once for instance to monitor you jobs. It uses the node grouping syntax from [ nodeset ](( https://clustershell.readthedocs.io/en/latest/tools/nodeset.html ) to allow you to run commands on those nodes. clush uses ssh to connect to each of these nodes. You can use the -b option to gather output from nodes with same output into the same lines. Leaving this out will report on each node separately. Option Description -b gathering output (as when piping to dshbak -c ) -w specify remote hosts, incl. node groups with @group special syntax -g similar to -w @ , restrict commands to the hosts group --diff show differences between common outputs Monitor CPU usage Show %cpu, memory usage, and command for all nodes running any of your jobs. clush -bw @user: $USER ps -u $USER -o%cpu,rss,cmd As above, but only for the nodes reserved with your job clush -bw @job: ps -u $USER -o%cpu,rss,cmd Monitor GPU usage Show what's running on all the GPUs on the nodes associated with your job 654321 . 
clush -bw @job:654321 bash -l -c 'nvidia-smi --format=csv --query-compute-apps=process_name,used_gpu_memory' As above but for all your jobs (assuming you have only GPU nodes with all GPUs) clush -bw @user: $USER bash -l -c 'nvidia-smi --format=csv --query-compute-apps=process_name,used_gpu_memory' This may be convenient for passive jobs since the sjoin utility does NOT permit to run nvidia-smi (see explaination ). However that way you will see unfortunately ALL processes running on the 4 GPU cards -- including from other users sharing your nodes. It's a known bug, not a feature.","title":"clush"},{"location":"jobs/submit/#pestat-cpumem-usage-report","text":"We have deployed the (excellent) Slurm tool pestat (Processor Element status) of Ole Holm Nielsen that you can use to quickly check the CPU/Memory usage of your jobs. Information deserving investigation (too low/high CPU or Memory usage compared to allocation) will be flagged in Red or Magenta pestat [-p ] [-G] [-f] pestat output (official sample output)","title":"pestat: CPU/Mem usage report"},{"location":"jobs/submit/#general-guidelines","text":"As mentionned before, always check your node activity with at least htop on the all allocated nodes to ensure you use them as expected. Several cases might apply to your job workflow: Single Node, single core You are dealing with an embarrasingly parallel job campaign and this approach is bad and overload the scheduler unnecessarily. You will also quickly cross the limits set in terms of maximum number of jobs. You must aggregate multiples tasks within a single job to exploit fully a complete node. In particular, you MUST consider using GNU Parallel and our generic GNU launcher launcher.parallel.sh . ULHPC Tutorial / HPC Management of Embarrassingly Parallel Jobs Single Node, multi-core If you asked for more than a core in your job (> 1 tasks, -c where > 1), there are 3 typical situations you MUST analysed (and pestat or htop are of great help for that): You cannot see the expected activity (only 1 core seems to be active at 100%), then you should review your workflow as you are under -exploiting (and thus probably waste ) the allocated resources. you have the expected activity on the requested cores (Ex: the 28 cores were requested, and htop reports a significant usage of all cores) BUT the CPU load of the system exceed the core capacity of the computing node . That means you are forking too many processes and overloading/harming the systems. For instance on regular iris (resp. aion ) node, a CPU load above 28 (resp. 128) is suspect. Note that we use LBNL Node Health Check (NHC) to automatically drain nodes for which the load exceed twice the core capacity An analogy for a single core load with the amont of cars possible in a single-lane brige or tunnel is illustrated below ( source ). Like the bridge/tunnel operator, you'd like your cars/processes to never be waiting, otherwise you are harming the system. Imagine this analogy for the amount of cores available on a computing node to better reporesent the situtation on a single core. you have the expected activity on the requested cores and the load match your allocation without harming the system: you're good to go! Multi-node If you asked for more than ONE node , ensure that you have consider the following questions. You are running an MPI job : you generally know what you're doing, YET ensure your followed the single node monitoring checks ( htop etc. yet across all nodes) to review your core activity on ALL nodes (see 3. below) . 
Consider also parallel profilers like Arm Forge You are running an embarrassingly parallel job campaign . You should first ensure you correctly exploit a single node using GNU Parallel before attempting to cross multiple nodes You run a distributed framework able to exploit multiple nodes (typically with a master/slave model as for Spark cluster ). You MUST assert that your [slave] processes are really run on the other nodes using # check your running job $ sq # Join **another** node than the first one listed $ sjoin -w $ htop # view of all processes # F5: tree view # u : filter by process of # q: quit","title":"General Guidelines"},{"location":"jobs/submit/#monitoring-past-jobs-efficiency","text":"Walltime estimation and Job efficiency By default, none of the regular jobs you submit can exceed a walltime of 2 days ( 2-00:00:00 ). You have a strong interest in estimating accurately the walltime of your jobs. While it is not always possible, or quite hard to guess at the beginning of a given job campaign where you'll probably ask for the maximum walltime possible, you should look back at your historical usage for the past efficiency and elapsed time of your previously completed jobs using seff or susage utilities . Update the time constraint [#SBATCH] -t [...] of your jobs accordingly. There are two immediate benefits for you: Short jobs are scheduled faster, and may even be eligible for backfilling You will be more likely eligible for a raw share upgrade of your user account -- see Fairsharing The below utilities will help you track the CPU/Memory efficiency ( seff ) or the Average Walltime Accuracy ( susage , sacct ) of your past jobs","title":"Monitoring past jobs efficiency"},{"location":"jobs/submit/#seff","text":"Use seff to double check a past job CPU/Memory efficiency. The examples below should be self-explanatory: Good CPU Eff. $ seff 2171749 Job ID: 2171749 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 28 CPU Utilized: 41-01:38:14 CPU Efficiency: 99.64% of 41-05:09:44 core-walltime Job Wall-clock time: 1-11:19:38 Memory Utilized: 2.73 GB Memory Efficiency: 2.43% of 112.00 GB Good Memory Eff. $ seff 2117620 Job ID: 2117620 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 16 CPU Utilized: 14:24:49 CPU Efficiency: 23.72% of 2-12:46:24 core-walltime Job Wall-clock time: 03:47:54 Memory Utilized: 193.04 GB Memory Efficiency: 80.43% of 240.00 GB Good CPU and Memory Eff. $ seff 2138087 Job ID: 2138087 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 64 CPU Utilized: 87-16:58:22 CPU Efficiency: 86.58% of 101-07:16:16 core-walltime Job Wall-clock time: 1-13:59:19 Memory Utilized: 1.64 TB Memory Efficiency: 99.29% of 1.65 TB [Very] Bad efficiency This illustrates a very bad job in terms of CPU/memory efficiency (below 4%), which illustrates a case where the user basically wasted 4 hours of computation while mobilizing a full node and its 28 cores. $ seff 2199497 Job ID: 2199497 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 28 CPU Utilized: 00:08:33 CPU Efficiency: 3.55% of 04:00:48 core-walltime Job Wall-clock time: 00:08:36 Memory Utilized: 55.84 MB Memory Efficiency: 0.05% of 112.00 GB This is typical of a single-core task that could be drastically improved via GNU Parallel (a minimal launcher sketch is given below). Note however that demonstrating good CPU efficiency with seff may not be enough!
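To make the aggregation advice above concrete, here is a minimal sketch (not the official launcher.parallel.sh referenced earlier) of a Slurm launcher that packs many short single-core tasks into a single job with GNU Parallel; the input file inputs.txt, the worker script run_case.sh and the module name are hypothetical placeholders to adapt to your own campaign.

```bash
#!/bin/bash -l
# Sketch only: pack many short single-core tasks into ONE job so that a full
# node is exploited instead of flooding the scheduler with tiny jobs.
#SBATCH --job-name=parallel-campaign
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=128          # full aion node; use 28 on iris
#SBATCH --time=0-02:00:00
#SBATCH --partition=batch
#SBATCH --qos=normal

module purge
module load tools/parallel           # module name may differ; check 'module spider parallel'

# One task per line of inputs.txt, at most ${SLURM_CPUS_PER_TASK} running at a time.
parallel --jobs "${SLURM_CPUS_PER_TASK}" --joblog parallel.joblog \
    ./run_case.sh {} :::: inputs.txt
```

Re-running seff on such an aggregated job should then report a CPU efficiency far closer to 100% than the ~3.5% of the bad example above.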
You may still induce an abnormal load on the reserved nodes if you spawn more processes than allowed by the Slurm reservation. To avoid that, always try to prefix your executions with srun within your launchers. See also Specific Resource Allocations .","title":"seff"},{"location":"jobs/submit/#susage","text":"Use susage to check your past jobs walltime accuracy ( Timelimit vs. Elapsed ) $ susage -h Usage: susage [-m] [-Y] [-S YYYY-MM-DD] [-E YYYT-MM-DD] For a specific user (if accounting rights granted): susage [...] -u For a specific account (if accounting rights granted): susage [...] -A Display past job usage summary In all cases, if you are confident that your jobs will last more than 2 days while efficiently using the allocated resources , you can use --qos long QOS. Be aware that special restrictions applies for this kind of jobs.","title":"susage"},{"location":"policies/aup/","text":"Acceptable Use Policy (AUP) 2.1 \u00b6 The University of Luxembourg operates since 2007 a large academic HPC facility which remains the reference implementation within the country, offering a cutting-edge research infrastructure to Luxembourg public research while serving as edge access to the upcoming Euro-HPC Luxembourg supercomputer. The University extends access to its HPC resources (including facilities, services and HPC experts) to its students, staff, research partners (including scientific staff of national public organizations and external partners for the duration of joint research projects) and to industrial partners. UL HPC AUP \u00b6 There are a number of policies which apply to ULHPC users. UL HPC Acceptable Use Policy (AUP) [pdf] Important All users of UL HPC resources and PIs must abide by the UL HPC Acceptable Use Policy (AUP) . You should read and keep a signed copy of this document before using the facility. Access and/or usage of any ULHPC system assumes the tacit acknowledgement to this policy. The purpose of this document is to define the rules and terms governing acceptable use of resources (core hours, license hours, data storage capacity as well as network connectivity and technical support), including access, utilization and security of the resources and data. Crediting ULHPC in your research \u00b6 One of the requirements stemming from the AUP , is to credit and acknowle the usage of the University of Luxembourg HPC facility for ALL publications and contributions having results and/or contents obtained or derived from that usage. Publication tagging \u00b6 You are also requested to tag the publication(s) you have produced thanks to the usage of the UL HPC platform upon their registration on Orbilu : Login on MyOrbiLu Select your publication entry and click on the \"Edit\" button Select the \"2. Enrich\" category at the top of the page In the \"Research center\" field, enter \"ulhpc\" and select the proposition This tag is a very important indicator for us to quantify the concrete impact of the HPC facility on the research performed at the University. List of publications generated thanks to the UL HPC Platform","title":"Acceptable Use Policy (AUP)"},{"location":"policies/aup/#acceptable-use-policy-aup-21","text":"The University of Luxembourg operates since 2007 a large academic HPC facility which remains the reference implementation within the country, offering a cutting-edge research infrastructure to Luxembourg public research while serving as edge access to the upcoming Euro-HPC Luxembourg supercomputer. 
The University extends access to its HPC resources (including facilities, services and HPC experts) to its students, staff, research partners (including scientific staff of national public organizations and external partners for the duration of joint research projects) and to industrial partners.","title":"Acceptable Use Policy (AUP) 2.1"},{"location":"policies/aup/#ul-hpc-aup","text":"There are a number of policies which apply to ULHPC users. UL HPC Acceptable Use Policy (AUP) [pdf] Important All users of UL HPC resources and PIs must abide by the UL HPC Acceptable Use Policy (AUP) . You should read and keep a signed copy of this document before using the facility. Access and/or usage of any ULHPC system assumes the tacit acknowledgement to this policy. The purpose of this document is to define the rules and terms governing acceptable use of resources (core hours, license hours, data storage capacity as well as network connectivity and technical support), including access, utilization and security of the resources and data.","title":"UL HPC AUP"},{"location":"policies/aup/#crediting-ulhpc-in-your-research","text":"One of the requirements stemming from the AUP , is to credit and acknowle the usage of the University of Luxembourg HPC facility for ALL publications and contributions having results and/or contents obtained or derived from that usage.","title":"Crediting ULHPC in your research"},{"location":"policies/aup/#publication-tagging","text":"You are also requested to tag the publication(s) you have produced thanks to the usage of the UL HPC platform upon their registration on Orbilu : Login on MyOrbiLu Select your publication entry and click on the \"Edit\" button Select the \"2. Enrich\" category at the top of the page In the \"Research center\" field, enter \"ulhpc\" and select the proposition This tag is a very important indicator for us to quantify the concrete impact of the HPC facility on the research performed at the University. List of publications generated thanks to the UL HPC Platform","title":"Publication tagging"},{"location":"policies/maintenance/","text":"Maintenance and Downtime Policy \u00b6 Scheduled Maintenance \u00b6 The ULHPC team will schedule maintenance in one of three manners: Rolling reboots Whenever possible, ULHPC will apply updates and do other maintenance in a rolling fashion in such a manner as to have either no or as little impact as possible to ULHPC services Partial outages We will do these as needed but in a manner that impacts only some ULHPC services at a time Full outages These are outages that will affect all ULHPC services, such as outages of core datacenter networking services, datacenter power of HVAC/cooling system maintenance or global GPFS/Spectrumscale filesystem updates . Such maintenance windows typically happen on a quarterly basis . It should be noted that we are not always able to anticipate when these outages are needed . ULHPC's goal for these downtimes is to have them completed as fast as possible. However, validation and qualification of the full platform takes typically one working day, and unforeseen or unusual circumstances may occur. So count for such outages a multiple-day downtime . Notifications \u00b6 We normally inform users of cluster maintenance at least 3 weeks in advance by mail using the HPC User community mailing list (moderated): hpc-users@uni.lu . A second reminder is sent a few days prior to actual downtime. The news of the downtimes is also posted on the Live status page. 
Finally, a colored \" message of the day \" (motd) banner is displayed on all access/login servers such that you can quickly be informed of any incoming maintenance operation upon connection to the cluster. You can see this when you login or (again),any time by issuing the command: cat /etc/motd Detecting maintenance... During the maintenance During the maintenance period, access to the involved cluster access/login serveur is DENIED and any users still logged-in are disconnected at the beginning of the maintenance you will receive a written message in your terminal if for some reason during the maintenance you urgently need to collect data from your account, please contact the UL HPC Team by sending a mail to: hpc-team@uni.lu . We will notify you of the end of the maintenance with a summary of the performed operations. Exceptional \"EMERGENCY\" maintenance \u00b6 Unscheduled downtimes can occur for any number of reasons, including: Loss of cooling and/or power in the data center. Loss of supporting infrastructure (i.e. hardware). Critical need to make changes to hardware or software that negatively impacts performance or access. Application of critical patches that can't wait until the next scheduled maintenance. For safety or security issues that require immediate action. We will try to notify users in the advent of such event by email. Danger The ULHPC team reserves the right to intervene in user activity without notice when such activity may destabilize the platform and/or is at the expense of other users, and/or to monitor/verify/debug ongoing system activity.","title":"Downtime and Maintenance"},{"location":"policies/maintenance/#maintenance-and-downtime-policy","text":"","title":"Maintenance and Downtime Policy"},{"location":"policies/maintenance/#scheduled-maintenance","text":"The ULHPC team will schedule maintenance in one of three manners: Rolling reboots Whenever possible, ULHPC will apply updates and do other maintenance in a rolling fashion in such a manner as to have either no or as little impact as possible to ULHPC services Partial outages We will do these as needed but in a manner that impacts only some ULHPC services at a time Full outages These are outages that will affect all ULHPC services, such as outages of core datacenter networking services, datacenter power of HVAC/cooling system maintenance or global GPFS/Spectrumscale filesystem updates . Such maintenance windows typically happen on a quarterly basis . It should be noted that we are not always able to anticipate when these outages are needed . ULHPC's goal for these downtimes is to have them completed as fast as possible. However, validation and qualification of the full platform takes typically one working day, and unforeseen or unusual circumstances may occur. So count for such outages a multiple-day downtime .","title":"Scheduled Maintenance"},{"location":"policies/maintenance/#notifications","text":"We normally inform users of cluster maintenance at least 3 weeks in advance by mail using the HPC User community mailing list (moderated): hpc-users@uni.lu . A second reminder is sent a few days prior to actual downtime. The news of the downtimes is also posted on the Live status page. Finally, a colored \" message of the day \" (motd) banner is displayed on all access/login servers such that you can quickly be informed of any incoming maintenance operation upon connection to the cluster. You can see this when you login or (again),any time by issuing the command: cat /etc/motd Detecting maintenance... 
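In addition to the mailing list, the Live status page and the motd banner, a scheduled maintenance window is usually also materialised as a Slurm reservation, so you may be able to spot it from a login node; a small hedged sketch (the authoritative announcement remains the email and status page):

```bash
# Sketch: look for upcoming or active maintenance reservations from a login node.
scontrol show reservation      # detailed list of Slurm reservations, if any are defined
sinfo -T                       # compact reservation summary (name, start time, duration, nodes)
cat /etc/motd                  # the maintenance banner mentioned above
```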
During the maintenance During the maintenance period, access to the involved cluster access/login serveur is DENIED and any users still logged-in are disconnected at the beginning of the maintenance you will receive a written message in your terminal if for some reason during the maintenance you urgently need to collect data from your account, please contact the UL HPC Team by sending a mail to: hpc-team@uni.lu . We will notify you of the end of the maintenance with a summary of the performed operations.","title":"Notifications"},{"location":"policies/maintenance/#exceptional-emergency-maintenance","text":"Unscheduled downtimes can occur for any number of reasons, including: Loss of cooling and/or power in the data center. Loss of supporting infrastructure (i.e. hardware). Critical need to make changes to hardware or software that negatively impacts performance or access. Application of critical patches that can't wait until the next scheduled maintenance. For safety or security issues that require immediate action. We will try to notify users in the advent of such event by email. Danger The ULHPC team reserves the right to intervene in user activity without notice when such activity may destabilize the platform and/or is at the expense of other users, and/or to monitor/verify/debug ongoing system activity.","title":"Exceptional \"EMERGENCY\" maintenance"},{"location":"policies/passwords/","text":"Password and Account Protection \u00b6 A user is given a username (also known as a login name) and associated password that permits her/him to access ULHPC resources. This username/password pair may be used by a single individual only: passwords must not be shared with any other person . Users who share their passwords will have their access to ULHPC disabled. Do not confuse your UL[HPC] password/passphrase and your SSH passphrase We sometimes receive requests to reset your SSH passphrase, which is something you control upon SSH key generation - see SSH documentation . Passwords must be changed as soon as possible after exposure or suspected compromise. Exposure of passwords and suspected compromises must immediately be reported to ULHPC and the University CISO (see below). In all cases, recommendations for the creation of strong passwords is proposed below . Password Manager \u00b6 You are strongly encouraged also to rely on password manager applications to store your different passwords. You may want to use your browser embedded solution but it's not the safest option. Here is a list of recommended applications: BitWarden - free with no limits ($10 per year for families) - Github Dashlane - free for up to 50 passwords - 40\u20ac per year for premium (60\u20ac for families) LastPass NordPass - free version limited to one device with unlimited number of passwords; 36$ per year for premium plan 1Password - paid version only (yet worth it) with 30-day free trial, 36$ per year (60$ for families) Self-Hosted solutions : KeepassXC pass : the Standard Unix Password Manager . Forgotten Passwords \u00b6 If you forget your password or if it has recently expired, you can simply contact us to initiate the process of resetting your password. Login Failures \u00b6 Your login privileges will be disabled if you have several login failures while entering your password on a ULHPC resource. You do not need a new password in this situation. The login failures will be automatically cleared after a couple of minutes. No additional actions are necessary. 
How To Change Your Password on IPA \u00b6 See IPA documentation Tip Passwords must be changed under any one of the following circumstances: Immediately after someone else has obtained your password (do NOT give your password to anyone else). As soon as possible, but at least within one business day after a password has been compromised or after you suspect that a password has been compromised. On direction from ULHPC staff, or by IPA password policy requesting to frequently change your password. Your new password must adhere to ULHPC's password requirements. Password Requirements and Guidelines \u00b6 One of the potentially weakest links in computer security is the individual password. Despite the University's and ULHPC's efforts to keep hackers out of your personal files and away from University resources (e.g., email, web files, licensed software), easily-guessed passwords are still a big problem so you should really pay attention to the following guidelines and recommendations. Recently, the National Institute of Standards and Technology (NIST) has updated their Digital Identity Guidelines in Special Publication 800-63B . We have updated our password policy to bring it in closer alignment with this guidelines. In particular, the updated guidance is counter to the long-held philosophy that passwords must be long and complex. In contrast, the new guidelines recommend that passwords should be \" easy to remember \" but \" hard to guess \", allowing for usability and security to go hand-in-hand. Inpired with other password policies and guidelines ( Stanford , NERSC ), ULHPC thus recommends the usage of \" pass phrases \" instead of passwords. Pass phrases are longer, but easier to remember than complex passwords, and if well-chosen can provide better protection against hackers. In addition, the following rules based on password length and usage of Multi-Factor Authentication (MFA) must be satisfied: The enforced minimum length for accounts with MFA enabled is 8 characters. If MFA is not enabled for your account the minimum password length is 14 characters. The ability to use all special characters according to the following guidelines (see also the Stanford Password Requirements Quick Guide ) depending on the password length: 8-11: mixed case letters, numbers, & symbols 12-15: mixed case letters & numbers 16-19: mixed case letters 20+: no restrictions illustrating image Restrict sequential and repetitive characters (e.g. 12345 or aaaaaa ) Restrict context specific passwords (e.g. the name of the site, etc.) Restrict commonly used passwords (e.g. p@ssw0rd , etc.) and dictionary words Restrict passwords obtained from previous breach corpuses Passwords must be changed every six months. If you are struggling to come up with a good password, you can inspire from the following approach: Creating a pass phrase (source: Stanford password policy ) A pass phrase is basically just a series of words, which can include spaces, that you employ instead of a single pass \"word.\" Pass phrases should be at least 16 to 25 characters in length (spaces count as characters), but no less. Longer is better because, though pass phrases look simple, the increased length provides so many possible permutations that a standard password-cracking program will not be effective. It is always a good thing to disguise that simplicity by throwing in elements of weirdness, nonsense, or randomness. 
Here, for example, are a couple pass phrase candidates: pizza with crispy spaniels mangled persimmon therapy Punctuate and capitalize your phrase: Pizza with crispy Spaniels! mangled Persimmon Therapy? Toss in a few numbers or symbols from the top row of the keyboard, plus some deliberately misspelled words, and you'll create an almost unguessable key to your account: Pizza w/ 6 krispy Spaniels! mangl3d Persimmon Th3rapy?","title":"Password Policy"},{"location":"policies/passwords/#password-and-account-protection","text":"A user is given a username (also known as a login name) and associated password that permits her/him to access ULHPC resources. This username/password pair may be used by a single individual only: passwords must not be shared with any other person . Users who share their passwords will have their access to ULHPC disabled. Do not confuse your UL[HPC] password/passphrase and your SSH passphrase We sometimes receive requests to reset your SSH passphrase, which is something you control upon SSH key generation - see SSH documentation . Passwords must be changed as soon as possible after exposure or suspected compromise. Exposure of passwords and suspected compromises must immediately be reported to ULHPC and the University CISO (see below). In all cases, recommendations for the creation of strong passwords is proposed below .","title":"Password and Account Protection"},{"location":"policies/passwords/#password-manager","text":"You are strongly encouraged also to rely on password manager applications to store your different passwords. You may want to use your browser embedded solution but it's not the safest option. Here is a list of recommended applications: BitWarden - free with no limits ($10 per year for families) - Github Dashlane - free for up to 50 passwords - 40\u20ac per year for premium (60\u20ac for families) LastPass NordPass - free version limited to one device with unlimited number of passwords; 36$ per year for premium plan 1Password - paid version only (yet worth it) with 30-day free trial, 36$ per year (60$ for families) Self-Hosted solutions : KeepassXC pass : the Standard Unix Password Manager .","title":"Password Manager"},{"location":"policies/passwords/#forgotten-passwords","text":"If you forget your password or if it has recently expired, you can simply contact us to initiate the process of resetting your password.","title":"Forgotten Passwords"},{"location":"policies/passwords/#login-failures","text":"Your login privileges will be disabled if you have several login failures while entering your password on a ULHPC resource. You do not need a new password in this situation. The login failures will be automatically cleared after a couple of minutes. No additional actions are necessary.","title":"Login Failures"},{"location":"policies/passwords/#how-to-change-your-password-on-ipa","text":"See IPA documentation Tip Passwords must be changed under any one of the following circumstances: Immediately after someone else has obtained your password (do NOT give your password to anyone else). As soon as possible, but at least within one business day after a password has been compromised or after you suspect that a password has been compromised. On direction from ULHPC staff, or by IPA password policy requesting to frequently change your password. 
Your new password must adhere to ULHPC's password requirements.","title":"How To Change Your Password on IPA"},{"location":"policies/passwords/#password-requirements-and-guidelines","text":"One of the potentially weakest links in computer security is the individual password. Despite the University's and ULHPC's efforts to keep hackers out of your personal files and away from University resources (e.g., email, web files, licensed software), easily-guessed passwords are still a big problem so you should really pay attention to the following guidelines and recommendations. Recently, the National Institute of Standards and Technology (NIST) has updated their Digital Identity Guidelines in Special Publication 800-63B . We have updated our password policy to bring it in closer alignment with this guidelines. In particular, the updated guidance is counter to the long-held philosophy that passwords must be long and complex. In contrast, the new guidelines recommend that passwords should be \" easy to remember \" but \" hard to guess \", allowing for usability and security to go hand-in-hand. Inpired with other password policies and guidelines ( Stanford , NERSC ), ULHPC thus recommends the usage of \" pass phrases \" instead of passwords. Pass phrases are longer, but easier to remember than complex passwords, and if well-chosen can provide better protection against hackers. In addition, the following rules based on password length and usage of Multi-Factor Authentication (MFA) must be satisfied: The enforced minimum length for accounts with MFA enabled is 8 characters. If MFA is not enabled for your account the minimum password length is 14 characters. The ability to use all special characters according to the following guidelines (see also the Stanford Password Requirements Quick Guide ) depending on the password length: 8-11: mixed case letters, numbers, & symbols 12-15: mixed case letters & numbers 16-19: mixed case letters 20+: no restrictions illustrating image Restrict sequential and repetitive characters (e.g. 12345 or aaaaaa ) Restrict context specific passwords (e.g. the name of the site, etc.) Restrict commonly used passwords (e.g. p@ssw0rd , etc.) and dictionary words Restrict passwords obtained from previous breach corpuses Passwords must be changed every six months. If you are struggling to come up with a good password, you can inspire from the following approach: Creating a pass phrase (source: Stanford password policy ) A pass phrase is basically just a series of words, which can include spaces, that you employ instead of a single pass \"word.\" Pass phrases should be at least 16 to 25 characters in length (spaces count as characters), but no less. Longer is better because, though pass phrases look simple, the increased length provides so many possible permutations that a standard password-cracking program will not be effective. It is always a good thing to disguise that simplicity by throwing in elements of weirdness, nonsense, or randomness. Here, for example, are a couple pass phrase candidates: pizza with crispy spaniels mangled persimmon therapy Punctuate and capitalize your phrase: Pizza with crispy Spaniels! mangled Persimmon Therapy? Toss in a few numbers or symbols from the top row of the keyboard, plus some deliberately misspelled words, and you'll create an almost unguessable key to your account: Pizza w/ 6 krispy Spaniels! 
mangl3d Persimmon Th3rapy?","title":"Password Requirements and Guidelines"},{"location":"policies/usage-charging/","text":"ULHPC Usage Charging Policy \u00b6 The advertised prices are for internal partners only The price list and all other information of this page are meant for internal partners, i.e., not for external companies. If you are not an internal partner, please contact us at hpc-partnership@uni.lu . Alternatively, you can contact LuxProvide , the national HPC center which aims at serving the private sector for HPC needs. How to estimate HPC costs for projects? \u00b6 You can use the following excel document to estimate the cost of your HPC usage: UL HPC Cost Estimates for Project Proposals [xlsx] Note that there are two sheets offering two ways to estimate based on your specific situation. Please read the red sections to ensure that you are using the correct estimation sheet. Note that even if you plan for large-scale experiments on PRACE/EuroHPC supercomputers through computing credits granted by Call for Proposals for Project Access , you should plan for ULHPC costs since you will have to demonstrate the scalability of your code -- the University's facility is ideal for that. You can contact hpc-partnership@uni.lu for more details about this. HPC price list - 2022-10-01 \u00b6 Note that the ULHPC price list has been updated, see below. Compute \u00b6 Compute type Description \u20ac (excl. VAT) / node-hour CPU - small 28 cores, 128 GB RAM 0.25\u20ac CPU - regular 128 cores, 256 GB RAM 1.25\u20ac CPU - big mem 112 cores, 3 TB RAM 6.00\u20ac GPU 4 V100, 28 cores, 768 GB RAM 5.00\u20ac The prices above correspond to a full-node cost. However, jobs can use a fraction of a node and the price of the job will be computed based on that fraction. Please find below the core-hour / GPU-hour costs and how we compute how much to charge: Compute type Unit \u20ac (excl. VAT) CPU - small Core-hour 0.0089\u20ac CPU - regular Core-hour 0.0097\u20ac CPU - big mem Core-hour 0.0535\u20ac GPU GPU-hour 1.25\u20ac For CPU nodes, the fraction corresponds to the number of requested cores, e.g. 64 cores on a CPU - regular node corresponds to 50% of the available cores and thus will be charged 50% of 1.25\u20ac. Regarding the RAM of a job, if you do not override the default behaviour, you will receive a percentage of the RAM corresponding to the number of requested cores, e.g., 128G of RAM for the 64 cores example from above (50% of a CPU - regular node). If you override the default behaviour and request more RAM, we will re-compute the equivalent number of cores, e.g. if you request 256G of RAM and 64 cores, we will charge 128 cores (a worked example is given further below). For GPU nodes, the fraction considers the number of GPUs. There are 4 GPUs, 28 cores and 768G of RAM on one machine. This means that for each GPU, you can have up to 7 cores and 192G of RAM. If you request more than those defaults, we will re-compute the GPU equivalent, e.g. if you request 1 GPU and 8 cores, we will charge 2 GPUs. Storage \u00b6 Storage type \u20ac (excl. VAT) / GB / Month Additional information Home Free 500 GB Project 0.02\u20ac 1 TB free Scratch Free 10 TB Note that for project storage, we charge the quota and not the used storage . HPC Resource allocation for UL internal R&D and training \u00b6 ULHPC resources are free of charge for UL staff for their internal work and training activities . Principal Investigators (PI) will nevertheless receive on a regular basis a usage report of their team activities on the UL HPC platform.
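As a quick illustration of the fraction-based charging rules above (a sketch following our reading of the 2022-10-01 price list, purely indicative), consider a hypothetical 10-hour job on a CPU - regular node asking for 64 cores but 256G of RAM:

```bash
# Hypothetical worked example: 10 hours, 64 cores, 256G RAM on a CPU - regular
# node (128 cores, 256 GB RAM, 0.0097 EUR per core-hour, excl. VAT).
hours=10
requested_cores=64
requested_ram_gb=256
node_cores=128 ; node_ram_gb=256
rate_per_core_hour=0.0097

# The RAM request is converted into its core equivalent (256/256 of the node -> 128 cores).
ram_core_equivalent=$(( requested_ram_gb * node_cores / node_ram_gb ))
# The charge is based on the larger of the two requests, here the full node (128 cores).
charged_cores=$(( requested_cores > ram_core_equivalent ? requested_cores : ram_core_equivalent ))

echo "$charged_cores * $hours * $rate_per_core_hour" | bc -l   # about 12.42 EUR for the job
```

This matches the full-node rate of 1.25\u20ac per node-hour up to the rounding of the per-core rate.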
The corresponding accumulated price will be provided even if this amount is purely indicative and won't be charged back. Any other activities will be reviewed with the rectorate and are a priori subjected to be billed. Submit project related jobs \u00b6 To allow the ULHPC team to keep track of the jobs related to a project, use the -A flag in Slurm, either in the Slurm directives preamble of your script, e.g., #SBATCH -A myproject or on the command line when you submit your job, e.g. , sbatch -A myproject /path/to/launcher.sh","title":"Usage Charging Policy"},{"location":"policies/usage-charging/#ulhpc-usage-charging-policy","text":"The advertised prices are for internal partners only The price list and all other information of this page are meant for internal partners, i.e., not for external companies. If you are not an internal partner, please contact us at hpc-partnership@uni.lu . Alternatively, you can contact LuxProvide , the national HPC center which aims at serving the private sector for HPC needs.","title":"ULHPC Usage Charging Policy"},{"location":"policies/usage-charging/#how-to-estimate-hpc-costs-for-projects","text":"You can use the following excel document to estimate the cost of your HPC usage: UL HPC Cost Estimates for Project Proposals [xlsx] Note that there are two sheets offering two ways to estimate based on your specific situation. Please read the red sections to ensure that you are using the correct estimation sheet. Note that even if you plan for large-scale experiments on PRACE/EuroHPC supercomputers through computing credits granted by Call for Proposals for Project Access , you should plan for ULHPC costs since you will have to demonstrate the scalability of your code -- the University's facility is ideal for that. You can contact hpc-partnership@uni.lu for more details about this.","title":"How to estimate HPC costs for projects?"},{"location":"policies/usage-charging/#hpc-price-list-2022-10-01","text":"Note that ULHPC price list has been updated, see below.","title":"HPC price list - 2022-10-01"},{"location":"policies/usage-charging/#compute","text":"Compute type Description \u20ac (excl. VAT) / node-hour CPU - small 28 cores, 128 GB RAM 0.25\u20ac CPU - regular 128 cores, 256 GB RAM 1.25\u20ac CPU - big mem 112 cores, 3 TB RAM 6.00\u20ac GPU 4 V100, 28 cores, 768 GB RAM 5.00\u20ac The prices above correspond to a full-node cost. However, jobs can use a fraction of a node and the price of the job will be computed based on that fraction. Please find below the core-hour / GPU-hour costs and how we compute how much to charge: Compute type Unit \u20ac (excl. VAT) CPU - small Core-hour 0.0089\u20ac CPU - regular Core-hour 0.0097\u20ac CPU - big mem Core-hour 0.0535\u20ac GPU GPU-hour 1.25\u20ac For CPU nodes, the fraction correspond to the number of requested cores, e.g. 64 cores on a CPU - regular node corresponds to 50% of the available cores and thus will be charged 50% of 1.25\u20ac. Regarding the RAM of a job, if you do not override the default behaviour, you will receive a percentage of the RAM corresponding to the amount of requested cores, e.g, 128G of RAM for the 64 cores example from above (50% of a CPU - regular node). If you override the default behaviour and request more RAM, we will re-compute the equivalent number of cores, e.g. if you request 256G of RAM and 64 cores, we will charge 128 cores. For GPU nodes, the fraction considers the number of GPUs. There are 4 GPUs, 28 cores and 768G of RAM on one machine. 
This means that for each GPU, you can have up to 7 cores and 192G of RAM. If you request more than those default, we will re-compute the GPU equivalent, e.g. if you request 1 GPU and 8 cores, we will charge 2 GPUs.","title":"Compute"},{"location":"policies/usage-charging/#storage","text":"Storage type \u20ac (excl. VAT) / GB / Month Additional information Home Free 500 GB Project 0.02\u20ac 1 TB free Scratch Free 10 TB Note that for project storage, we charge the quota and not the used storage .","title":"Storage"},{"location":"policies/usage-charging/#hpc-resource-allocation-for-ul-internal-rd-and-training","text":"ULHPC resources are free of charge for UL staff for their internal work and training activities . Principal Investigators (PI) will nevertheless receive on a regular basis a usage report of their team activities on the UL HPC platform. The corresponding accumulated price will be provided even if this amount is purely indicative and won't be charged back. Any other activities will be reviewed with the rectorate and are a priori subjected to be billed.","title":"HPC Resource allocation for UL internal R&D and training"},{"location":"policies/usage-charging/#submit-project-related-jobs","text":"To allow the ULHPC team to keep track of the jobs related to a project, use the -A flag in Slurm, either in the Slurm directives preamble of your script, e.g., #SBATCH -A myproject or on the command line when you submit your job, e.g. , sbatch -A myproject /path/to/launcher.sh","title":"Submit project related jobs"},{"location":"services/","text":"Services \u00b6 The ULHPC Team is committed to excellence and support of the University research community through several side services: ULHPC Gitlab , a comprehensive version control and collaboration (VC&C) solution to deliver better software faster. Etherpad - a web-based collaborative real-time editor Privatebin - secured textual data sharing Gitlab @ Uni.lu ( DEPRECATED ) \u00b6 Gitlab is an open source software to collaborate on code, very similar to Github . You can manage git repositories with fine grained access controls that keep your code secure and perform code reviews and enhance collaboration with merge requests. Each project can also have an issue tracker and a wiki. The GitLab service is available for UL HPC platform users with their ULHPC account and to their external collaborators that have a GitHub account. Decommissioning of Gitlab service Situation : the Gitlab service has been in production since 2015 and kept up-to-date until now. Nevertheless, the ULHPC Gitlab service is now replaced by a new instance administrated by the SIU Service. For more information, search the Knowledge Base or open a ticket on ServiceNow . [Github] External accounts access are BLOCKED by default By default, external (github) accounts are denied and blocked on the Gitlab service. Access can be granted on-demand after careful review of the ULHPC team and attached to the project indicated by the UL[HPC] PI in charge of the external. Note : externals cannot create groups nor projects. EtherPad \u00b6 Etherpad is a web-based collaborative real-time editor, allowing authors to simultaneously edit a text document, and see all of the participants' edits in real-time, with the ability to display each author's text in their own color. PrivateBin \u00b6 PrivateBin is a minimalist, open source online pastebin where the server has zero knowledge of pasted data. 
Data is encrypted and decrypted in the browser using 256bit AES in Galois Counter mode.","title":"Services"},{"location":"services/#services","text":"The ULHPC Team is committed to excellence and support of the University research community through several side services: ULHPC Gitlab , a comprehensive version control and collaboration (VC&C) solution to deliver better software faster. Etherpad - a web-based collaborative real-time editor Privatebin - secured textual data sharing","title":"Services"},{"location":"services/#gitlab-unilu-deprecated","text":"Gitlab is an open source software to collaborate on code, very similar to Github . You can manage git repositories with fine grained access controls that keep your code secure and perform code reviews and enhance collaboration with merge requests. Each project can also have an issue tracker and a wiki. The GitLab service is available for UL HPC platform users with their ULHPC account and to their external collaborators that have a GitHub account. Decommissioning of Gitlab service Situation : the Gitlab service has been in production since 2015 and kept up-to-date until now. Nevertheless, the ULHPC Gitlab service is now replaced by a new instance administrated by the SIU Service. For more information, search the Knowledge Base or open a ticket on ServiceNow . [Github] External accounts access are BLOCKED by default By default, external (github) accounts are denied and blocked on the Gitlab service. Access can be granted on-demand after careful review of the ULHPC team and attached to the project indicated by the UL[HPC] PI in charge of the external. Note : externals cannot create groups nor projects.","title":"Gitlab @ Uni.lu (DEPRECATED)"},{"location":"services/#etherpad","text":"Etherpad is a web-based collaborative real-time editor, allowing authors to simultaneously edit a text document, and see all of the participants' edits in real-time, with the ability to display each author's text in their own color.","title":"EtherPad"},{"location":"services/#privatebin","text":"PrivateBin is a minimalist, open source online pastebin where the server has zero knowledge of pasted data. Data is encrypted and decrypted in the browser using 256bit AES in Galois Counter mode.","title":"PrivateBin"},{"location":"services/jupyter/","text":"Jupyter Notebook \u00b6 JupyterLab is a flexible, popular literate-computing web application for creating notebooks containing code, equations, visualization, and text. Notebooks are documents that contain both computer code and rich text elements (paragraphs, equations, figures, widgets, links). They are human-readable documents containing analysis descriptions and results but are also executable data analytics artifacts. Notebooks are associated with kernels, processes that actually execute code. Notebooks can be shared or converted into static HTML documents. They are a powerful tool for reproducible research and teaching. Install Jupyter \u00b6 While JupyterLab runs code in Jupyter notebooks for many programming languages, Python is a requirement (Python 3.3 or greater, or Python 2.7) for installing the JupyterLab. New users may wish to install JupyterLab in a Conda environment. Hereafter, the pip package manager will be used to install JupyterLab. We strongly recommend to use the Python module provided by the ULHPC and installing jupyter inside a Python virtual environment after upgrading pip . 
$ si $ module load lang/Python #Loading default Python $ python -m venv ~/environments/jupyter_env $ source ~/environments/jupyter_env/bin/activate $ python -m pip install --upgrade pip $ python -m pip install jupyterlab Warning Modules are not allowed on the access servers. To test Jupyter interactively, remember to ask for an interactive job first using for instance the si tool. Once JupyterLab is installed, you can start to configure your installation by setting the environment variables corresponding to your needs: JUPYTER_CONFIG_DIR : Set this environment variable to use a particular directory, other than the default, for Jupyter config files JUPYTER_PATH : Set this environment variable to provide extra directories for the data search path. JUPYTER_PATH should contain a series of directories, separated by os.pathsep (; on Windows, : on Unix). Directories given in JUPYTER_PATH are searched before other locations. This is used in addition to other entries, rather than replacing any JUPYTER_DATA_DIR : Set this environment variable to use a particular directory, other than the default, as the user data directory JUPYTER_RUNTIME_DIR : Set this to override where Jupyter stores runtime files IPYTHONDIR : If set, this environment variable should be the path to a directory, which IPython will use for user data. IPython will create it if it does not exist. JupyterLab is now installed and ready. Installing the classic Notebook JupyterLab ( jupyterlab ) is a new package which automates many tasks that were performed manually in the traditional Jupyter package ( jupyter ). If you prefer to install the classic notebook, you also need to install IPython manually as well, replacing python -m pip install jupyterlab with: python -m pip install jupyter ipykernel Providing access to kernels of other environments \u00b6 JupyterLab makes sure that a default IPython kernel is available, with the environment (and the Python version) with which the lab was created. Other environments can export a kernel to a JupyterLab instance, allowing the instance to launch interactive sessions inside environments other than the environment where JupyterLab is installed. You can set up kernels with different environments on the same notebook . Create the environment with the Python version and the packages you require, and then register the kernel in any environment with Jupyter (lab or classic notebook) installed. For instance, if we have installed Jupyter in ~/environments/jupyter_env : source ~/environments/other_python_venv/bin/activate python -m pip install ipykernel python -m ipykernel install --prefix = ${ HOME } /environments/jupyter_env --name other_python_env --display-name \"Other Python env\" deactivate Then all kernels and their associated environments can be started from the same Jupyter instance in the ~/environments/jupyter_env Python venv. You can also use the flag --user instead of --prefix to install the kernel in the default system location available to all Jupyter environments for a user. Kernels for Conda environments \u00b6 If you would like to install a kernel in a Conda environment, install the ipykernel from the conda-forge channel.
For instance, micromamba install --name conda_env conda-forge::ipykernel micromamba run --name conda_env python -m ipykernel install --prefix = ${ HOME } /environments/jupyter_env --name other_python_env --display-name \"Other Python env\" will make your conda environment, conda_env , available in the kernel launched from the ~/environments/jupyter_env Python venv. Starting a Jupyter Notebook \u00b6 Jupyter notebooks must be started as slurm jobs . The following script is a template for Jupyter submission scripts that will rarely need modifications. Most often you will need to modify the session duration ( --time SBATCH option). Slurm Launcher script for Jupyter Notebook #!/usr/bin/bash --login #SBATCH --job-name=Jupyter #SBATCH --nodes=1 #SBATCH --ntasks-per-node=1 #SBATCH --cpus-per-task=2 # Change accordingly, note that ~1.7GB RAM is proivisioned per core #SBATCH --partition=batch #SBATCH --qos=normal #SBATCH --output=%x_%j.out # Print messages to 'Jupyter_.out #SBATCH --error=%x_%j.err # Print debug messages to 'Jupyter_.err #SBATCH --time=0-01:00:00 # Change maximum allowable jupyter server uptime here print_error_and_exit () { echo \"***ERROR*** $* \" ; exit 1 ; } module purge || print_error_and_exit \"No 'module' command\" # Load the default Python 3 module module load lang/Python source \" ${ HOME } /environments/jupyter_env/bin/activate\" declare loopback_device = \"127.0.0.1\" declare port = \"8888\" declare connection_instructions = \"connection_instructions.log\" jupyter lab --ip = ${ loopback_device } --port = ${ port } --no-browser & declare lab_pid = $! # Add connection instruction echo \"# Connection instructions\" > \" ${ connection_instructions } \" echo \"\" >> \" ${ connection_instructions } \" echo \"To access the jupyter notebook execute on your personal machine:\" >> \" ${ connection_instructions } \" echo \"ssh -J ${ USER } @access- ${ ULHPC_CLUSTER } .uni.lu:8022 -L ${ port } : ${ loopback_device } : ${ port } ${ USER } @ $( hostname -i ) \" >> \" ${ connection_instructions } \" echo \"\" >> \" ${ connection_instructions } \" echo \"To access the jupyter notebook if you have setup a special key (e.g ulhpc_id_ed25519) to connect to cluster nodes execute on your personal machine:\" >> \" ${ connection_instructions } \" echo \"ssh -i ~/.ssh/hpc_id_ed25519 -J ${ USER } @access- ${ ULHPC_CLUSTER } .uni.lu:8022 -L ${ port } : ${ loopback_device } : ${ port } ${ USER } @ $( hostname -i ) \" >> \" ${ connection_instructions } \" echo \"\" >> \" ${ connection_instructions } \" echo \"Then navigate to:\" >> \" ${ connection_instructions } \" # Wait for the server to start sleep 2s # Wait and check that the landing page is available curl \\ --connect-timeout 10 \\ --retry 5 \\ --retry-delay 1 \\ --retry-connrefused \\ --silent --show-error --fail \\ \"http:// ${ loopback_device } : ${ port } \" > /dev/null # Note down the URL jupyter lab list 2 > & 1 \\ | grep -E '\\?token=' \\ | awk 'BEGIN {FS=\"::\"} {gsub(\"[ \\t]*\",\"\",$1); print $1}' \\ | sed -r 's/([0-9]{1,3}\\.){3}[0-9]{1,3}/127\\.0\\.0\\.1/g' \\ >> \" ${ connection_instructions } \" # Save some debug information echo -e '\\n===\\n' echo \"AVAILABLE LABS\" echo \"\" jupyter lab list echo -e '\\n===\\n' echo \"CONFIGURATION PATHS\" echo \"\" jupyter --paths echo -e '\\n===\\n' echo \"KERNEL SPECIFICATIONS\" echo \"\" jupyter kernelspec list # Wait for the user to terminate the lab wait ${ lab_pid } Once your job is running (see Joining/monitoring running jobs ), you can combine ssh forwarding , and an ssh jump through 
the login node, to connect to the notebook from your laptop. Open a terminal on your laptop and copy-paste the ssh command contained in the file connection_instructions.log , and then navigate to the webpage link provided. Example content of connection_instructions.log > cat connection_instructions.log # Connection instructions To access the jupyter notebook execute on your personal machine: ssh -J gkafanas@access-aion.uni.lu:8022 -L 8888 :127.0.0.1:8888 gkafanas@172.21.12.29 To access the jupyter notebook if you have setup a special key ( e.g ulhpc_id_ed25519 ) to connect to cluster nodes execute on your personal machine: ssh -i ~/.ssh/ulhpc_id_ed25519 -J gkafanas@access-aion.uni.lu:8022 -L 8888 :127.0.0.1:8888 gkafanas@172.21.12.29 Then navigate to: http://127.0.0.1:8888/?token = b7cf9d71d5c89627250e9a73d4f28cb649cd3d9ff662e7e2 As the instructions suggest, you access the jupyter lab server in the compute node by calling ssh -J gkafanas@access-aion.uni.lu:8022 -L 8888 :127.0.0.1:8888 gkafanas@172.21.12.29 an SSH command that opens a connection to your allocated cluster node jumping through the login node ( -J gkafanas@access-aion.uni.lu:8022 gkafanas@172.21.12.29 ), and exports the port to the jupyter server in the local machine ( -L 8888:127.0.0.1:8888 ). Then, open the connection to the browser in your local machine by following the given link: http://127.0.0.1:8888/?token=b7cf9d71d5c89627250e9a73d4f28cb649cd3d9ff662e7e2 The link provides the access token, so you should be able to login without a password. Warning Do not forget to click on the quit button when finished to stop the Jupyter server and release the resources. Note that in the last line of the submission script the job waits for your Jupyter service to finish. If you encounter any issues, have a look in the debug output in Jupyter_.err . Generic information about the setup of your system is printed in Jupyter_.out . Typical content of Jupyter_.err > cat Jupyter_3664038.err [ I 2024 -11-13 23 :19:52.538 ServerApp ] jupyter_lsp | extension was successfully linked. [ I 2024 -11-13 23 :19:52.543 ServerApp ] jupyter_server_terminals | extension was successfully linked. [ I 2024 -11-13 23 :19:52.547 ServerApp ] jupyterlab | extension was successfully linked. [ I 2024 -11-13 23 :19:52.766 ServerApp ] notebook_shim | extension was successfully linked. [ I 2024 -11-13 23 :19:52.808 ServerApp ] notebook_shim | extension was successfully loaded. [ I 2024 -11-13 23 :19:52.812 ServerApp ] jupyter_lsp | extension was successfully loaded. [ I 2024 -11-13 23 :19:52.813 ServerApp ] jupyter_server_terminals | extension was successfully loaded. [ I 2024 -11-13 23 :19:52.814 LabApp ] JupyterLab extension loaded from /home/users/gkafanas/environments/jupyter_env/lib/python3.11/site-packages/jupyterlab [ I 2024 -11-13 23 :19:52.814 LabApp ] JupyterLab application directory is /mnt/aiongpfs/users/gkafanas/environments/jupyter_env/share/jupyter/lab [ I 2024 -11-13 23 :19:52.815 LabApp ] Extension Manager is 'pypi' . [ I 2024 -11-13 23 :19:52.826 ServerApp ] jupyterlab | extension was successfully loaded. 
[ I 2024 -11-13 23 :19:52.827 ServerApp ] Serving notebooks from local directory: /mnt/aiongpfs/users/gkafanas/support/jupyter [ I 2024 -11-13 23 :19:52.827 ServerApp ] Jupyter Server 2 .14.2 is running at: [ I 2024 -11-13 23 :19:52.827 ServerApp ] http://127.0.0.1:8888/lab?token = fe665f90872927f5f84be627f54cf9056908c34b3765e17d [ I 2024 -11-13 23 :19:52.827 ServerApp ] http://127.0.0.1:8888/lab?token = fe665f90872927f5f84be627f54cf9056908c34b3765e17d [ I 2024 -11-13 23 :19:52.827 ServerApp ] Use Control-C to stop this server and shut down all kernels ( twice to skip confirmation ) . [ C 2024 -11-13 23 :19:52.830 ServerApp ] To access the server, open this file in a browser: file:///home/users/gkafanas/.local/share/jupyter/runtime/jpserver-2253096-open.html Or copy and paste one of these URLs: http://127.0.0.1:8888/lab?token = fe665f90872927f5f84be627f54cf9056908c34b3765e17d http://127.0.0.1:8888/lab?token = fe665f90872927f5f84be627f54cf9056908c34b3765e17d [ I 2024 -11-13 23 :19:52.845 ServerApp ] Skipped non-installed server ( s ) : bash-language-server, dockerfile-language-server-nodejs, javascript-typescript-langserver, jedi-language-server, julia-language-server, pyright, python-language-server, python-lsp-server, r-languageserver, sql-language-server, texlab, typescript-language-server, unified-language-server, vscode-css-languageserver-bin, vscode-html-languageserver-bin, vscode-json-languageserver-bin, yaml-language-server [ I 2024 -11-13 23 :19:53.824 ServerApp ] 302 GET / ( @127.0.0.1 ) 0 .47ms Typical content of Jupyter_.err > cat Jupyter_3664038.out === AVAILABLE LABS Currently running servers: http://127.0.0.1:8888/?token = fe665f90872927f5f84be627f54cf9056908c34b3765e17d :: /mnt/aiongpfs/users/gkafanas/support/jupyter === CONFIGURATION PATHS config: /home/users/gkafanas/environments/jupyter_env/etc/jupyter /mnt/aiongpfs/users/gkafanas/.jupyter /usr/local/etc/jupyter /etc/jupyter data: /home/users/gkafanas/environments/jupyter_env/share/jupyter /home/users/gkafanas/.local/share/jupyter /usr/local/share/jupyter /usr/share/jupyter runtime: /home/users/gkafanas/.local/share/jupyter/runtime === KERNEL SPECIFICATIONS Available kernels: other_python_env /home/users/gkafanas/environments/jupyter_env/share/jupyter/kernels/other_python_env python3 /home/users/gkafanas/environments/jupyter_env/share/jupyter/kernels/python3 Password protected access \u00b6 You can also set a password when launching the jupyter lab as detailed in the Jupyter official documentation . In that case, simply direct you browser to the URL http://127.0.0.1:8888/ and provide your password. You can see bellow an example of the login page. Typical content of a password protected login page","title":"Jupyter Notebook"},{"location":"services/jupyter/#jupyter-notebook","text":"JupyterLab is a flexible, popular literate-computing web application for creating notebooks containing code, equations, visualization, and text. Notebooks are documents that contain both computer code and rich text elements (paragraphs, equations, figures, widgets, links). They are human-readable documents containing analysis descriptions and results but are also executable data analytics artifacts. Notebooks are associated with kernels, processes that actually execute code. Notebooks can be shared or converted into static HTML documents. 
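As a complement to the password-protected access described above, here is a hedged sketch of setting such a password before launching the lab; the exact command depends on the Jupyter version you installed, so double-check the official documentation:

```bash
# Sketch: store a hashed login password for the Jupyter server (version-dependent).
source ~/environments/jupyter_env/bin/activate
jupyter server password        # prompts for a password and writes the hash under ~/.jupyter/
# older / classic-notebook setups may use 'jupyter notebook password' instead
```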
They are a powerful tool for reproducible research and teaching.","title":"Jupyter Notebook"},{"location":"services/jupyter/#install-jupyter","text":"While JupyterLab runs code in Jupyter notebooks for many programming languages, Python is a requirement (Python 3.3 or greater, or Python 2.7) for installing the JupyterLab. New users may wish to install JupyterLab in a Conda environment. Hereafter, the pip package manager will be used to install JupyterLab. We strongly recommend to use the Python module provided by the ULHPC and installing jupyter inside a Python virtual environment after upgrading pip . $ si $ module load lang/Python #Loading default Python $ python -m venv ~/environments/jupyter_env $ source ~/environments/jupyter_env/bin/activate $ python -m pip install --upgrade pip $ python -m pip install jupyterlab Warning Modules are not allowed on the access servers. To test interactively Singularity, remember to ask for an interactive job first using for instance the si tool. Once JupyterLab is installed along with , you can start to configure your installation setting the environment variables corresponding to your needs: JUPYTER_CONFIG_DIR : Set this environment variable to use a particular directory, other than the default, for Jupyter config files JUPYTER_PATH : Set this environment variable to provide extra directories for the data search path. JUPYTER_PATH should contain a series of directories, separated by os.pathsep(; on Windows, : on Unix). Directories given in JUPYTER_PATH are searched before other locations. This is used in addition to other entries, rather than replacing any JUPYTER_DATA_DIR : Set this environment variable to use a particular directory, other than the default, as the user data directory JUPYTER_RUNTIME_DIR : Set this to override where Jupyter stores runtime files IPYTHONDIR : If set, this environment variable should be the path to a directory, which IPython will use for user data. IPython will create it if it does not exist. JupyterLab is now installed and ready. Installing the classic Notebook JupyterLab ( jupyterlab ) is a new package which automates many task that where performed manually in the traditional Jupyter package ( jupyter ). If you prefer to install the classic notebook, you also need to install the IPython manually as well, replacing python -m pip install jupyterlab with: python -m pip install jupyter ipykernel","title":"Install Jupyter"},{"location":"services/jupyter/#providing-access-to-kernels-of-other-environments","text":"JupyterLab makes sure that a default IPython kernel is available, with the environment (and the Python version) with which the lab was created. Other environments can export a kernel to a JupyterLab instance, allowing the instance to launch interactive session inside environments others from the environment where JupyterLab is installed. You can setup kernels with different environments on the same notebook . Create the environment with the Python version and the packages you require, and then register the kernel in any environment with Jupyter (lab or classic notebook) installed. 
For instance, if we have installed Jupyter in ~/environments/jupyter_env : source ~/environments/other_python_venv/bin/activate python -m pip install ipykernel python -m ipykernel install --prefix = ${ HOME } /environments/jupyter_env --name other_python_env --display-name \"Other Python env\" deactivate Then all kernels and their associated environment can be started from the same Jupyter instance in the ~/environments/jupyter_env Python venv. You can also use the flag --user instead of --prefix to install the kernel in the default system location available to all Jupyter environments for a user.","title":"Providing access to kernels of other environments"},{"location":"services/jupyter/#kernels-for-conda-environments","text":"If you would like to install a kernel in a Conda environment, install the ipykernel from the conda-forge channel. For instance, micromamba install --name conda_env conda-forge::ipykernel micromamba run --name conda_env python -m ipykernel install --prefix = ${ HOME } /environments/jupyter_env --name other_python_env --display-name \"Other Python env\" will make your conda environment, conda_env , available in the kernel launched from the ~/environments/jupyter_env Python venv.","title":"Kernels for Conda environments"},{"location":"services/jupyter/#starting-a-jupyter-notebook","text":"Jupyter notebooks must be started as slurm jobs . The following script is a template for Jupyter submission scripts that will rarely need modifications. Most often you will need to modify the session duration ( --time SBATCH option). Slurm Launcher script for Jupyter Notebook #!/usr/bin/bash --login #SBATCH --job-name=Jupyter #SBATCH --nodes=1 #SBATCH --ntasks-per-node=1 #SBATCH --cpus-per-task=2 # Change accordingly, note that ~1.7GB RAM is proivisioned per core #SBATCH --partition=batch #SBATCH --qos=normal #SBATCH --output=%x_%j.out # Print messages to 'Jupyter_.out #SBATCH --error=%x_%j.err # Print debug messages to 'Jupyter_.err #SBATCH --time=0-01:00:00 # Change maximum allowable jupyter server uptime here print_error_and_exit () { echo \"***ERROR*** $* \" ; exit 1 ; } module purge || print_error_and_exit \"No 'module' command\" # Load the default Python 3 module module load lang/Python source \" ${ HOME } /environments/jupyter_env/bin/activate\" declare loopback_device = \"127.0.0.1\" declare port = \"8888\" declare connection_instructions = \"connection_instructions.log\" jupyter lab --ip = ${ loopback_device } --port = ${ port } --no-browser & declare lab_pid = $! 
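# Note: the Jupyter server is started in the background ('&') and its process ID is
# captured in 'lab_pid' via '$!'; the script waits on this PID at the very end
# ('wait ${lab_pid}') so that the Slurm job stays allocated for as long as the
# Jupyter server keeps running.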
# Add connection instruction echo \"# Connection instructions\" > \" ${ connection_instructions } \" echo \"\" >> \" ${ connection_instructions } \" echo \"To access the jupyter notebook execute on your personal machine:\" >> \" ${ connection_instructions } \" echo \"ssh -J ${ USER } @access- ${ ULHPC_CLUSTER } .uni.lu:8022 -L ${ port } : ${ loopback_device } : ${ port } ${ USER } @ $( hostname -i ) \" >> \" ${ connection_instructions } \" echo \"\" >> \" ${ connection_instructions } \" echo \"To access the jupyter notebook if you have setup a special key (e.g ulhpc_id_ed25519) to connect to cluster nodes execute on your personal machine:\" >> \" ${ connection_instructions } \" echo \"ssh -i ~/.ssh/hpc_id_ed25519 -J ${ USER } @access- ${ ULHPC_CLUSTER } .uni.lu:8022 -L ${ port } : ${ loopback_device } : ${ port } ${ USER } @ $( hostname -i ) \" >> \" ${ connection_instructions } \" echo \"\" >> \" ${ connection_instructions } \" echo \"Then navigate to:\" >> \" ${ connection_instructions } \" # Wait for the server to start sleep 2s # Wait and check that the landing page is available curl \\ --connect-timeout 10 \\ --retry 5 \\ --retry-delay 1 \\ --retry-connrefused \\ --silent --show-error --fail \\ \"http:// ${ loopback_device } : ${ port } \" > /dev/null # Note down the URL jupyter lab list 2 > & 1 \\ | grep -E '\\?token=' \\ | awk 'BEGIN {FS=\"::\"} {gsub(\"[ \\t]*\",\"\",$1); print $1}' \\ | sed -r 's/([0-9]{1,3}\\.){3}[0-9]{1,3}/127\\.0\\.0\\.1/g' \\ >> \" ${ connection_instructions } \" # Save some debug information echo -e '\\n===\\n' echo \"AVAILABLE LABS\" echo \"\" jupyter lab list echo -e '\\n===\\n' echo \"CONFIGURATION PATHS\" echo \"\" jupyter --paths echo -e '\\n===\\n' echo \"KERNEL SPECIFICATIONS\" echo \"\" jupyter kernelspec list # Wait for the user to terminate the lab wait ${ lab_pid } Once your job is running (see Joining/monitoring running jobs ), you can combine ssh forwarding , and an ssh jump through the login node, to connect to the notebook from your laptop. Open a terminal on your laptop and copy-paste the ssh command contained in the file connection_instructions.log , and then navigate to the webpage link provided. Example content of connection_instructions.log > cat connection_instructions.log # Connection instructions To access the jupyter notebook execute on your personal machine: ssh -J gkafanas@access-aion.uni.lu:8022 -L 8888 :127.0.0.1:8888 gkafanas@172.21.12.29 To access the jupyter notebook if you have setup a special key ( e.g ulhpc_id_ed25519 ) to connect to cluster nodes execute on your personal machine: ssh -i ~/.ssh/ulhpc_id_ed25519 -J gkafanas@access-aion.uni.lu:8022 -L 8888 :127.0.0.1:8888 gkafanas@172.21.12.29 Then navigate to: http://127.0.0.1:8888/?token = b7cf9d71d5c89627250e9a73d4f28cb649cd3d9ff662e7e2 As the instructions suggest, you access the jupyter lab server in the compute node by calling ssh -J gkafanas@access-aion.uni.lu:8022 -L 8888 :127.0.0.1:8888 gkafanas@172.21.12.29 an SSH command that opens a connection to your allocated cluster node jumping through the login node ( -J gkafanas@access-aion.uni.lu:8022 gkafanas@172.21.12.29 ), and exports the port to the jupyter server in the local machine ( -L 8888:127.0.0.1:8888 ). Then, open the connection to the browser in your local machine by following the given link: http://127.0.0.1:8888/?token=b7cf9d71d5c89627250e9a73d4f28cb649cd3d9ff662e7e2 The link provides the access token, so you should be able to login without a password. 
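If you want to verify that the tunnel is up before opening the browser, you can probe the forwarded port from a second terminal on your personal machine. This is only a sketch reusing the hypothetical port (8888) and addresses from the example above:
# On your personal machine, while the ssh command above is running in another terminal:
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8888/
# Any HTTP status code (typically 200, or 302 redirecting to the login page) means the
# Jupyter server on the compute node is reachable through the tunnel; an output of 000
# or a connection refused error means the port forwarding is not active.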
Warning Do not forget to click on the quit button when finished to stop the Jupyter server and release the resources. Note that in the last line of the submission script the job waits for your Jupyter service to finish. If you encounter any issues, have a look in the debug output in Jupyter_.err . Generic information about the setup of your system is printed in Jupyter_.out . Typical content of Jupyter_.err > cat Jupyter_3664038.err [ I 2024 -11-13 23 :19:52.538 ServerApp ] jupyter_lsp | extension was successfully linked. [ I 2024 -11-13 23 :19:52.543 ServerApp ] jupyter_server_terminals | extension was successfully linked. [ I 2024 -11-13 23 :19:52.547 ServerApp ] jupyterlab | extension was successfully linked. [ I 2024 -11-13 23 :19:52.766 ServerApp ] notebook_shim | extension was successfully linked. [ I 2024 -11-13 23 :19:52.808 ServerApp ] notebook_shim | extension was successfully loaded. [ I 2024 -11-13 23 :19:52.812 ServerApp ] jupyter_lsp | extension was successfully loaded. [ I 2024 -11-13 23 :19:52.813 ServerApp ] jupyter_server_terminals | extension was successfully loaded. [ I 2024 -11-13 23 :19:52.814 LabApp ] JupyterLab extension loaded from /home/users/gkafanas/environments/jupyter_env/lib/python3.11/site-packages/jupyterlab [ I 2024 -11-13 23 :19:52.814 LabApp ] JupyterLab application directory is /mnt/aiongpfs/users/gkafanas/environments/jupyter_env/share/jupyter/lab [ I 2024 -11-13 23 :19:52.815 LabApp ] Extension Manager is 'pypi' . [ I 2024 -11-13 23 :19:52.826 ServerApp ] jupyterlab | extension was successfully loaded. [ I 2024 -11-13 23 :19:52.827 ServerApp ] Serving notebooks from local directory: /mnt/aiongpfs/users/gkafanas/support/jupyter [ I 2024 -11-13 23 :19:52.827 ServerApp ] Jupyter Server 2 .14.2 is running at: [ I 2024 -11-13 23 :19:52.827 ServerApp ] http://127.0.0.1:8888/lab?token = fe665f90872927f5f84be627f54cf9056908c34b3765e17d [ I 2024 -11-13 23 :19:52.827 ServerApp ] http://127.0.0.1:8888/lab?token = fe665f90872927f5f84be627f54cf9056908c34b3765e17d [ I 2024 -11-13 23 :19:52.827 ServerApp ] Use Control-C to stop this server and shut down all kernels ( twice to skip confirmation ) . 
[ C 2024 -11-13 23 :19:52.830 ServerApp ] To access the server, open this file in a browser: file:///home/users/gkafanas/.local/share/jupyter/runtime/jpserver-2253096-open.html Or copy and paste one of these URLs: http://127.0.0.1:8888/lab?token = fe665f90872927f5f84be627f54cf9056908c34b3765e17d http://127.0.0.1:8888/lab?token = fe665f90872927f5f84be627f54cf9056908c34b3765e17d [ I 2024 -11-13 23 :19:52.845 ServerApp ] Skipped non-installed server ( s ) : bash-language-server, dockerfile-language-server-nodejs, javascript-typescript-langserver, jedi-language-server, julia-language-server, pyright, python-language-server, python-lsp-server, r-languageserver, sql-language-server, texlab, typescript-language-server, unified-language-server, vscode-css-languageserver-bin, vscode-html-languageserver-bin, vscode-json-languageserver-bin, yaml-language-server [ I 2024 -11-13 23 :19:53.824 ServerApp ] 302 GET / ( @127.0.0.1 ) 0 .47ms Typical content of Jupyter_.err > cat Jupyter_3664038.out === AVAILABLE LABS Currently running servers: http://127.0.0.1:8888/?token = fe665f90872927f5f84be627f54cf9056908c34b3765e17d :: /mnt/aiongpfs/users/gkafanas/support/jupyter === CONFIGURATION PATHS config: /home/users/gkafanas/environments/jupyter_env/etc/jupyter /mnt/aiongpfs/users/gkafanas/.jupyter /usr/local/etc/jupyter /etc/jupyter data: /home/users/gkafanas/environments/jupyter_env/share/jupyter /home/users/gkafanas/.local/share/jupyter /usr/local/share/jupyter /usr/share/jupyter runtime: /home/users/gkafanas/.local/share/jupyter/runtime === KERNEL SPECIFICATIONS Available kernels: other_python_env /home/users/gkafanas/environments/jupyter_env/share/jupyter/kernels/other_python_env python3 /home/users/gkafanas/environments/jupyter_env/share/jupyter/kernels/python3","title":"Starting a Jupyter Notebook"},{"location":"services/jupyter/#password-protected-access","text":"You can also set a password when launching the jupyter lab as detailed in the Jupyter official documentation . In that case, simply direct you browser to the URL http://127.0.0.1:8888/ and provide your password. You can see bellow an example of the login page. Typical content of a password protected login page","title":"Password protected access"},{"location":"slurm/","text":"Slurm Resource and Job Management System \u00b6 ULHPC uses Slurm ( Simple Linux Utility for Resource Management ) for cluster/resource management and job scheduling. This middleware is responsible for allocating resources to users, providing a framework for starting, executing and monitoring work on allocated resources and scheduling work for future execution. Official docs Official FAQ ULHPC Tutorial/Getting Started IEEE ISPDC22: ULHPC Slurm 2.0 If you want more details on the RJMS optimizations performed upon Aion acquisition, check out our IEEE ISPDC22 conference paper (21 st IEEE Int. Symp. on Parallel and Distributed Computing) presented in Basel (Switzerland) on July 13, 2022. IEEE Reference Format | ORBilu entry | ULHPC blog post | slides Sebastien Varrette, Emmanuel Kieffer, and Frederic Pinel, \"Optimizing the Resource and Job Management System of an Academic HPC and Research Computing Facility\". In 21 st IEEE Intl. Symp. on Parallel and Distributed Computing (ISPDC\u201922) , Basel, Switzerland, 2022. 
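As a first contact with the scheduler, the standard Slurm commands below can be run from a login node to inspect the partitions and your own jobs; this is only an illustrative sketch and the exact output depends on the current cluster state:
sinfo -s          # one-line summary of each partition (batch, gpu, bigmem, interactive) and its node states
squeue -u $USER   # your pending and running jobs, with their job ID, partition and state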
TL;DR Slurm on ULHPC clusters \u00b6 In its concise form, the Slurm configuration in place on ULHPC supercomputers features the following attributes you should be aware of when interacting with it: Predefined Queues/Partitions depending on node type batch (Default Dual-CPU nodes) Max : 64 nodes, 2 days walltime gpu (GPU nodes nodes) Max : 4 nodes, 2 days walltime bigmem (Large-Memory nodes) Max : 1 node, 2 days walltime In addition: interactive (for quicks tests) Max : 2 nodes, 2h walltime for code development, testing, and debugging Queue Policy: cross-partition QOS , mainly tied to priority level ( low \\rightarrow \\rightarrow urgent ) long QOS with extended Max walltime ( MaxWall ) set to 14 days special preemptible QOS for best-effort jobs: besteffort . Accounts hierarchy associated to supervisors (multiple associations possible), projects or trainings you MUST use the proper account as a detailed usage tracking is performed and reported. Slurm Federation configuration between iris and aion ensures global policy (coherent job ID, global scheduling, etc.) within ULHPC systems easily submit jobs from one cluster to another using -M, --cluster aion|iris For more details, see the appropriate pages in the left menu (or the above conference paper ). Jobs \u00b6 A job is an allocation of resources such as compute nodes assigned to a user for an certain amount of time. Jobs can be interactive or passive (e.g., a batch script) scheduled for later execution. What characterize a job? A user jobs have the following key characteristics: set of requested resources: number of computing resources: nodes (including all their CPUs and cores) or CPUs (including all their cores) or cores amount of memory : either per node or per CPU (wall)time needed for the users tasks to complete their work a requested node partition (job queue) a requested quality of service (QoS) level which grants users specific accesses a requested account for accounting purposes Once a job is assigned a set of nodes, the user is able to initiate parallel work in the form of job steps (sets of tasks) in any configuration within the allocation. When you login to a ULHPC system you land on a access/login node . Login nodes are only for editing and preparing jobs: They are not meant for actually running jobs. From the login node you can interact with Slurm to submit job scripts or start interactive jobs, which will be further run on the compute nodes. Submit Jobs \u00b6 There are three ways of submitting jobs with slurm, using either sbatch , srun or salloc : sbatch (passive job) ### /!\\ Adapt , , and accordingly sbatch -p [ --qos ] [ -A ] [ ... ] srun (interactive job) ### /!\\ Adapt , , and accordingly srun -p [ --qos ] [ -A ] [ ... ] ---pty bash srun is also to be using within your launcher script to initiate a job step . salloc (request allocation/interactive job) # Request interactive jobs/allocations ### /!\\ Adapt , , and accordingly salloc -p [ --qos ] [ -A ] [ ... ] sbatch \u00b6 sbatch is used to submit a batch launcher script for later execution, corresponding to batch/passive submission mode . The script will typically contain one or more srun commands to launch parallel tasks. Upon submission with sbatch , Slurm will: allocate resources (nodes, tasks, partition, constraints, etc.) runs a single copy of the batch script on the first allocated node in particular, if you depend on other scripts, ensure you have refer to them with the complete path toward them. 
When you submit the job, Slurm responds with the job's ID, which will be used to identify this job in reports from Slurm. # /!\\ ADAPT path to launcher accordingly $ sbatch .sh Submitted batch job 864933 srun \u00b6 srun is used to initiate parallel job steps within a job OR to start an interactive job Upon submission with srun , Slurm will: ( eventually ) allocate resources (nodes, tasks, partition, constraints, etc.) when run for interactive submission launch a job step that will execute on the allocated resources. A job can contain multiple job steps executing sequentially or in parallel on independent or shared resources within the job's node allocation. salloc \u00b6 salloc is used to allocate resources for a job in real time. Typically this is used to allocate resources (nodes, tasks, partition, etc.) and spawn a shell. The shell is then used to execute srun commands to launch parallel tasks. Specific Resource Allocation \u00b6 Within a job, you aim at running a certain number of tasks , and Slurm allow for a fine-grain control of the resource allocation that must be satisfied for each task. Beware of Slurm terminology in Multicore Architecture ! Slurm Node = Physical node , specified with -N <#nodes> Advice : always explicit number of expected number of tasks per node using --ntasks-per-node . This way you control the node footprint of your job. Slurm Socket = Physical Socket/CPU/Processor Advice : if possible, explicit also the number of expected number of tasks per socket (processor) using --ntasks-per-socket . relations between and must be aligned with the physical NUMA characteristics of the node. For instance on aion nodes, = 8* For instance on iris regular nodes, =2* when on iris bigmem nodes, =4* . ( the most confusing ): Slurm CPU = Physical CORE use -c <#threads> to specify the number of cores reserved per task. Hyper-Threading (HT) Technology is disabled on all ULHPC compute nodes. In particular: assume #cores = #threads , thus when using -c , you can safely set OMP_NUM_THREADS = ${ SLURM_CPUS_PER_TASK :- 1 } # Default to 1 if SLURM_CPUS_PER_TASK not set to automatically abstract from the job context you have interest to match the physical NUMA characteristics of the compute node you're running at (Ex: target 16 threads per socket on Aion nodes (as there are 8 virtual sockets per nodes, 14 threads per socket on Iris regular nodes). The total number of tasks defined in a given job is stored in the $SLURM_NTASKS environment variable. The --cpus-per-task option of srun in Slurm 23.11 and later In the latest versions of Slurm srun inherits the --cpus-per-task value requested by salloc or sbatch by reading the value of SLURM_CPUS_PER_TASK , as for any other option. This behavior may differ from some older versions where special handling was required to propagate the --cpus-per-task option to srun . In case you would like to launch multiple programs in a single allocation/batch script, divide the resources accordingly by requesting resources with srun when launching the process, for instance: srun --cpus-per-task --ntasks [ ... ] We encourage you to always explicitly specify upon resource allocation the number of tasks you want per node/socket ( --ntasks-per-node --ntasks-per-socket ), to easily scale on multiple nodes with -N . Adapt the number of threads and the settings to match the physical NUMA characteristics of the nodes Aion 16 cores per socket and 8 (virtual) sockets (CPUs) per aion node. 
{sbatch|srun|salloc|si} [-N ] --ntasks-per-node <8n> --ntasks-per-socket -c Total : \\times 8\\times \\times 8\\times tasks, each on threads Ensure \\times \\times = 16 Ex: -N 2 --ntasks-per-node 32 --ntasks-per-socket 4 -c 4 ( Total : 64 tasks) Iris (default Dual-CPU) 14 cores per socket and 2 sockets (physical CPUs) per regular iris . {sbatch|srun|salloc|si} [-N ] --ntasks-per-node <2n> --ntasks-per-socket -c Total : \\times 2\\times \\times 2\\times tasks, each on threads Ensure \\times \\times = 14 Ex: -N 2 --ntasks-per-node 4 --ntasks-per-socket 2 -c 7 ( Total : 8 tasks) Iris (Bigmem) 28 cores per socket and 4 sockets (physical CPUs) per bigmem iris {sbatch|srun|salloc|si} [-N ] --ntasks-per-node <4n> --ntasks-per-socket -c Total : \\times 4\\times \\times 4\\times tasks, each on threads Ensure \\times \\times = 28 Ex: -N 2 --ntasks-per-node 8 --ntasks-per-socket 2 -c 14 ( Total : 16 tasks) Job submission options \u00b6 There are several useful environment variables set be Slurm within an allocated job. The most important ones are detailed in the below table which summarizes the main job submission options offered with {sbatch | srun | salloc} [...] : Command-line option Description Example -N Nodes request -N 2 --ntasks-per-node= Tasks-per-node request --ntasks-per-node=28 --ntasks-per-socket= Tasks-per-socket request --ntasks-per-socket=14 -c Cores-per-task request (multithreading) -c 1 --mem=GB GB memory per node request --mem 0 -t [DD-]HH[:MM:SS]> Walltime request -t 4:00:00 -G GPU(s) request -G 4 -C Feature request ( broadwell,skylake... ) -C skylake -p Specify job partition/queue --qos Specify job qos -A Specify account -J Job name -J MyApp -d Job dependency -d singleton --mail-user= Specify email address --mail-type= Notify user by email when certain event types occur. --mail-type=END,FAIL At a minimum a job submission script must include number of nodes, time, type of partition and nodes (resource allocation constraint and features), and quality of service (QOS). If a script does not specify any of these options then a default may be applied. The full list of directives is documented in the man pages for the sbatch command (see. man sbatch ). #SBATCH directives vs. CLI options \u00b6 Each option can be specified either as an #SBATCH [...] directive in the job submission script: #!/bin/bash -l # <--- DO NOT FORGET '-l' ### Request a single task using one core on one node for 5 minutes in the batch queue #SBATCH -N 2 #SBATCH --ntasks-per-node=1 #SBATCH -c 1 #SBATCH --time=0-00:05:00 #SBATCH -p batch # [...] Or as a command line option when submitting the script: $ sbatch -p batch -N 2 --ntasks-per-node = 1 -c 1 --time = 0 -00:05:00 ./first-job.sh The command line and directive versions of an option are equivalent and interchangeable : if the same option is present both on the command line and as a directive, the command line will be honored. If the same option or directive is specified twice, the last value supplied will be used. Also, many options have both a long form, eg --nodes=2 and a short form, eg -N 2 . These are equivalent and interchangable. Common options to sbatch and srun Many options are common to both sbatch and srun , for example sbatch -N 4 ./first-job.sh allocates 4 nodes to first-job.sh , and srun -N 4 uname -n inside the job runs a copy of uname -n on each of 4 nodes. If you don't specify an option in the srun command line, srun will inherit the value of that option from sbatch . 
In these cases the default behavior of srun is to assume the same options as were passed to sbatch . This is achieved via environment variables: sbatch sets a number of environment variables with names like SLURM_NNODES and srun checks the values of those variables. This has two important consequences: Your job script can see the settings it was submitted with by checking these environment variables You should NOT override these environment variables. Also be aware that if your job script tries to do certain tricky things, such as using ssh to launch a command on another node, the environment might not be propagated and your job may not behave correctly HW characteristics and Slurm features of ULHPC nodes \u00b6 When selecting specific resources allocations, it is crucial to match the hardware characteristics of the computing nodes. Details are provided below: Node (type) #Nodes #Socket / #Cores RAM [GB] Features aion-[0001-0354] 354 8 / 128 256 batch,epyc iris-[001-108] 108 2 / 28 128 batch,broadwell iris-[109-168] 60 2 / 28 128 batch,skylake iris-[169-186] (GPU) 18 2 / 28 768 gpu,skylake,volta iris-[191-196] (GPU) 6 2 / 28 768 gpu,skylake,volta32 iris-[187-190] (Large-Memory) 4 4 / 112 3072 bigmem,skylake As can be seen, Slurm [features] are associated to ULHPC compute nodes and permits to easily filter with the -C option the list of nodes. To list available features, use sfeatures : sfeatures # sinfo -o '%20N %.6D %.6c %15F %12P %f' # NODELIST NODES CPUS NODES(A/I/O/T) PARTITION AVAIL_FEATURES # [...] Always try to align resource specifications for your jobs with physical characteristics The typical format of your Slurm submission should thus probably be: sbatch|srun|... [-N ] --ntasks-per-node -c [...] sbatch|srun|... [-N ] --ntasks-per-node <#sockets * s> --ntasks-per-socket -c [...] This would define a total of \\times \\times TASKS (first form) or \\times \\#sockets \\times \\times \\#sockets \\times TASKS (second form), each on threads . You MUST ensure that either: \\times \\times matches the number of cores avaiable on the target computing node (first form), or = \\#sockets \\times \\#sockets \\times , and \\times \\times matches the number of cores per socket available on the target computing node (second form). Aion (default Dual-CPU) 16 cores per socket and 8 virtual sockets (CPUs) per aion node. Depending on the selected form, you MUST ensure that either \\times \\times =128, or that =8 and \\times \\times =16. ### Example 1 - use all cores available { sbatch | srun | salloc } -N 2 --ntasks-per-node 32 --ntasks-per-socket 4 -c 4 [ ... ] # Total: 64 tasks (spread across 2 nodes), each on 4 cores/threads ### Example 2 - use all cores available { sbatch | srun | salloc } --ntasks-per-node 128 -c 1 [ ... ] # Total; 128 (single-core) tasks ### Example 3 - use all cores available { sbatch | srun | salloc } -N 1 --ntasks-per-node 8 --ntasks-per-socket 1 -c 16 [ ... ] # Total: 8 tasks, each on 16 cores/threads ### Example 4 - use all cores available { sbatch | srun | salloc } -N 1 --ntasks-per-node 2 -c 64 [ ... ] # Total: 2 tasks, each on 64 cores/threads Iris (default Dual-CPU) 14 cores per socket and 2 sockets (physical CPUs) per regular iris node. Depending on the selected form, you MUST ensure that either \\times \\times =28, or that =2 and \\times \\times =14. ### Example 1 - use all cores available { sbatch | srun | salloc } -N 3 --ntasks-per-node 14 --ntasks-per-socket 7 -c 2 [ ... 
] # Total: 42 tasks (spread across 3 nodes), each on 2 cores/threads ### Example 2 - use all cores available { sbatch | srun | salloc } -N 2 --ntasks-per-node 28 -c 1 [ ... ] # Total; 56 (single-core) tasks ### Example 3 - use all cores available { sbatch | srun | salloc } -N 2 --ntasks-per-node 2 --ntasks-per-socket 1 -c 14 [ ... ] # Total: 4 tasks (spread across 2 nodes), each on 14 cores/threads Iris (Large-Memory) 28 cores per socket and 4 sockets (physical CPUs) per bigmem iris node. Depending on the selected form, you MUST ensure that either \\times \\times =112, or that =4 and \\times \\times =28. ### Example 1 - use all cores available { sbatch | srun | salloc } -N 1 --ntasks-per-node 56 --ntasks-per-socket 14 -c 2 [ ... ] # Total: 56 tasks on a single bigmem node, each on 2 cores/threads ### Example 2 - use all cores available { sbatch | srun | salloc } --ntasks-per-node 112 -c 1 [ ... ] # Total; 112 (single-core) tasks ### Example 3 - use all cores available { sbatch | srun | salloc } -N 1 --ntasks-per-node 4 --ntasks-per-socket 1 -c 28 [ ... ] # Total: 4 tasks, each on 28 cores/threads Using Slurm Environment variables \u00b6 Recall that the Slurm controller will set several SLURM_* variables in the environment of the batch script. The most important are listed in the table below - use them wisely to make your launcher script as flexible as possible to abstract and adapt from the allocation context, \" independently \" of the way the job script has been submitted. Submission option Environment variable Typical usage -N SLURM_JOB_NUM_NODES or SLURM_NNODES --ntasks-per-node= SLURM_NTASKS_PER_NODE --ntasks-per-socket= SLURM_NTASKS_PER_SOCKET -c SLURM_CPUS_PER_TASK OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK} SLURM_NTASKS Total number of tasks srun -n $SLURM_NTASKS [...]","title":"Slurm Overview"},{"location":"slurm/#slurm-resource-and-job-management-system","text":"ULHPC uses Slurm ( Simple Linux Utility for Resource Management ) for cluster/resource management and job scheduling. This middleware is responsible for allocating resources to users, providing a framework for starting, executing and monitoring work on allocated resources and scheduling work for future execution. Official docs Official FAQ ULHPC Tutorial/Getting Started IEEE ISPDC22: ULHPC Slurm 2.0 If you want more details on the RJMS optimizations performed upon Aion acquisition, check out our IEEE ISPDC22 conference paper (21 st IEEE Int. Symp. on Parallel and Distributed Computing) presented in Basel (Switzerland) on July 13, 2022. IEEE Reference Format | ORBilu entry | ULHPC blog post | slides Sebastien Varrette, Emmanuel Kieffer, and Frederic Pinel, \"Optimizing the Resource and Job Management System of an Academic HPC and Research Computing Facility\". In 21 st IEEE Intl. Symp. 
on Parallel and Distributed Computing (ISPDC\u201922) , Basel, Switzerland, 2022.","title":"Slurm Resource and Job Management System"},{"location":"slurm/#tldr-slurm-on-ulhpc-clusters","text":"In its concise form, the Slurm configuration in place on ULHPC supercomputers features the following attributes you should be aware of when interacting with it: Predefined Queues/Partitions depending on node type batch (Default Dual-CPU nodes) Max : 64 nodes, 2 days walltime gpu (GPU nodes nodes) Max : 4 nodes, 2 days walltime bigmem (Large-Memory nodes) Max : 1 node, 2 days walltime In addition: interactive (for quicks tests) Max : 2 nodes, 2h walltime for code development, testing, and debugging Queue Policy: cross-partition QOS , mainly tied to priority level ( low \\rightarrow \\rightarrow urgent ) long QOS with extended Max walltime ( MaxWall ) set to 14 days special preemptible QOS for best-effort jobs: besteffort . Accounts hierarchy associated to supervisors (multiple associations possible), projects or trainings you MUST use the proper account as a detailed usage tracking is performed and reported. Slurm Federation configuration between iris and aion ensures global policy (coherent job ID, global scheduling, etc.) within ULHPC systems easily submit jobs from one cluster to another using -M, --cluster aion|iris For more details, see the appropriate pages in the left menu (or the above conference paper ).","title":"TL;DR Slurm on ULHPC clusters"},{"location":"slurm/#jobs","text":"A job is an allocation of resources such as compute nodes assigned to a user for an certain amount of time. Jobs can be interactive or passive (e.g., a batch script) scheduled for later execution. What characterize a job? A user jobs have the following key characteristics: set of requested resources: number of computing resources: nodes (including all their CPUs and cores) or CPUs (including all their cores) or cores amount of memory : either per node or per CPU (wall)time needed for the users tasks to complete their work a requested node partition (job queue) a requested quality of service (QoS) level which grants users specific accesses a requested account for accounting purposes Once a job is assigned a set of nodes, the user is able to initiate parallel work in the form of job steps (sets of tasks) in any configuration within the allocation. When you login to a ULHPC system you land on a access/login node . Login nodes are only for editing and preparing jobs: They are not meant for actually running jobs. From the login node you can interact with Slurm to submit job scripts or start interactive jobs, which will be further run on the compute nodes.","title":"Jobs"},{"location":"slurm/#submit-jobs","text":"There are three ways of submitting jobs with slurm, using either sbatch , srun or salloc : sbatch (passive job) ### /!\\ Adapt , , and accordingly sbatch -p [ --qos ] [ -A ] [ ... ] srun (interactive job) ### /!\\ Adapt , , and accordingly srun -p [ --qos ] [ -A ] [ ... ] ---pty bash srun is also to be using within your launcher script to initiate a job step . salloc (request allocation/interactive job) # Request interactive jobs/allocations ### /!\\ Adapt , , and accordingly salloc -p [ --qos ] [ -A ] [ ... ] ","title":"Submit Jobs"},{"location":"slurm/#sbatch","text":"sbatch is used to submit a batch launcher script for later execution, corresponding to batch/passive submission mode . The script will typically contain one or more srun commands to launch parallel tasks. 
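As an illustration, a minimal launcher of this kind could look as follows; this is only a sketch, with placeholder resources and an application path you must adapt to your own case:
#!/bin/bash -l
#SBATCH -J MyApp
#SBATCH -p batch
#SBATCH -N 1
#SBATCH --ntasks-per-node=4
#SBATCH -c 1
#SBATCH --time=0-00:10:00
srun /path/to/your/application    # each srun invocation starts one job step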
Upon submission with sbatch , Slurm will: allocate resources (nodes, tasks, partition, constraints, etc.) runs a single copy of the batch script on the first allocated node in particular, if you depend on other scripts, ensure you have refer to them with the complete path toward them. When you submit the job, Slurm responds with the job's ID, which will be used to identify this job in reports from Slurm. # /!\\ ADAPT path to launcher accordingly $ sbatch .sh Submitted batch job 864933","title":"sbatch"},{"location":"slurm/#srun","text":"srun is used to initiate parallel job steps within a job OR to start an interactive job Upon submission with srun , Slurm will: ( eventually ) allocate resources (nodes, tasks, partition, constraints, etc.) when run for interactive submission launch a job step that will execute on the allocated resources. A job can contain multiple job steps executing sequentially or in parallel on independent or shared resources within the job's node allocation.","title":"srun"},{"location":"slurm/#salloc","text":"salloc is used to allocate resources for a job in real time. Typically this is used to allocate resources (nodes, tasks, partition, etc.) and spawn a shell. The shell is then used to execute srun commands to launch parallel tasks.","title":"salloc"},{"location":"slurm/#specific-resource-allocation","text":"Within a job, you aim at running a certain number of tasks , and Slurm allow for a fine-grain control of the resource allocation that must be satisfied for each task. Beware of Slurm terminology in Multicore Architecture ! Slurm Node = Physical node , specified with -N <#nodes> Advice : always explicit number of expected number of tasks per node using --ntasks-per-node . This way you control the node footprint of your job. Slurm Socket = Physical Socket/CPU/Processor Advice : if possible, explicit also the number of expected number of tasks per socket (processor) using --ntasks-per-socket . relations between and must be aligned with the physical NUMA characteristics of the node. For instance on aion nodes, = 8* For instance on iris regular nodes, =2* when on iris bigmem nodes, =4* . ( the most confusing ): Slurm CPU = Physical CORE use -c <#threads> to specify the number of cores reserved per task. Hyper-Threading (HT) Technology is disabled on all ULHPC compute nodes. In particular: assume #cores = #threads , thus when using -c , you can safely set OMP_NUM_THREADS = ${ SLURM_CPUS_PER_TASK :- 1 } # Default to 1 if SLURM_CPUS_PER_TASK not set to automatically abstract from the job context you have interest to match the physical NUMA characteristics of the compute node you're running at (Ex: target 16 threads per socket on Aion nodes (as there are 8 virtual sockets per nodes, 14 threads per socket on Iris regular nodes). The total number of tasks defined in a given job is stored in the $SLURM_NTASKS environment variable. The --cpus-per-task option of srun in Slurm 23.11 and later In the latest versions of Slurm srun inherits the --cpus-per-task value requested by salloc or sbatch by reading the value of SLURM_CPUS_PER_TASK , as for any other option. This behavior may differ from some older versions where special handling was required to propagate the --cpus-per-task option to srun . In case you would like to launch multiple programs in a single allocation/batch script, divide the resources accordingly by requesting resources with srun when launching the process, for instance: srun --cpus-per-task --ntasks [ ... 
] We encourage you to always explicitly specify upon resource allocation the number of tasks you want per node/socket ( --ntasks-per-node --ntasks-per-socket ), to easily scale on multiple nodes with -N . Adapt the number of threads and the settings to match the physical NUMA characteristics of the nodes Aion 16 cores per socket and 8 (virtual) sockets (CPUs) per aion node. {sbatch|srun|salloc|si} [-N ] --ntasks-per-node <8n> --ntasks-per-socket -c Total : \\times 8\\times \\times 8\\times tasks, each on threads Ensure \\times \\times = 16 Ex: -N 2 --ntasks-per-node 32 --ntasks-per-socket 4 -c 4 ( Total : 64 tasks) Iris (default Dual-CPU) 14 cores per socket and 2 sockets (physical CPUs) per regular iris . {sbatch|srun|salloc|si} [-N ] --ntasks-per-node <2n> --ntasks-per-socket -c Total : \\times 2\\times \\times 2\\times tasks, each on threads Ensure \\times \\times = 14 Ex: -N 2 --ntasks-per-node 4 --ntasks-per-socket 2 -c 7 ( Total : 8 tasks) Iris (Bigmem) 28 cores per socket and 4 sockets (physical CPUs) per bigmem iris {sbatch|srun|salloc|si} [-N ] --ntasks-per-node <4n> --ntasks-per-socket -c Total : \\times 4\\times \\times 4\\times tasks, each on threads Ensure \\times \\times = 28 Ex: -N 2 --ntasks-per-node 8 --ntasks-per-socket 2 -c 14 ( Total : 16 tasks)","title":"Specific Resource Allocation"},{"location":"slurm/#job-submission-options","text":"There are several useful environment variables set be Slurm within an allocated job. The most important ones are detailed in the below table which summarizes the main job submission options offered with {sbatch | srun | salloc} [...] : Command-line option Description Example -N Nodes request -N 2 --ntasks-per-node= Tasks-per-node request --ntasks-per-node=28 --ntasks-per-socket= Tasks-per-socket request --ntasks-per-socket=14 -c Cores-per-task request (multithreading) -c 1 --mem=GB GB memory per node request --mem 0 -t [DD-]HH[:MM:SS]> Walltime request -t 4:00:00 -G GPU(s) request -G 4 -C Feature request ( broadwell,skylake... ) -C skylake -p Specify job partition/queue --qos Specify job qos -A Specify account -J Job name -J MyApp -d Job dependency -d singleton --mail-user= Specify email address --mail-type= Notify user by email when certain event types occur. --mail-type=END,FAIL At a minimum a job submission script must include number of nodes, time, type of partition and nodes (resource allocation constraint and features), and quality of service (QOS). If a script does not specify any of these options then a default may be applied. The full list of directives is documented in the man pages for the sbatch command (see. man sbatch ).","title":"Job submission options"},{"location":"slurm/#sbatch-directives-vs-cli-options","text":"Each option can be specified either as an #SBATCH [...] directive in the job submission script: #!/bin/bash -l # <--- DO NOT FORGET '-l' ### Request a single task using one core on one node for 5 minutes in the batch queue #SBATCH -N 2 #SBATCH --ntasks-per-node=1 #SBATCH -c 1 #SBATCH --time=0-00:05:00 #SBATCH -p batch # [...] Or as a command line option when submitting the script: $ sbatch -p batch -N 2 --ntasks-per-node = 1 -c 1 --time = 0 -00:05:00 ./first-job.sh The command line and directive versions of an option are equivalent and interchangeable : if the same option is present both on the command line and as a directive, the command line will be honored. If the same option or directive is specified twice, the last value supplied will be used. 
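For instance, assuming first-job.sh contains the #SBATCH --time=0-00:05:00 directive shown above, the following submission runs with a 1-hour walltime, since the command-line value takes precedence (illustrative sketch):
$ sbatch --time=0-01:00:00 ./first-job.sh    # the command-line walltime overrides the directive in the script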
Also, many options have both a long form, eg --nodes=2 and a short form, eg -N 2 . These are equivalent and interchangable. Common options to sbatch and srun Many options are common to both sbatch and srun , for example sbatch -N 4 ./first-job.sh allocates 4 nodes to first-job.sh , and srun -N 4 uname -n inside the job runs a copy of uname -n on each of 4 nodes. If you don't specify an option in the srun command line, srun will inherit the value of that option from sbatch . In these cases the default behavior of srun is to assume the same options as were passed to sbatch . This is achieved via environment variables: sbatch sets a number of environment variables with names like SLURM_NNODES and srun checks the values of those variables. This has two important consequences: Your job script can see the settings it was submitted with by checking these environment variables You should NOT override these environment variables. Also be aware that if your job script tries to do certain tricky things, such as using ssh to launch a command on another node, the environment might not be propagated and your job may not behave correctly","title":"#SBATCH directives vs. CLI options"},{"location":"slurm/#hw-characteristics-and-slurm-features-of-ulhpc-nodes","text":"When selecting specific resources allocations, it is crucial to match the hardware characteristics of the computing nodes. Details are provided below: Node (type) #Nodes #Socket / #Cores RAM [GB] Features aion-[0001-0354] 354 8 / 128 256 batch,epyc iris-[001-108] 108 2 / 28 128 batch,broadwell iris-[109-168] 60 2 / 28 128 batch,skylake iris-[169-186] (GPU) 18 2 / 28 768 gpu,skylake,volta iris-[191-196] (GPU) 6 2 / 28 768 gpu,skylake,volta32 iris-[187-190] (Large-Memory) 4 4 / 112 3072 bigmem,skylake As can be seen, Slurm [features] are associated to ULHPC compute nodes and permits to easily filter with the -C option the list of nodes. To list available features, use sfeatures : sfeatures # sinfo -o '%20N %.6D %.6c %15F %12P %f' # NODELIST NODES CPUS NODES(A/I/O/T) PARTITION AVAIL_FEATURES # [...] Always try to align resource specifications for your jobs with physical characteristics The typical format of your Slurm submission should thus probably be: sbatch|srun|... [-N ] --ntasks-per-node -c [...] sbatch|srun|... [-N ] --ntasks-per-node <#sockets * s> --ntasks-per-socket -c [...] This would define a total of \\times \\times TASKS (first form) or \\times \\#sockets \\times \\times \\#sockets \\times TASKS (second form), each on threads . You MUST ensure that either: \\times \\times matches the number of cores avaiable on the target computing node (first form), or = \\#sockets \\times \\#sockets \\times , and \\times \\times matches the number of cores per socket available on the target computing node (second form). Aion (default Dual-CPU) 16 cores per socket and 8 virtual sockets (CPUs) per aion node. Depending on the selected form, you MUST ensure that either \\times \\times =128, or that =8 and \\times \\times =16. ### Example 1 - use all cores available { sbatch | srun | salloc } -N 2 --ntasks-per-node 32 --ntasks-per-socket 4 -c 4 [ ... ] # Total: 64 tasks (spread across 2 nodes), each on 4 cores/threads ### Example 2 - use all cores available { sbatch | srun | salloc } --ntasks-per-node 128 -c 1 [ ... ] # Total; 128 (single-core) tasks ### Example 3 - use all cores available { sbatch | srun | salloc } -N 1 --ntasks-per-node 8 --ntasks-per-socket 1 -c 16 [ ... 
] # Total: 8 tasks, each on 16 cores/threads ### Example 4 - use all cores available { sbatch | srun | salloc } -N 1 --ntasks-per-node 2 -c 64 [ ... ] # Total: 2 tasks, each on 64 cores/threads Iris (default Dual-CPU) 14 cores per socket and 2 sockets (physical CPUs) per regular iris node. Depending on the selected form, you MUST ensure that either \\times \\times =28, or that =2 and \\times \\times =14. ### Example 1 - use all cores available { sbatch | srun | salloc } -N 3 --ntasks-per-node 14 --ntasks-per-socket 7 -c 2 [ ... ] # Total: 42 tasks (spread across 3 nodes), each on 2 cores/threads ### Example 2 - use all cores available { sbatch | srun | salloc } -N 2 --ntasks-per-node 28 -c 1 [ ... ] # Total; 56 (single-core) tasks ### Example 3 - use all cores available { sbatch | srun | salloc } -N 2 --ntasks-per-node 2 --ntasks-per-socket 1 -c 14 [ ... ] # Total: 4 tasks (spread across 2 nodes), each on 14 cores/threads Iris (Large-Memory) 28 cores per socket and 4 sockets (physical CPUs) per bigmem iris node. Depending on the selected form, you MUST ensure that either \\times \\times =112, or that =4 and \\times \\times =28. ### Example 1 - use all cores available { sbatch | srun | salloc } -N 1 --ntasks-per-node 56 --ntasks-per-socket 14 -c 2 [ ... ] # Total: 56 tasks on a single bigmem node, each on 2 cores/threads ### Example 2 - use all cores available { sbatch | srun | salloc } --ntasks-per-node 112 -c 1 [ ... ] # Total; 112 (single-core) tasks ### Example 3 - use all cores available { sbatch | srun | salloc } -N 1 --ntasks-per-node 4 --ntasks-per-socket 1 -c 28 [ ... ] # Total: 4 tasks, each on 28 cores/threads","title":"HW characteristics and Slurm features of ULHPC nodes"},{"location":"slurm/#using-slurm-environment-variables","text":"Recall that the Slurm controller will set several SLURM_* variables in the environment of the batch script. The most important are listed in the table below - use them wisely to make your launcher script as flexible as possible to abstract and adapt from the allocation context, \" independently \" of the way the job script has been submitted. Submission option Environment variable Typical usage -N SLURM_JOB_NUM_NODES or SLURM_NNODES --ntasks-per-node= SLURM_NTASKS_PER_NODE --ntasks-per-socket= SLURM_NTASKS_PER_SOCKET -c SLURM_CPUS_PER_TASK OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK} SLURM_NTASKS Total number of tasks srun -n $SLURM_NTASKS [...]","title":"Using Slurm Environment variables"},{"location":"slurm/accounts/","text":"Slurm Account Hierarchy \u00b6 The ULHPC resources can be reserved and allocated for the execution of jobs scheduled on the platform thanks to a Resource and Job Management Systems (RJMS) - Slurm in practice. This tool is configured to collect accounting information for every job and job step executed -- see SchedMD accounting documentation . ULHPC account (login) vs. Slurm [meta-]account Your ULHPC account defines the UNIX user you can use to connect to the facility and make you known to our systems. They are managed by IPA and define your login . Slurm accounts , refered to as meta-account in the sequel, are more loosely defined in Slurm , and should be seen as something similar to a UNIX group: it may contain other (set of) slurm account(s), multiple users, or just a single user. A user may belong to multiple slurm accounts, but MUST have a DefaultAccount , which is set to your line manager or principal investigator meta-account. 
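To check which DefaultAccount your login is tied to, you can run the sacctmgr query that is also listed further below, substituting $USER for your login:
sacctmgr show user where name=$USER format=DefaultAccount -P -n    # prints only the default (L3) account of your login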
ULHPC Account Tree Hierarchy \u00b6 Every user job runs under a group account, granting access to specific QOS levels. Such an account is unique within the account hierarchy. Accounting records are organized as a hierarchical tree according to 3 layers (slurm accounts) as depicted in the below figure ( click to enlarge ). At the leaf hierarchy stands the End user from the IPA IdM database, bringing a total of 4 levels. Level Account Type Description Example L1 meta-account Top-level structure / organizations UL, CRP, Externals, Projects, Trainings L2 meta-account Organizational Unit (Faculty, ICs, External partner, Funding program...) FSTM, LCSB, LIST... L3 meta-account Principal investigators (PIs), project, courses/lectures . , , L4 login End-users (staff, student): your ULHPC/IPA login yourlogin Extracting your association tree By default, you will be able to see only the account hierarchy you belongs too through the association(s) set with your login. You can extract it with: $ sacctmgr show association where parent = root format = \"account,user%20,Share,QOS%50\" withsubaccounts Account User Share QOS ---------------------- -------- ----------- -------------------------------------------------- besteffort,debug,long,low,normal besteffort,debug,long,low,normal . besteffort,debug,long,low,normal . besteffort,debug,long,low,normal ( Admins ) Extract the full hierarchy The below commands assumes you have supervision rights on the root account. To list available L1 accounts (Top-level structure / organizations), use sacctmgr show association where parent = root format = \"cluster,account,Share,QOS%50\" To list L2 accounts: Under Uni.lu (UL) sacctmgr show association where parent = UL format = \"cluster,account,Share,QOS%50\" Under CRP sacctmgr show association where parent = CRP format = \"cluster,account,Share,QOS%50\" Under Externals sacctmgr show association where parent = externals format = \"cluster,account,Share,QOS%50\" Under Projects sacctmgr show association where parent = projects format = \"cluster,account,Share,QOS%50\" Under Trainings sacctmgr show association where parent = trainings format = \"cluster,account,Share,QOS%50\" To quickly list L3 accounts and its subaccounts: sassoc , or sacctmgr show association where accounts= format=\"account%20,user%20,Share,QOS%50\" To quickly list End User (L4) associations, use sassoc , or sacctmgr show association where users= format=\"account%20,user%20,Share,QOS%50\" Default account vs. multiple associations A given user can be associated to multiple accounts , but have a single DefaultAccount (a meta-account at L3 level reflecting your line manager (Format: . ). To get information about your account information in the hierarchy, use the custom acct helper function , typically as acct $USER . Get ULHPC account information with acct # /!\\ ADAPT accordingly $ acct # sacctmgr show user where name=\"\" format=user,account%20,DefaultAccount%20,share,qos%50 withassoc User Account Def Acct Share QOS ------- ----------------------- ---------------------- ------- --------------------------------------- project_ . 1 besteffort,debug,long,low,normal project_ . 1 besteffort,debug,high,long,low,normal . . 1 besteffort,debug,long,low,normal # ==> Default account: . In the above example, the user is associated to 3 meta-accounts at the L3 level of the hierarchy (his PI . and two projects account), each granting access to potentially different QOS . The account used upon job submission can be set with the -A option. 
With the above example: $ sbatch | srun | ... [ ... ] # Use default account: . $ sbatch | srun | ... -A project_ [ ... ] # Use account project_ $ sbatch | srun | ... -A project_ --qos high [ ... ] # Use account project_, granting access to high QOS $ sbatch | srun | ... -A anotheraccount [ ... ] # Error: non-existing association between and anotheraccount To list all associations for a given user or meta-account, use the sassoc helper function : # /!\\ ADAPT accordingly $ sassoc You may use more classically the sacctmgr show [...] command: User information: sacctmgr show user where name= [withassoc] (use the withassoc attribute to list all associations). Default account: sacctmgr show user where name=\"\" format=DefaultAccount -P -n Get the parent account: sacctmgr show account where name=ulhpc format=Org -n -P To get the current association tree : add withsubaccounts to see ALL sub accounts # L1,L2 or L3 account /!\\ ADAPT accordingly sacctmgr show association tree where accounts = format = account,share # End user (L4) sacctmgr show association where users = $USER format = account,User,share,Partition,QOS No association, no job! It is mandatory to have your login registered within at least one association toward a meta-account (PI, project name) to be able to schedule jobs on the Impact on FairSharing and Job Accounting \u00b6 Every node in the above-mentioned tree hierarchy is associated with a weight defining its Raw Share in the FairSharing mechanism in place. Different rules are applied to define these weights/shares depending on the level in the hierarchy: L1 (Organizational Unit): arbitrary shares to dedicate at least 85% of the platform to serve UL needs and projects L2 : function of the out-degree of the tree nodes, reflecting also the past year funding L3 : a function reflecting the budget contribution of the PI/project (normalized on a per-month basis) for the year in exercise. L4 (ULHPC/IPA login): efficiency score, giving incentives for a more efficient usage of the platform. More details are given on this page . Default vs. Project accounts \u00b6 Default account associations are defined as follows: For UL staff or external partners: your direct Line Manager firstname.lastname within the institution (Faculty, IC, Company) you belong too. For students: the lecture/course they are registered too Guest student/training accounts are associated to the Students meta-account. In addition, your user account (ULHPC login) may be associated to other meta-accounts such as projects or specific training events. To establish job accounting against these extra specific accounts, use: {sbatch|srun} -A project_ [...] For more details, see Project accounts . restrictions applies and do not permit to reveal all information for other accounts than yours. \u21a9","title":"Account Hierarchy"},{"location":"slurm/accounts/#slurm-account-hierarchy","text":"The ULHPC resources can be reserved and allocated for the execution of jobs scheduled on the platform thanks to a Resource and Job Management Systems (RJMS) - Slurm in practice. This tool is configured to collect accounting information for every job and job step executed -- see SchedMD accounting documentation . ULHPC account (login) vs. Slurm [meta-]account Your ULHPC account defines the UNIX user you can use to connect to the facility and make you known to our systems. They are managed by IPA and define your login . 
Slurm accounts , refered to as meta-account in the sequel, are more loosely defined in Slurm , and should be seen as something similar to a UNIX group: it may contain other (set of) slurm account(s), multiple users, or just a single user. A user may belong to multiple slurm accounts, but MUST have a DefaultAccount , which is set to your line manager or principal investigator meta-account.","title":"Slurm Account Hierarchy"},{"location":"slurm/accounts/#ulhpc-account-tree-hierarchy","text":"Every user job runs under a group account, granting access to specific QOS levels. Such an account is unique within the account hierarchy. Accounting records are organized as a hierarchical tree according to 3 layers (slurm accounts) as depicted in the below figure ( click to enlarge ). At the leaf hierarchy stands the End user from the IPA IdM database, bringing a total of 4 levels. Level Account Type Description Example L1 meta-account Top-level structure / organizations UL, CRP, Externals, Projects, Trainings L2 meta-account Organizational Unit (Faculty, ICs, External partner, Funding program...) FSTM, LCSB, LIST... L3 meta-account Principal investigators (PIs), project, courses/lectures . , , L4 login End-users (staff, student): your ULHPC/IPA login yourlogin Extracting your association tree By default, you will be able to see only the account hierarchy you belongs too through the association(s) set with your login. You can extract it with: $ sacctmgr show association where parent = root format = \"account,user%20,Share,QOS%50\" withsubaccounts Account User Share QOS ---------------------- -------- ----------- -------------------------------------------------- besteffort,debug,long,low,normal besteffort,debug,long,low,normal . besteffort,debug,long,low,normal . besteffort,debug,long,low,normal ( Admins ) Extract the full hierarchy The below commands assumes you have supervision rights on the root account. To list available L1 accounts (Top-level structure / organizations), use sacctmgr show association where parent = root format = \"cluster,account,Share,QOS%50\" To list L2 accounts: Under Uni.lu (UL) sacctmgr show association where parent = UL format = \"cluster,account,Share,QOS%50\" Under CRP sacctmgr show association where parent = CRP format = \"cluster,account,Share,QOS%50\" Under Externals sacctmgr show association where parent = externals format = \"cluster,account,Share,QOS%50\" Under Projects sacctmgr show association where parent = projects format = \"cluster,account,Share,QOS%50\" Under Trainings sacctmgr show association where parent = trainings format = \"cluster,account,Share,QOS%50\" To quickly list L3 accounts and its subaccounts: sassoc , or sacctmgr show association where accounts= format=\"account%20,user%20,Share,QOS%50\" To quickly list End User (L4) associations, use sassoc , or sacctmgr show association where users= format=\"account%20,user%20,Share,QOS%50\" Default account vs. multiple associations A given user can be associated to multiple accounts , but have a single DefaultAccount (a meta-account at L3 level reflecting your line manager (Format: . ). To get information about your account information in the hierarchy, use the custom acct helper function , typically as acct $USER . 
Get ULHPC account information with acct # /!\\ ADAPT accordingly $ acct # sacctmgr show user where name=\"\" format=user,account%20,DefaultAccount%20,share,qos%50 withassoc User Account Def Acct Share QOS ------- ----------------------- ---------------------- ------- --------------------------------------- project_ . 1 besteffort,debug,long,low,normal project_ . 1 besteffort,debug,high,long,low,normal . . 1 besteffort,debug,long,low,normal # ==> Default account: . In the above example, the user is associated to 3 meta-accounts at the L3 level of the hierarchy (his PI . and two projects account), each granting access to potentially different QOS . The account used upon job submission can be set with the -A option. With the above example: $ sbatch | srun | ... [ ... ] # Use default account: . $ sbatch | srun | ... -A project_ [ ... ] # Use account project_ $ sbatch | srun | ... -A project_ --qos high [ ... ] # Use account project_, granting access to high QOS $ sbatch | srun | ... -A anotheraccount [ ... ] # Error: non-existing association between and anotheraccount To list all associations for a given user or meta-account, use the sassoc helper function : # /!\\ ADAPT accordingly $ sassoc You may use more classically the sacctmgr show [...] command: User information: sacctmgr show user where name= [withassoc] (use the withassoc attribute to list all associations). Default account: sacctmgr show user where name=\"\" format=DefaultAccount -P -n Get the parent account: sacctmgr show account where name=ulhpc format=Org -n -P To get the current association tree : add withsubaccounts to see ALL sub accounts # L1,L2 or L3 account /!\\ ADAPT accordingly sacctmgr show association tree where accounts = format = account,share # End user (L4) sacctmgr show association where users = $USER format = account,User,share,Partition,QOS No association, no job! It is mandatory to have your login registered within at least one association toward a meta-account (PI, project name) to be able to schedule jobs on the","title":"ULHPC Account Tree Hierarchy"},{"location":"slurm/accounts/#impact-on-fairsharing-and-job-accounting","text":"Every node in the above-mentioned tree hierarchy is associated with a weight defining its Raw Share in the FairSharing mechanism in place. Different rules are applied to define these weights/shares depending on the level in the hierarchy: L1 (Organizational Unit): arbitrary shares to dedicate at least 85% of the platform to serve UL needs and projects L2 : function of the out-degree of the tree nodes, reflecting also the past year funding L3 : a function reflecting the budget contribution of the PI/project (normalized on a per-month basis) for the year in exercise. L4 (ULHPC/IPA login): efficiency score, giving incentives for a more efficient usage of the platform. More details are given on this page .","title":"Impact on FairSharing and Job Accounting"},{"location":"slurm/accounts/#default-vs-project-accounts","text":"Default account associations are defined as follows: For UL staff or external partners: your direct Line Manager firstname.lastname within the institution (Faculty, IC, Company) you belong too. For students: the lecture/course they are registered too Guest student/training accounts are associated to the Students meta-account. In addition, your user account (ULHPC login) may be associated to other meta-accounts such as projects or specific training events. To establish job accounting against these extra specific accounts, use: {sbatch|srun} -A project_ [...] 
For more details, see Project accounts . restrictions applies and do not permit to reveal all information for other accounts than yours. \u21a9","title":"Default vs. Project accounts"},{"location":"slurm/commands/","text":"Main Slurm Commands \u00b6 Submit Jobs \u00b6 There are three ways of submitting jobs with slurm, using either sbatch , srun or salloc : sbatch (passive job) ### /!\\ Adapt , , and accordingly sbatch -p [ --qos ] [ -A ] [ ... ] srun (interactive job) ### /!\\ Adapt , , and accordingly srun -p [ --qos ] [ -A ] [ ... ] ---pty bash srun is also to be using within your launcher script to initiate a job step . salloc (request allocation/interactive job) # Request interactive jobs/allocations ### /!\\ Adapt , , and accordingly salloc -p [ --qos ] [ -A ] [ ... ] sbatch \u00b6 sbatch is used to submit a batch launcher script for later execution, corresponding to batch/passive submission mode . The script will typically contain one or more srun commands to launch parallel tasks. Upon submission with sbatch , Slurm will: allocate resources (nodes, tasks, partition, constraints, etc.) runs a single copy of the batch script on the first allocated node in particular, if you depend on other scripts, ensure you have refer to them with the complete path toward them. When you submit the job, Slurm responds with the job's ID, which will be used to identify this job in reports from Slurm. # /!\\ ADAPT path to launcher accordingly $ sbatch .sh Submitted batch job 864933 srun \u00b6 srun is used to initiate parallel job steps within a job OR to start an interactive job Upon submission with srun , Slurm will: ( eventually ) allocate resources (nodes, tasks, partition, constraints, etc.) when run for interactive submission launch a job step that will execute on the allocated resources. A job can contain multiple job steps executing sequentially or in parallel on independent or shared resources within the job's node allocation. salloc \u00b6 salloc is used to allocate resources for a job in real time. Typically this is used to allocate resources (nodes, tasks, partition, etc.) and spawn a shell. The shell is then used to execute srun commands to launch parallel tasks. Interactive jobs: si* \u00b6 You should use the helper functions si , si-gpu , si-bigmem to submit an interactive job. For more details, see interactive jobs . Collect Job Information \u00b6 Command Description sacct [-X] -j [...] display accounting information on jobs. scontrol show [...] view and/or update system, nodes, job, step, partition or reservation status seff get efficiency metrics of past job smap graphically show information on jobs, nodes, partitions sprio show factors that comprise a jobs scheduling priority squeue [-u $(whoami)] display jobs[steps] and their state sstat show status of running jobs. squeue \u00b6 You can view information about jobs located in the Slurm scheduling queue (partition/qos), eventually filter on specific job state ( R :running / PD :pending / F :failed / PR :preempted) with squeue : $ squeue [ -u ] [ -p ] [ ---qos ] [ --reservation ] [ -t R | PD | F | PR ] To quickly access your jobs, you can simply use sq Live job statistics \u00b6 You can use the scurrent (for current interactive job) or (more generally) scontrol show job to collect detailed information for a running job. scontrol show job $ scontrol show job 2166371 JobId=2166371 JobName=bash UserId=() GroupId=clusterusers(666) MCS_label=N/A Priority=12741 Nice=0 Account=ulhpc QOS=debug JobState=RUNNING Reason=None [...] 
SubmitTime=2020-12-07T22:08:25 EligibleTime=2020-12-07T22:08:25 StartTime=2020-12-07T22:08:25 EndTime=2020-12-07T22:38:25 [...] WorkDir=/mnt/irisgpfs/users/ Past job statistics: slist , sreport \u00b6 Use the slist helper for a given job: # /!\\ ADAPT accordingly $ slist # sacct -j --format User,JobID,Jobname%30,partition,state,time,elapsed,\\ # MaxRss,MaxVMSize,nnodes,ncpus,nodelist,AveCPU,ConsumedEnergyRaw # seff You can also use sreport o generate reports of job usage and cluster utilization for Slurm jobs. For instance, to list your usage in CPU-hours since the beginning of the year: $ sreport -t hours cluster UserUtilizationByAccount Users = $USER Start = $( date +%Y ) -01-01 -------------------------------------------------------------------------------- Cluster/User/Account Utilization 2021-01-01T00:00:00 - 2021-02-13T23:59:59 (3801600 secs) Usage reported in CPU Hours ---------------------------------------------------------------------------- Cluster Login Proper Name Account Used Energy --------- --------- --------------- ---------------------- -------- -------- iris . [...] iris project_ [...] Job efficiency \u00b6 seff \u00b6 Use seff to double check a past job CPU/Memory efficiency. Below examples should be self-speaking: Good CPU Eff. $ seff 2171749 Job ID: 2171749 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 28 CPU Utilized: 41-01:38:14 CPU Efficiency: 99.64% of 41-05:09:44 core-walltime Job Wall-clock time: 1-11:19:38 Memory Utilized: 2.73 GB Memory Efficiency: 2.43% of 112.00 GB Good Memory Eff. $ seff 2117620 Job ID: 2117620 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 16 CPU Utilized: 14:24:49 CPU Efficiency: 23.72% of 2-12:46:24 core-walltime Job Wall-clock time: 03:47:54 Memory Utilized: 193.04 GB Memory Efficiency: 80.43% of 240.00 GB Good CPU and Memory Eff. $ seff 2138087 Job ID: 2138087 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 64 CPU Utilized: 87-16:58:22 CPU Efficiency: 86.58% of 101-07:16:16 core-walltime Job Wall-clock time: 1-13:59:19 Memory Utilized: 1.64 TB Memory Efficiency: 99.29% of 1.65 TB [Very] Bad efficiency This illustrates a very bad job in terms of CPU/memory efficiency (below 4%), which illustrate a case where basically the user wasted 4 hours of computation while mobilizing a full node and its 28 cores. $ seff 2199497 Job ID: 2199497 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 28 CPU Utilized: 00:08:33 CPU Efficiency: 3.55% of 04:00:48 core-walltime Job Wall-clock time: 00:08:36 Memory Utilized: 55.84 MB Memory Efficiency: 0.05% of 112.00 GB This is typical of a single-core task can could be drastically improved via GNU Parallel . Note however that demonstrating a CPU good efficiency with seff may not be enough! You may still induce an abnormal load on the reserved nodes if you spawn more processes than allowed by the Slurm reservation. To avoid that, always try to prefix your executions with srun within your launchers. See also Specific Resource Allocations . susage \u00b6 Use susage to check your past jobs walltime accuracy ( Timelimit vs. Elapsed ) $ susage -h Usage: susage [-m] [-Y] [-S YYYY-MM-DD] [-E YYYT-MM-DD] For a specific user (if accounting rights granted): susage [...] -u For a specific account (if accounting rights granted): susage [...] 
-A Display past job usage summary Official sacct command \u00b6 Alternatively, you can use sacct (use sacct --helpformat to get the list of) for COMPLETED or TIMEOUT jobs (see Job State Codes ). using sacct -X -S [...] --format [...],time,elapsed,[...] ADAPT -S and -E dates accordingly - Format: YYYY-MM-DD . hint : $(date +%F) will return today's date in that format, $(date +%Y) return the current year, so the below command will list your completed (or timeout jobs) since the beginning of the month: $ sacct -X -S $( date +%Y ) -01-01 -E $( date +%F ) --partition batch,gpu,bigmem --state CD,TO --format User,JobID,partition%12,qos,state,time,elapsed,nnodes,ncpus,allocGRES User JobID Partition QOS State Timelimit Elapsed NNodes NCPUS AllocGRES --------- ------------ ------------ ---------- ---------- ---------- ---------- -------- ---------- ------------ 2243517 batch normal TIMEOUT 2-00:00:00 2-00:00:05 4 112 2243518 batch normal TIMEOUT 2-00:00:00 2-00:00:05 4 112 2244056 gpu normal TIMEOUT 2-00:00:00 2-00:00:12 1 16 gpu:2 2246094 gpu high TIMEOUT 2-00:00:00 2-00:00:29 1 16 gpu:2 2246120 gpu high COMPLETED 2-00:00:00 1-02:18:00 1 16 gpu:2 2247278 bigmem normal COMPLETED 2-00:00:00 1-05:59:21 1 56 2250178 batch normal COMPLETED 2-00:00:00 10:04:32 1 1 2251232 gpu normal COMPLETED 1-00:00:00 12:05:46 1 6 gpu:1 Platform Status \u00b6 sinfo \u00b6 sinfo allow to view information about partition status ( -p ), problematic nodes ( -R ), reservations ( -T ), eventually in a summarized form ( -s ), sinfo [-p ] {-s | -R | -T |...} We are providing a certain number of helper functions based on sinfo : Command Description nodelist List available nodes allocnodes List currently allocated nodes idlenodes List currently idle nodes deadnodes List dead nodes per partition (hopefully none ;)) sissues List nodes with issues/problems, with reasons sfeatures List available node features Cluster, partition and QOS usage stats \u00b6 We have defined several custom ULHPC Slurm helpers defined in /etc/profile.d/slurm.sh to facilitate access to account/parition/qos/usage information. They are listed below. Command Description acct Get information on user/account holder in Slurm accounting DB irisstat , aionstat report cluster status (utilization, partition and QOS live stats) listpartitionjobs List jobs (and current load) of the slurm partition pload [-a] i/b/g/m Overview of the Slurm partition load qload [-a] Show current load of the slurm QOS sbill Display job charging / billing summary sjoin [-w ] join a running job sassoc Show Slurm association information for (user or account) slist [-X] List statistics of a past job sqos Show QOS information and limits susage [-m] [-Y] [...] Display past job usage summary Updating jobs \u00b6 Command Description scancel cancel a job or set of jobs. scontrol update jobid= [...] update pending job definition scontrol hold Hold job scontrol resume Resume held job The scontrol command allows certain charactistics of a job to be updated while it is still queued ( i.e. not running ), with the syntax scontrol update jobid= [...] Important Once the job is running, most changes requested with scontrol update jobid=[...] will NOT be applied. 
Change timelimit \u00b6 # /!\\ ADAPT and new time limit accordingly scontrol update jobid = timelimit = < [ DD- ] HH:MM::SS> Change QOS or Reservation \u00b6 # /!\\ ADAPT , , accordingly scontrol update jobid = qos = scontrol update jobid = reservationname = Change account \u00b6 If you forgot to specify the expected project account: # /!\\ ADAPT , accordingly scontrol update jobid = account = The new account must be eligible to run the job. See Account Hierarchy for more details. Hold and Resume jobs \u00b6 Prevent a pending job from being started: # /!\\ ADAPT accordingly scontrol hold Allow a held job to accrue priority and run: # /!\\ ADAPT accordingly scontrol release Cancel jobs \u00b6 Cancel a specific job: # /!\\ ADAPT accordingly scancel Cancel all jobs owned by a user (you) scancel -u $USER This only applies to jobs which are associated with your accounts.","title":"Convenient Slurm Commands"},{"location":"slurm/commands/#main-slurm-commands","text":"","title":"Main Slurm Commands"},{"location":"slurm/commands/#submit-jobs","text":"There are three ways of submitting jobs with slurm, using either sbatch , srun or salloc : sbatch (passive job) ### /!\\ Adapt , , and accordingly sbatch -p [ --qos ] [ -A ] [ ... ] srun (interactive job) ### /!\\ Adapt , , and accordingly srun -p [ --qos ] [ -A ] [ ... ] ---pty bash srun is also to be using within your launcher script to initiate a job step . salloc (request allocation/interactive job) # Request interactive jobs/allocations ### /!\\ Adapt , , and accordingly salloc -p [ --qos ] [ -A ] [ ... ] ","title":"Submit Jobs"},{"location":"slurm/commands/#sbatch","text":"sbatch is used to submit a batch launcher script for later execution, corresponding to batch/passive submission mode . The script will typically contain one or more srun commands to launch parallel tasks. Upon submission with sbatch , Slurm will: allocate resources (nodes, tasks, partition, constraints, etc.) runs a single copy of the batch script on the first allocated node in particular, if you depend on other scripts, ensure you have refer to them with the complete path toward them. When you submit the job, Slurm responds with the job's ID, which will be used to identify this job in reports from Slurm. # /!\\ ADAPT path to launcher accordingly $ sbatch .sh Submitted batch job 864933","title":"sbatch"},{"location":"slurm/commands/#srun","text":"srun is used to initiate parallel job steps within a job OR to start an interactive job Upon submission with srun , Slurm will: ( eventually ) allocate resources (nodes, tasks, partition, constraints, etc.) when run for interactive submission launch a job step that will execute on the allocated resources. A job can contain multiple job steps executing sequentially or in parallel on independent or shared resources within the job's node allocation.","title":"srun"},{"location":"slurm/commands/#salloc","text":"salloc is used to allocate resources for a job in real time. Typically this is used to allocate resources (nodes, tasks, partition, etc.) and spawn a shell. The shell is then used to execute srun commands to launch parallel tasks.","title":"salloc"},{"location":"slurm/commands/#interactive-jobs-si","text":"You should use the helper functions si , si-gpu , si-bigmem to submit an interactive job. For more details, see interactive jobs .","title":"Interactive jobs: si*"},{"location":"slurm/commands/#collect-job-information","text":"Command Description sacct [-X] -j [...] display accounting information on jobs. scontrol show [...] 
view and/or update system, nodes, job, step, partition or reservation status seff get efficiency metrics of past job smap graphically show information on jobs, nodes, partitions sprio show factors that comprise a jobs scheduling priority squeue [-u $(whoami)] display jobs[steps] and their state sstat show status of running jobs.","title":"Collect Job Information"},{"location":"slurm/commands/#squeue","text":"You can view information about jobs located in the Slurm scheduling queue (partition/qos), eventually filter on specific job state ( R :running / PD :pending / F :failed / PR :preempted) with squeue : $ squeue [ -u ] [ -p ] [ ---qos ] [ --reservation ] [ -t R | PD | F | PR ] To quickly access your jobs, you can simply use sq","title":"squeue"},{"location":"slurm/commands/#live-job-statistics","text":"You can use the scurrent (for current interactive job) or (more generally) scontrol show job to collect detailed information for a running job. scontrol show job $ scontrol show job 2166371 JobId=2166371 JobName=bash UserId=() GroupId=clusterusers(666) MCS_label=N/A Priority=12741 Nice=0 Account=ulhpc QOS=debug JobState=RUNNING Reason=None [...] SubmitTime=2020-12-07T22:08:25 EligibleTime=2020-12-07T22:08:25 StartTime=2020-12-07T22:08:25 EndTime=2020-12-07T22:38:25 [...] WorkDir=/mnt/irisgpfs/users/","title":"Live job statistics"},{"location":"slurm/commands/#past-job-statistics-slist-sreport","text":"Use the slist helper for a given job: # /!\\ ADAPT accordingly $ slist # sacct -j --format User,JobID,Jobname%30,partition,state,time,elapsed,\\ # MaxRss,MaxVMSize,nnodes,ncpus,nodelist,AveCPU,ConsumedEnergyRaw # seff You can also use sreport o generate reports of job usage and cluster utilization for Slurm jobs. For instance, to list your usage in CPU-hours since the beginning of the year: $ sreport -t hours cluster UserUtilizationByAccount Users = $USER Start = $( date +%Y ) -01-01 -------------------------------------------------------------------------------- Cluster/User/Account Utilization 2021-01-01T00:00:00 - 2021-02-13T23:59:59 (3801600 secs) Usage reported in CPU Hours ---------------------------------------------------------------------------- Cluster Login Proper Name Account Used Energy --------- --------- --------------- ---------------------- -------- -------- iris . [...] iris project_ [...]","title":"Past job statistics: slist, sreport"},{"location":"slurm/commands/#job-efficiency","text":"","title":"Job efficiency"},{"location":"slurm/commands/#seff","text":"Use seff to double check a past job CPU/Memory efficiency. Below examples should be self-speaking: Good CPU Eff. $ seff 2171749 Job ID: 2171749 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 28 CPU Utilized: 41-01:38:14 CPU Efficiency: 99.64% of 41-05:09:44 core-walltime Job Wall-clock time: 1-11:19:38 Memory Utilized: 2.73 GB Memory Efficiency: 2.43% of 112.00 GB Good Memory Eff. $ seff 2117620 Job ID: 2117620 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 16 CPU Utilized: 14:24:49 CPU Efficiency: 23.72% of 2-12:46:24 core-walltime Job Wall-clock time: 03:47:54 Memory Utilized: 193.04 GB Memory Efficiency: 80.43% of 240.00 GB Good CPU and Memory Eff. 
$ seff 2138087 Job ID: 2138087 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 64 CPU Utilized: 87-16:58:22 CPU Efficiency: 86.58% of 101-07:16:16 core-walltime Job Wall-clock time: 1-13:59:19 Memory Utilized: 1.64 TB Memory Efficiency: 99.29% of 1.65 TB [Very] Bad efficiency This illustrates a very bad job in terms of CPU/memory efficiency (below 4%), which illustrate a case where basically the user wasted 4 hours of computation while mobilizing a full node and its 28 cores. $ seff 2199497 Job ID: 2199497 Cluster: iris User/Group: /clusterusers State: COMPLETED (exit code 0) Nodes: 1 Cores per node: 28 CPU Utilized: 00:08:33 CPU Efficiency: 3.55% of 04:00:48 core-walltime Job Wall-clock time: 00:08:36 Memory Utilized: 55.84 MB Memory Efficiency: 0.05% of 112.00 GB This is typical of a single-core task can could be drastically improved via GNU Parallel . Note however that demonstrating a CPU good efficiency with seff may not be enough! You may still induce an abnormal load on the reserved nodes if you spawn more processes than allowed by the Slurm reservation. To avoid that, always try to prefix your executions with srun within your launchers. See also Specific Resource Allocations .","title":"seff"},{"location":"slurm/commands/#susage","text":"Use susage to check your past jobs walltime accuracy ( Timelimit vs. Elapsed ) $ susage -h Usage: susage [-m] [-Y] [-S YYYY-MM-DD] [-E YYYT-MM-DD] For a specific user (if accounting rights granted): susage [...] -u For a specific account (if accounting rights granted): susage [...] -A Display past job usage summary","title":"susage"},{"location":"slurm/commands/#official-sacct-command","text":"Alternatively, you can use sacct (use sacct --helpformat to get the list of) for COMPLETED or TIMEOUT jobs (see Job State Codes ). using sacct -X -S [...] --format [...],time,elapsed,[...] ADAPT -S and -E dates accordingly - Format: YYYY-MM-DD . 
hint : $(date +%F) will return today's date in that format, $(date +%Y) return the current year, so the below command will list your completed (or timeout jobs) since the beginning of the month: $ sacct -X -S $( date +%Y ) -01-01 -E $( date +%F ) --partition batch,gpu,bigmem --state CD,TO --format User,JobID,partition%12,qos,state,time,elapsed,nnodes,ncpus,allocGRES User JobID Partition QOS State Timelimit Elapsed NNodes NCPUS AllocGRES --------- ------------ ------------ ---------- ---------- ---------- ---------- -------- ---------- ------------ 2243517 batch normal TIMEOUT 2-00:00:00 2-00:00:05 4 112 2243518 batch normal TIMEOUT 2-00:00:00 2-00:00:05 4 112 2244056 gpu normal TIMEOUT 2-00:00:00 2-00:00:12 1 16 gpu:2 2246094 gpu high TIMEOUT 2-00:00:00 2-00:00:29 1 16 gpu:2 2246120 gpu high COMPLETED 2-00:00:00 1-02:18:00 1 16 gpu:2 2247278 bigmem normal COMPLETED 2-00:00:00 1-05:59:21 1 56 2250178 batch normal COMPLETED 2-00:00:00 10:04:32 1 1 2251232 gpu normal COMPLETED 1-00:00:00 12:05:46 1 6 gpu:1","title":"Official sacct command"},{"location":"slurm/commands/#platform-status","text":"","title":"Platform Status"},{"location":"slurm/commands/#sinfo","text":"sinfo allow to view information about partition status ( -p ), problematic nodes ( -R ), reservations ( -T ), eventually in a summarized form ( -s ), sinfo [-p ] {-s | -R | -T |...} We are providing a certain number of helper functions based on sinfo : Command Description nodelist List available nodes allocnodes List currently allocated nodes idlenodes List currently idle nodes deadnodes List dead nodes per partition (hopefully none ;)) sissues List nodes with issues/problems, with reasons sfeatures List available node features","title":"sinfo"},{"location":"slurm/commands/#cluster-partition-and-qos-usage-stats","text":"We have defined several custom ULHPC Slurm helpers defined in /etc/profile.d/slurm.sh to facilitate access to account/parition/qos/usage information. They are listed below. Command Description acct Get information on user/account holder in Slurm accounting DB irisstat , aionstat report cluster status (utilization, partition and QOS live stats) listpartitionjobs List jobs (and current load) of the slurm partition pload [-a] i/b/g/m Overview of the Slurm partition load qload [-a] Show current load of the slurm QOS sbill Display job charging / billing summary sjoin [-w ] join a running job sassoc Show Slurm association information for (user or account) slist [-X] List statistics of a past job sqos Show QOS information and limits susage [-m] [-Y] [...] Display past job usage summary","title":"Cluster, partition and QOS usage stats"},{"location":"slurm/commands/#updating-jobs","text":"Command Description scancel cancel a job or set of jobs. scontrol update jobid= [...] update pending job definition scontrol hold Hold job scontrol resume Resume held job The scontrol command allows certain charactistics of a job to be updated while it is still queued ( i.e. not running ), with the syntax scontrol update jobid= [...] Important Once the job is running, most changes requested with scontrol update jobid=[...] 
will NOT be applied.","title":"Updating jobs"},{"location":"slurm/commands/#change-timelimit","text":"# /!\\ ADAPT and new time limit accordingly scontrol update jobid = timelimit = < [ DD- ] HH:MM::SS>","title":"Change timelimit"},{"location":"slurm/commands/#change-qos-or-reservation","text":"# /!\\ ADAPT , , accordingly scontrol update jobid = qos = scontrol update jobid = reservationname = ","title":"Change QOS or Reservation"},{"location":"slurm/commands/#change-account","text":"If you forgot to specify the expected project account: # /!\\ ADAPT , accordingly scontrol update jobid = account = The new account must be eligible to run the job. See Account Hierarchy for more details.","title":"Change account"},{"location":"slurm/commands/#hold-and-resume-jobs","text":"Prevent a pending job from being started: # /!\\ ADAPT accordingly scontrol hold Allow a held job to accrue priority and run: # /!\\ ADAPT accordingly scontrol release ","title":"Hold and Resume jobs"},{"location":"slurm/commands/#cancel-jobs","text":"Cancel a specific job: # /!\\ ADAPT accordingly scancel Cancel all jobs owned by a user (you) scancel -u $USER This only applies to jobs which are associated with your accounts.","title":"Cancel jobs"},{"location":"slurm/fairsharing/","text":"Fairsharing and Job Accounting \u00b6 Resources : Slurm Priority, Fairshare and Fair Tree (PDF) SchedMD Slurm documentation: Multifactor Priority Plugin Fair tree algorithm, FAS RC docs , Official sshare documentation Fairshare allows past resource utilization information to be taken into account into job feasibility and priority decisions to ensure a fair allocation of the computational resources between the all ULHPC users. A difference with a equal scheduling is illustrated in the side picture ( source ). Essentially fairshare is a way of ensuring that users get their appropriate portion of a system. Sadly this term is also used confusingly for different parts of fairshare listed below, so for the sake of clarity, the following terms will be used: [Raw] Share : portion of the system users have been granted [Raw] Usage : amount of the system users have actually used so far The fairshare score is the value the system calculates based on the usage and the share (see below) Priority : the priority that users are assigned based off of their fairshare score. Demystifying Fairshare While fairshare may seem complex and confusing, it is actually quite logical once you think about it. The scheduler needs some way to adjudicate who gets what resources when different groups on the cluster have been granted different resources and shares for various reasons (see Account Hierarchy ). In order to serve the great variety of groups and needs on the cluster, a method of fairly adjudicating job priority is required. This is the goal of Fairshare . Fairshare allows those users who have not fully used their resource grant to get higher priority for their jobs on the cluster, while making sure that those groups that have used more than their resource grant do not overuse the cluster. The ULHPC supercomputers are a limited shared resource, and Fairshare ensures everyone gets a fair opportunity to use it regardless of how big or small the group is . FairTree Algorithm \u00b6 There exists several fairsharing algorithms implemented in Slurm: Classic Fairshare Depth-Oblivious Fair-share Fair Tree (now implemented on ULHPC since Oct 2020) What is Fair Tree? 
The Fair Tree algorithm prioritizes users such that if accounts A and B are siblings and A has a higher fairshare factor than B, then all children of A will have higher fairshare factors than all children of B. This is done through a rooted plane tree (PDF) , also known as a rooted ordered tree, which is logically created then sorted by fairshare with the highest fairshare values on the left. The tree is then visited in a depth-first traversal way. Users are ranked in pre-order as they are found. The ranking is used to create the final fairshare factor for the user. Fair Tree Traversal Illustrated - initial post Some of the benefits include: All users from a higher priority account receive a higher fair share factor than all users from a lower priority account. Users are sorted and ranked to prevent errors due to precision loss. Ties are allowed. Account coordinators cannot accidentally harm the priority of their users relative to users in other accounts. Users are extremely unlikely to have exactly the same fairshare factor as another user due to loss of precision in calculations. New jobs are immediately assigned a priority. Overview of Fair Tree for End Users Level Fairshare Calculation Shares \u00b6 On ULHPC facilities, each user is associated by default to a meta-account reflecting its direct Line Manager within the institution (Faculty, IC, Company) you belong too -- see ULHPC Account Hierarchy . You may have other account associations (typically toward projects accounts, granting access to different QOS for instance), and each accounts have Shares granted to them. These Shares determine how much of the cluster that group/account has been granted . Users when they run are charged back for their runs against the account used upon job submission -- you can use sbatch|srun|... -A [...] to change that account. ULHPC Usage Charging Policy Different rules are applied to define these weights/shares depending on the level in the hierarchy: L1 (Organizational Unit): arbitrary shares to dedicate at least 85% of the platform to serve UL needs and projects L2 : function of the out-degree of the tree nodes, reflecting also the past year funding L3 : a function reflecting the budget contribution of the PI/project (normalized on a per-month basis) for the year in exercise. L4 (ULHPC/IPA login): efficiency score, giving incentives for a more efficient usage of the platform. Fair Share Factor \u00b6 The Fairshare score is the value Slurm calculates based off of user's usage reflecting the difference between the portion of the computing resource that has been promised (share) and the amount of resources that has been consumed. It thus influences the order in which a user's queued jobs are scheduled to run based on the portion of the computing resources they have been allocated and the resources their jobs have already consumed. In practice, Slurm's fair-share factor is a floating point number between 0.0 and 1.0 that reflects the shares of a computing resource that a user has been allocated and the amount of computing resources the user's jobs have consumed. The higher the value, the higher is the placement in the queue of jobs waiting to be scheduled. Reciprocally, the more resources the users is consuming, the lower the fair share factor will be which will result in lower priorities. ulhpcshare helper \u00b6 Listing the ULHPC shares: ulhpcshare helper sshare can be used to view the fair share factors and corresponding promised and actual usage for all users. 
However , you are encouraged to use the ulhpcshare helper function: # your current shares and fair-share factors among your associations ulhpcshare # as above, but for user '' ulhpcshare -u # as above, but for account '' ulhpcshare -A The column that contains the actual factor is called \"FairShare\". Official sshare utility \u00b6 ulhpcshare is a wrapper around the official sshare utility. You can quickly see your score with $ sshare [ -A ] [ -l ] [ --format = Account,User,RawShares,NormShares,EffectvUsage,LevelFS,FairShare ] It will show the Level Fairshare value as Level FS . The field shows the value for each association, thus allowing users to see the results of the fairshare calculation at each level. Note : Unlike the Effective Usage, the Norm Usage is not used by Fair Tree but is still displayed in this case. Slurm Parameter Definitions \u00b6 In this part some of the set slurm parameters are explained which are used to set up the Fair Tree Fairshare Algorithm. For a more detailed explanation please consult the official documentation PriorityCalcPeriod=HH:MM::SS : frequency in minutes that job half-life decay and Fair Tree calculations are performed. PriorityDecayHalfLife=[number of days]-[number of hours] : the time, of which the resource consumption is taken into account for the Fairshare Algorithm, can be set by this. PriorityMaxAge=[number of days]-[number of hours] : the maximal queueing time which counts for the priority calculation. Note that queueing times above are possible but do not contribute to the priority factor. A quick way to check the currently running configuration is: scontrol show config | grep -i priority Trackable RESources (TRES) Billing Weights \u00b6 Slurm saves accounting data for every job or job step that the user submits. On ULHPC facilities, Slurm Trackable RESources (TRES) is enabled to allow for the scheduler to charge back users for how much they have used of different features (i.e. not only CPU) on the cluster -- see Job Accounting and Billing . This is important as the usage of the cluster factors into the Fairshare calculation. As explained in the ULHPC Usage Charging Policy , we set TRES for CPU, GPU, and Memory usage according to weights defined as follows: Weight Description \\alpha_{cpu} \\alpha_{cpu} Normalized relative performance of CPU processor core (ref.: skylake 73.6 GFlops/core) \\alpha_{mem} \\alpha_{mem} Inverse of the average available memory size per core \\alpha_{GPU} \\alpha_{GPU} Weight per GPU accelerator Each partition has its own weights (combined into TRESBillingWeight ) you can check with # /!\\ ADAPT accordingly scontrol show partition FAQ \u00b6 Q: My user fairshare is low, what can I do? \u00b6 We have introduced an efficiency score evaluated on a regular basis (by default, every year) to measure how efficient you use the computational resources of the University according to several measures for completed jobs: How efficient you were to estimate the walltime of your jobs (Average Walltime Accuracy) How CPU/Memory efficient were your completed jobs (see seff ) Without entering into the details, we combine these metrics to compute an unique score value S_\\text{efficiency} S_\\text{efficiency} and you obtain a grade: A (very good), B , C , or D (very bad) which can increase your user share. Q: My account fairshare is low, what can I do? \u00b6 There are several things that can be done when your fairshare is low: Do not run jobs : Fairshare recovers via two routes. 
The first is via your group not running any jobs and letting others use the resource. That allows your fractional usage to decrease which in turn increases your fairshare score. The second is via the half-life we apply to fairshare which ages out old usage over time. Both of these method require not action but inaction on the part of your group. Thus to recover your fairshare simply stop running jobs until your fairshare reaches the level you desire. Be warned this could take several weeks to accomplish depending on your current usage. Be patient , as a corollary to the previous point. Even if your fairshare is low, your job gains priority by sitting the queue (see Job Priority ) The longer it sits the higher priority it gains. So even if you have very low fairshare your jobs will eventually run, it just may take several days to accomplish. Leverage Backfill : Slurm runs in two scheduling loops. The first loop is the main loop which simply looks at the top of the priority chain for the partition and tries to schedule that job. It will schedule jobs until it hits a job it cannot schedule and then it restarts the loop. The second loop is the backfill loop. This loop looks through jobs further down in the queue and asks can I schedule this job now and not interfere with the start time of the top priority job. Think of it as the scheduler playing giant game of three dimensional tetris, where the dimensions are number of cores, amount of memory, and amount of time. If your job will fit in the gaps that the scheduler has it will put your job in that spot even if it is low priority. This requires you to be very accurate in specifying the core, memory, and time usage ( typically below ) of your job. The better constrained your job is the more likely the scheduler is to fit you in to these gaps**. The seff utility is a great way of figuring out your job performance. Plan : Better planning and knowledge of your historic usage can help you better budget your time on the cluster. Our clusters are not infinite resources . You have been allocated a slice of the cluster, thus it is best to budget your usage so that you can run high priority jobs when you need to. HPC Budget contribution : If your group has persistent high demand that cannot be met with your current allocation, serious consideration should be given to contributing to the ULHPC budget line. This should be done for funded research projects - see HPC Resource Allocations for Research Project This can be done by each individual PI, Dean or IC director In all cases, any contribution on year Y grants additional shares for the group starting year Y+1 . We apply a consistent (complex) function taking into account depreciation of the investment. Contact us (by mail or by a ticket for more details.","title":"Fairsharing"},{"location":"slurm/fairsharing/#fairsharing-and-job-accounting","text":"Resources : Slurm Priority, Fairshare and Fair Tree (PDF) SchedMD Slurm documentation: Multifactor Priority Plugin Fair tree algorithm, FAS RC docs , Official sshare documentation Fairshare allows past resource utilization information to be taken into account into job feasibility and priority decisions to ensure a fair allocation of the computational resources between the all ULHPC users. A difference with a equal scheduling is illustrated in the side picture ( source ). Essentially fairshare is a way of ensuring that users get their appropriate portion of a system. 
Sadly this term is also used confusingly for different parts of fairshare listed below, so for the sake of clarity, the following terms will be used: [Raw] Share : portion of the system users have been granted [Raw] Usage : amount of the system users have actually used so far The fairshare score is the value the system calculates based on the usage and the share (see below) Priority : the priority that users are assigned based off of their fairshare score. Demystifying Fairshare While fairshare may seem complex and confusing, it is actually quite logical once you think about it. The scheduler needs some way to adjudicate who gets what resources when different groups on the cluster have been granted different resources and shares for various reasons (see Account Hierarchy ). In order to serve the great variety of groups and needs on the cluster, a method of fairly adjudicating job priority is required. This is the goal of Fairshare . Fairshare allows those users who have not fully used their resource grant to get higher priority for their jobs on the cluster, while making sure that those groups that have used more than their resource grant do not overuse the cluster. The ULHPC supercomputers are a limited shared resource, and Fairshare ensures everyone gets a fair opportunity to use it regardless of how big or small the group is .","title":"Fairsharing and Job Accounting"},{"location":"slurm/fairsharing/#fairtree-algorithm","text":"There exists several fairsharing algorithms implemented in Slurm: Classic Fairshare Depth-Oblivious Fair-share Fair Tree (now implemented on ULHPC since Oct 2020) What is Fair Tree? The Fair Tree algorithm prioritizes users such that if accounts A and B are siblings and A has a higher fairshare factor than B, then all children of A will have higher fairshare factors than all children of B. This is done through a rooted plane tree (PDF) , also known as a rooted ordered tree, which is logically created then sorted by fairshare with the highest fairshare values on the left. The tree is then visited in a depth-first traversal way. Users are ranked in pre-order as they are found. The ranking is used to create the final fairshare factor for the user. Fair Tree Traversal Illustrated - initial post Some of the benefits include: All users from a higher priority account receive a higher fair share factor than all users from a lower priority account. Users are sorted and ranked to prevent errors due to precision loss. Ties are allowed. Account coordinators cannot accidentally harm the priority of their users relative to users in other accounts. Users are extremely unlikely to have exactly the same fairshare factor as another user due to loss of precision in calculations. New jobs are immediately assigned a priority. Overview of Fair Tree for End Users Level Fairshare Calculation","title":"FairTree Algorithm"},{"location":"slurm/fairsharing/#shares","text":"On ULHPC facilities, each user is associated by default to a meta-account reflecting its direct Line Manager within the institution (Faculty, IC, Company) you belong too -- see ULHPC Account Hierarchy . You may have other account associations (typically toward projects accounts, granting access to different QOS for instance), and each accounts have Shares granted to them. These Shares determine how much of the cluster that group/account has been granted . Users when they run are charged back for their runs against the account used upon job submission -- you can use sbatch|srun|... -A [...] to change that account. 
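As a sketch (project_myproject, launcher.sh and the job ID below are placeholders), you can verify afterwards which account a given job was actually charged against using sacct:

```bash
# Submit against a project meta-account instead of your default account
sbatch -A project_myproject launcher.sh
# Then check the account (and QOS) recorded for that job in the accounting DB
sacct -X -j <jobid> --format=JobID,Account%25,QOS,Partition,State
```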
ULHPC Usage Charging Policy Different rules are applied to define these weights/shares depending on the level in the hierarchy: L1 (Organizational Unit): arbitrary shares to dedicate at least 85% of the platform to serve UL needs and projects L2 : function of the out-degree of the tree nodes, reflecting also the past year funding L3 : a function reflecting the budget contribution of the PI/project (normalized on a per-month basis) for the year in exercise. L4 (ULHPC/IPA login): efficiency score, giving incentives for a more efficient usage of the platform.","title":"Shares"},{"location":"slurm/fairsharing/#fair-share-factor","text":"The Fairshare score is the value Slurm calculates based off of user's usage reflecting the difference between the portion of the computing resource that has been promised (share) and the amount of resources that has been consumed. It thus influences the order in which a user's queued jobs are scheduled to run based on the portion of the computing resources they have been allocated and the resources their jobs have already consumed. In practice, Slurm's fair-share factor is a floating point number between 0.0 and 1.0 that reflects the shares of a computing resource that a user has been allocated and the amount of computing resources the user's jobs have consumed. The higher the value, the higher is the placement in the queue of jobs waiting to be scheduled. Reciprocally, the more resources the users is consuming, the lower the fair share factor will be which will result in lower priorities.","title":"Fair Share Factor"},{"location":"slurm/fairsharing/#ulhpcshare-helper","text":"Listing the ULHPC shares: ulhpcshare helper sshare can be used to view the fair share factors and corresponding promised and actual usage for all users. However , you are encouraged to use the ulhpcshare helper function: # your current shares and fair-share factors among your associations ulhpcshare # as above, but for user '' ulhpcshare -u # as above, but for account '' ulhpcshare -A The column that contains the actual factor is called \"FairShare\".","title":"ulhpcshare helper"},{"location":"slurm/fairsharing/#official-sshare-utility","text":"ulhpcshare is a wrapper around the official sshare utility. You can quickly see your score with $ sshare [ -A ] [ -l ] [ --format = Account,User,RawShares,NormShares,EffectvUsage,LevelFS,FairShare ] It will show the Level Fairshare value as Level FS . The field shows the value for each association, thus allowing users to see the results of the fairshare calculation at each level. Note : Unlike the Effective Usage, the Norm Usage is not used by Fair Tree but is still displayed in this case.","title":"Official sshare utility"},{"location":"slurm/fairsharing/#slurm-parameter-definitions","text":"In this part some of the set slurm parameters are explained which are used to set up the Fair Tree Fairshare Algorithm. For a more detailed explanation please consult the official documentation PriorityCalcPeriod=HH:MM::SS : frequency in minutes that job half-life decay and Fair Tree calculations are performed. PriorityDecayHalfLife=[number of days]-[number of hours] : the time, of which the resource consumption is taken into account for the Fairshare Algorithm, can be set by this. PriorityMaxAge=[number of days]-[number of hours] : the maximal queueing time which counts for the priority calculation. Note that queueing times above are possible but do not contribute to the priority factor. 
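To see how these parameters translate into the actual priority of your own pending jobs, the sprio command mentioned in the Slurm commands section can be used; a minimal example:

```bash
# Long listing of the priority factors (age, fairshare, partition, QOS, ...) of your pending jobs
sprio -l -u $USER
```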
A quick way to check the currently running configuration is: scontrol show config | grep -i priority","title":"Slurm Parameter Definitions"},{"location":"slurm/fairsharing/#trackable-resources-tres-billing-weights","text":"Slurm saves accounting data for every job or job step that the user submits. On ULHPC facilities, Slurm Trackable RESources (TRES) is enabled to allow for the scheduler to charge back users for how much they have used of different features (i.e. not only CPU) on the cluster -- see Job Accounting and Billing . This is important as the usage of the cluster factors into the Fairshare calculation. As explained in the ULHPC Usage Charging Policy , we set TRES for CPU, GPU, and Memory usage according to weights defined as follows: Weight Description \\alpha_{cpu} \\alpha_{cpu} Normalized relative performance of CPU processor core (ref.: skylake 73.6 GFlops/core) \\alpha_{mem} \\alpha_{mem} Inverse of the average available memory size per core \\alpha_{GPU} \\alpha_{GPU} Weight per GPU accelerator Each partition has its own weights (combined into TRESBillingWeight ) you can check with # /!\\ ADAPT accordingly scontrol show partition ","title":"Trackable RESources (TRES) Billing Weights"},{"location":"slurm/fairsharing/#faq","text":"","title":"FAQ"},{"location":"slurm/fairsharing/#q-my-user-fairshare-is-low-what-can-i-do","text":"We have introduced an efficiency score evaluated on a regular basis (by default, every year) to measure how efficient you use the computational resources of the University according to several measures for completed jobs: How efficient you were to estimate the walltime of your jobs (Average Walltime Accuracy) How CPU/Memory efficient were your completed jobs (see seff ) Without entering into the details, we combine these metrics to compute an unique score value S_\\text{efficiency} S_\\text{efficiency} and you obtain a grade: A (very good), B , C , or D (very bad) which can increase your user share.","title":"Q: My user fairshare is low, what can I do?"},{"location":"slurm/fairsharing/#q-my-account-fairshare-is-low-what-can-i-do","text":"There are several things that can be done when your fairshare is low: Do not run jobs : Fairshare recovers via two routes. The first is via your group not running any jobs and letting others use the resource. That allows your fractional usage to decrease which in turn increases your fairshare score. The second is via the half-life we apply to fairshare which ages out old usage over time. Both of these method require not action but inaction on the part of your group. Thus to recover your fairshare simply stop running jobs until your fairshare reaches the level you desire. Be warned this could take several weeks to accomplish depending on your current usage. Be patient , as a corollary to the previous point. Even if your fairshare is low, your job gains priority by sitting the queue (see Job Priority ) The longer it sits the higher priority it gains. So even if you have very low fairshare your jobs will eventually run, it just may take several days to accomplish. Leverage Backfill : Slurm runs in two scheduling loops. The first loop is the main loop which simply looks at the top of the priority chain for the partition and tries to schedule that job. It will schedule jobs until it hits a job it cannot schedule and then it restarts the loop. The second loop is the backfill loop. 
This loop looks through jobs further down in the queue and asks can I schedule this job now and not interfere with the start time of the top priority job. Think of it as the scheduler playing giant game of three dimensional tetris, where the dimensions are number of cores, amount of memory, and amount of time. If your job will fit in the gaps that the scheduler has it will put your job in that spot even if it is low priority. This requires you to be very accurate in specifying the core, memory, and time usage ( typically below ) of your job. The better constrained your job is the more likely the scheduler is to fit you in to these gaps**. The seff utility is a great way of figuring out your job performance. Plan : Better planning and knowledge of your historic usage can help you better budget your time on the cluster. Our clusters are not infinite resources . You have been allocated a slice of the cluster, thus it is best to budget your usage so that you can run high priority jobs when you need to. HPC Budget contribution : If your group has persistent high demand that cannot be met with your current allocation, serious consideration should be given to contributing to the ULHPC budget line. This should be done for funded research projects - see HPC Resource Allocations for Research Project This can be done by each individual PI, Dean or IC director In all cases, any contribution on year Y grants additional shares for the group starting year Y+1 . We apply a consistent (complex) function taking into account depreciation of the investment. Contact us (by mail or by a ticket for more details.","title":"Q: My account fairshare is low, what can I do?"},{"location":"slurm/launchers/","text":"Slurm Launcher Examples \u00b6 ULHPC Tutorial / Getting Started ULHPC Tutorial / OpenMP/MPI When setting your default #SBATCH directive, always keep in mind your expected default resource allocation that would permit to submit your launchers without options sbatch (you will be glad in a couple of month not to have to remember the options you need to pass) and try to stick to a single node (to avoid to accidentally induce a huge submission). Resource allocation Guidelines \u00b6 General guidelines Always try to align resource specifications for your jobs with physical characteristics. Always prefer the use of --ntasks-per-{node,socket} over -n when defining your tasks allocation request to automatically scale appropriately upon multi-nodes submission with for instance sbatch -N 2 . Launcher template: #!/bin/bash -l # <--- DO NOT FORGET '-l' to facilitate further access to ULHPC modules #SBATCH -p #SBATCH -p #SBATCH -N 1 #SBATCH -N 1 #SBATCH --ntasks-per-node= #SBATCH --ntasks-per-node <#sockets * s> #SBATCH -c #SBATCH --ntasks-per-socket #SBATCH -c This would define by default a total of (left) or \\#sockets \\times \\#sockets \\times (right) tasks per node , each on threads . You MUST ensure that either: \\times \\times matches the number of cores avaiable on the target computing node (left), or = \\#sockets \\times \\#sockets \\times , and \\times \\times matches the number of cores per socket available on the target computing node (right). 
See Specific Resource Allocation Node (type) #Nodes #Socket / #Cores RAM [GB] Features aion-[0001-0354] 354 8 / 128 256 batch,epyc iris-[001-108] 108 2 / 28 128 batch,broadwell iris-[109-168] 60 2 / 28 128 batch,skylake iris-[169-186] (GPU) 18 2 / 28 768 gpu,skylake,volta iris-[191-196] (GPU) 6 2 / 28 768 gpu,skylake,volta32 iris-[187-190] (Large-Memory) 4 4 / 112 3072 bigmem,skylake Aion (default Dual-CPU) 16 cores per socket and 8 (virtual) sockets (CPUs) per aion node. Examples: #SBATCH -p batch #SBATCH -p batch #SBATCH -p batch #SBATCH -N 1 #SBATCH -N 1 #SBATCH -N 1 #SBATCH --ntasks-per-node=128 #SBATCH --ntasks-per-node 16 #SBATCH --ntasks-per-node 8 #SBATCH --ntasks-per-socket 16 #SBATCH --ntasks-per-socket 2 #SBATCH --ntasks-per-socket 1 #SBATCH -c 1 #SBATCH -c 8 #SBATCH -c 16 Iris (default Dual-CPU) 14 cores per socket and 2 sockets (physical CPUs) per regular iris . Examples: #SBATCH -p batch #SBATCH -p batch #SBATCH -p batch #SBATCH -N 1 #SBATCH -N 1 #SBATCH -N 1 #SBATCH --ntasks-per-node=28 #SBATCH --ntasks-per-node 14 #SBATCH --ntasks-per-node 4 #SBATCH --ntasks-per-socket=14 #SBATCH --ntasks-per-socket 7 #SBATCH --ntasks-per-socket 2 #SBATCH -c 1 #SBATCH -c 2 #SBATCH -c 7 Iris (GPU) 14 cores per socket and 2 sockets (physical CPUs) per gpu iris , 4 GPU accelerator cards per node. You probably want to dedicate 1 task and \\frac{1}{4} \\frac{1}{4} of the available cores to the management of each GPU accelerator. Examples: #SBATCH -p gpu #SBATCH -p gpu #SBATCH -p gpu #SBATCH -N 1 #SBATCH -N 1 #SBATCH -N 1 #SBATCH --ntasks-per-node=1 #SBATCH --ntasks-per-node 2 #SBATCH --ntasks-per-node 4 #SBATCH -c 7 #SBATCH --ntasks-per-socket 1 #SBATCH --ntasks-per-socket 2 #SBATCH -G 1 #SBATCH -c 7 #SBATCH -c 7 #SBATCH -G 2 #SBATCH -G 4 Iris (Large-Memory) 28 cores per socket and 4 sockets (physical CPUs) per bigmem iris node. Examples: #SBATCH -p bigmem #SBATCH -p bigmem #SBATCH -p bigmem #SBATCH -N 1 #SBATCH -N 1 #SBATCH -N 1 #SBATCH --ntasks-per-node=4 #SBATCH --ntasks-per-node 8 #SBATCH --ntasks-per-node 16 #SBATCH --ntasks-per-socket=1 #SBATCH --ntasks-per-socket 2 #SBATCH --ntasks-per-socket 4 #SBATCH -c 28 #SBATCH -c 14 #SBATCH -c 7 You probably want to play with a single task but define the expected memory allocation with --mem= (Default units are megabytes - Different units can be specified using the suffix [K|M|G|T] ) Basic Slurm Launcher Examples \u00b6 Single core task 1 task per job (Note: prefer GNU Parallel in that case - see below) #!/bin/bash -l # <--- DO NOT FORGET '-l' ### Request a single task using one core on one node for 5 minutes in the batch queue #SBATCH -N 1 #SBATCH --ntasks-per-node=1 #SBATCH -c 1 #SBATCH --time=0-00:05:00 #SBATCH -p batch print_error_and_exit () { echo \"***ERROR*** $* \" ; exit 1 ; } # Safeguard for NOT running this launcher on access/login nodes module purge || print_error_and_exit \"No 'module' command\" # List modules required for execution of the task module load <...> # [...] Multiple Single core tasks 28 single-core tasks per job #!/bin/bash -l ### Request as many tasks as cores available on a single node for 3 hours #SBATCH -N 1 #SBATCH --ntasks-per-node=28 # On iris; for aion, use --ntasks-per-node=128 #SBATCH -c 1 #SBATCH --time=0-03:00:00 #SBATCH -p batch print_error_and_exit () { echo \"***ERROR*** $* \" ; exit 1 ; } module purge || print_error_and_exit \"No 'module' command\" module load <...> # [...] 
Multithreaded parallel tasks 7 multithreaded tasks per job (4 threads each) #!/bin/bash -l ### Request as many tasks as cores available on a single node for 3 hours #SBATCH -N 1 #SBATCH --ntasks-per-node=7 # On iris; for aion, use --ntasks-per-node=32 #SBATCH -c 4 #SBATCH --time=0-03:00:00 #SBATCH -p batch print_error_and_exit () { echo \"***ERROR*** $* \" ; exit 1 ; } module purge || print_error_and_exit \"No 'module' command\" module load <...> # [...] Embarrassingly Parallel Tasks \u00b6 For many users, the reason to consider (or be encouraged) to offload their computing executions on a (remote) HPC or Cloud facility is tied to the limits reached by their computing devices (laptop or workstation). It is generally motivated by time constraints: \"My computations take several hours/days to complete. On an HPC, it will last a few minutes, no?\" or search-space explorations: \"I need to check my application against a huge number of input pieces (files) - it worked on a few of them locally but takes ages for a single check. How to proceed on HPC?\" In most cases, your favorite Java application or R/Python (custom) development scripts, iterated over multiple input conditions, are inherently SERIAL : they are able to use only one core when executed. You thus deal with what is often called a Bag of (independent) tasks , also referred to as embarrassingly parallel tasks . In this case, you MUST NOT overload the job scheduler with a large number of small (single-core) jobs. Instead, you should use GNU Parallel which permits the effective management of such tasks in a way that optimizes both the resource allocation and the completion time. More specifically, GNU Parallel is a tool for executing tasks in parallel, typically on a single machine. When coupled with the Slurm command srun, parallel becomes a powerful way of distributing a set of tasks amongst a number of workers. This is particularly useful when the number of tasks is significantly larger than the number of available workers (i.e. $SLURM_NTASKS ), and each task is independent of the others. ULHPC Tutorial: GNU Parallel launcher for Embarrassingly Parallel Jobs Luckily, we have prepared a generic GNU Parallel launcher that should be straightforward to adapt to your own workflow following our tutorial : Create a dedicated script run_ responsible for running your Java/R/Python tasks, taking as argument the parameter(s) of each run (a minimal sketch of such a wrapper is given after this list). You can take inspiration from run_stressme for instance. test it interactively rename the generic launcher launcher.parallel.sh to launcher_.sh , enable #SBATCH --dependency singleton and set the jobname change TASK to point to the absolute path of the run_ script set TASKLISTFILE to point to a file with the parameters to pass to your script for each task if needed, adapt the #SBATCH --ntasks-per-node [...] and #SBATCH -c [...] to match your needs AND the hardware configuration of a single node (28 cores on iris, 128 cores on Aion) -- see guidelines test a batch run -- stick to a single node to make the best use of one full node.
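A minimal sketch of such a per-task wrapper script is given below; every name in it (run_myapp.sh, process.py, the module loaded) is purely illustrative and must be adapted to your own application:

```bash
#!/bin/bash -l
# run_myapp.sh -- hypothetical per-task wrapper for the generic GNU Parallel launcher.
# It receives one entry of the task list file as argument and runs a single,
# independent (serial) computation with it.
print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
module purge || print_error_and_exit "No 'module' command"
module load lang/Python          # adapt: load whatever your task actually needs
INPUT=$1                         # one parameter (e.g. an input file) per task
python ~/myproject/process.py "${INPUT}"
```

Each invocation should stay strictly serial (one core): the surrounding launcher is the one distributing the tasks over the cores reserved by Slurm.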
Serial Task script Launcher \u00b6 Serial Killer (Generic template) #!/bin/bash -l # <--- DO NOT FORGET '-l' #SBATCH -N 1 #SBATCH --ntasks-per-node=1 #SBATCH -c 1 #SBATCH --time=0-01:00:00 #SBATCH -p batch print_error_and_exit () { echo \"***ERROR*** $* \" ; exit 1 ; } module purge || print_error_and_exit \"No 'module' command\" # C/C++: module load toolchain/intel # OR: module load toolchain/foss # Java: module load lang/Java/1.8 # Ruby/Perl/Rust...: module load lang/{Ruby,Perl,Rust...} # /!\\ ADAPT TASK variable accordingly - absolute path to the (serial) task to be executed TASK = ${ TASK := ${ HOME } /bin/app.exe } OPTS = $* srun ${ TASK } ${ OPTS } Serial Python #!/bin/bash -l #SBATCH -N 1 #SBATCH --ntasks-per-node=1 #SBATCH -c 1 #SBATCH --time=0-01:00:00 #SBATCH -p batch print_error_and_exit () { echo \"***ERROR*** $* \" ; exit 1 ; } module purge || print_error_and_exit \"No 'module' command\" # Python 3.X by default (also on system) module load lang/Python # module load lang/SciPy-bundle # and/or: activate the virtualenv you previously generated with # python -m venv source .//bin/activate OPTS = $* srun python [ ... ] ${ OPTS } R #!/bin/bash -l #SBATCH -N 1 #SBATCH --ntasks-per-node=1 #SBATCH -c 28 #SBATCH --time=0-01:00:00 #SBATCH -p batch print_error_and_exit () { echo \"***ERROR*** $* \" ; exit 1 ; } module purge || print_error_and_exit \"No 'module' command\" module load lang/R export OMP_NUM_THREADS = ${ SLURM_CPUS_PER_TASK :- 1 } OPTS = $* srun Rscript + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/services/jupyter/index.html b/services/jupyter/index.html new file mode 100644 index 00000000..aede0593 --- /dev/null +++ b/services/jupyter/index.html @@ -0,0 +1,3186 @@ + + + + + + + + + + + + + + + + + + + + + + + + Jupyter Notebook - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Jupyter Notebook

    +

    +

    JupyterLab is a flexible, popular literate-computing web application for creating notebooks containing code, equations, visualization, and text. Notebooks are documents that contain both computer code and rich text elements (paragraphs, equations, figures, widgets, links). They are human-readable documents containing analysis descriptions and results but are also executable data analytics artifacts. Notebooks are associated with kernels, processes that actually execute code. Notebooks can be shared or converted into static HTML documents. They are a powerful tool for reproducible research and teaching.

    +

    Install Jupyter

    +

    While JupyterLab runs code in Jupyter notebooks for many programming languages, Python is a requirement (Python 3.3 or greater, or Python 2.7) for installing JupyterLab. New users may wish to install JupyterLab in a Conda environment. Hereafter, the pip package manager will be used to install JupyterLab.

    +

    We strongly recommend using the Python module provided by the ULHPC and installing Jupyter inside a Python virtual environment, after upgrading pip.

    +
    $ si
    +$ module load lang/Python #Loading default Python
    +$ python -m venv ~/environments/jupyter_env
    +$ source ~/environments/jupyter_env/bin/activate
    +$ python -m pip install --upgrade pip
    +$ python -m pip install jupyterlab
    +
    + +
    +

    Warning

    +

    Modules are not allowed on the access servers. To test Jupyter interactively, remember to first request an interactive job, using for instance the si tool.

    +
    +

    Once JupyterLab is installed, you can start configuring your installation by setting the environment variables corresponding to your needs (see the example sketch after this list):

    +
      +
    • JUPYTER_CONFIG_DIR: Set this environment variable to use a particular directory, other than the default, for Jupyter config files
    • +
    • JUPYTER_PATH: Set this environment variable to provide extra directories for the data search path. JUPYTER_PATH should contain a series of directories, separated by os.pathsep (; on Windows, : on Unix). Directories given in JUPYTER_PATH are searched before other locations. This is used in addition to other entries, rather than replacing any.
    • +
    • JUPYTER_DATA_DIR: Set this environment variable to use a particular directory, other than the default, as the user data directory
    • +
    • JUPYTER_RUNTIME_DIR: Set this to override where Jupyter stores runtime files
    • +
    • IPYTHONDIR: If set, this environment variable should be the path to a directory, which IPython will use for user data. IPython will create it if it does not exist.
    • +
    +
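    For illustration, these variables can simply be exported in your shell or launcher script before starting JupyterLab; the directories below are hypothetical examples to adapt to your own layout: +

    export JUPYTER_CONFIG_DIR="${HOME}/environments/jupyter_env/etc/jupyter"
    +export JUPYTER_DATA_DIR="${HOME}/environments/jupyter_env/share/jupyter"
    +export JUPYTER_RUNTIME_DIR="${HOME}/.local/share/jupyter/runtime"
    +export IPYTHONDIR="${HOME}/.ipython"
    +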

    JupyterLab is now installed and ready.

    +
    Installing the classic Notebook

    JupyterLab (jupyterlab) is a new package which automates many tasks that were performed manually in the traditional Jupyter package (jupyter). If you prefer to install the classic notebook, you need to install the IPython kernel manually as well, replacing +

    python -m pip install jupyterlab
    +
    +with: +
    python -m pip install jupyter ipykernel
    +

    +
    +

    Providing access to kernels of other environments

    +

    JupyterLab makes sure that a default IPython kernel is available, with the environment (and the Python version) with which the lab was created. Other environments can export a kernel to a JupyterLab instance, allowing the instance to launch interactive sessions inside environments other than the one where JupyterLab is installed.

    +

    You can set up kernels for different environments in the same notebook server. Create the environment with the Python version and the packages you require, and then register the kernel in any environment with Jupyter (lab or classic notebook) installed. For instance, if we have installed Jupyter in ~/environments/jupyter_env: +

    source ~/environments/other_python_venv/bin/activate
    +python -m pip install ipykernel
    +python -m ipykernel install --prefix=${HOME}/environments/jupyter_env --name other_python_env --display-name "Other Python env"
    +deactivate
    +
    +Then all kernels and their associated environment can be started from the same Jupyter instance in the ~/environments/jupyter_env Python venv.

    +

    You can also use the flag --user instead of --prefix to install the kernel in the default per-user location, making it available to all Jupyter environments of your user.

    +

    Kernels for Conda environments

    +

    If you would like to install a kernel in a Conda environment, install the ipykernel from the conda-forge channel. For instance, +

    micromamba install --name conda_env conda-forge::ipykernel
    +micromamba run --name conda_env python -m ipykernel install --prefix=${HOME}/environments/jupyter_env --name other_python_env --display-name "Other Python env"
    +
    +will make your Conda environment, conda_env, available as a kernel in the Jupyter instance launched from the ~/environments/jupyter_env Python venv.

    +

    Starting a Jupyter Notebook

    +

    Jupyter notebooks must be started as Slurm jobs. The following script is a template for Jupyter submission scripts that will rarely need modifications. Most often you will only need to modify the session duration (--time SBATCH option).

    +
    +

    Slurm Launcher script for Jupyter Notebook

    +
    #!/usr/bin/bash --login
    +#SBATCH --job-name=Jupyter
    +#SBATCH --nodes=1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH --cpus-per-task=2   # Change accordingly, note that ~1.7GB RAM is provisioned per core
    +#SBATCH --partition=batch
    +#SBATCH --qos=normal
    +#SBATCH --output=%x_%j.out  # Print messages to 'Jupyter_<job id>.out'
    +#SBATCH --error=%x_%j.err   # Print debug messages to 'Jupyter_<job id>.err'
    +#SBATCH --time=0-01:00:00   # Change maximum allowable jupyter server uptime here
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +
    +# Load the default Python 3 module
    +module load lang/Python
    +source "${HOME}/environments/jupyter_env/bin/activate"
    +
    +declare loopback_device="127.0.0.1"
    +declare port="8888"
    +declare connection_instructions="connection_instructions.log"
    +
    +jupyter lab --ip=${loopback_device} --port=${port} --no-browser &
    +declare lab_pid=$!
    +
    +# Add connection instruction
    +echo "# Connection instructions" > "${connection_instructions}"
    +echo "" >> "${connection_instructions}"
    +echo "To access the jupyter notebook execute on your personal machine:" >> "${connection_instructions}"
    +echo "ssh -J ${USER}@access-${ULHPC_CLUSTER}.uni.lu:8022 -L ${port}:${loopback_device}:${port} ${USER}@$(hostname -i)" >> "${connection_instructions}"
    +echo "" >> "${connection_instructions}"
    +echo "To access the jupyter notebook if you have setup a special key (e.g ulhpc_id_ed25519) to connect to cluster nodes execute on your personal machine:" >> "${connection_instructions}"
    +echo "ssh -i ~/.ssh/hpc_id_ed25519 -J ${USER}@access-${ULHPC_CLUSTER}.uni.lu:8022 -L ${port}:${loopback_device}:${port} ${USER}@$(hostname -i)" >> "${connection_instructions}"
    +echo "" >> "${connection_instructions}"
    +echo "Then navigate to:" >> "${connection_instructions}"
    +
    +# Wait for the server to start
    +sleep 2s
    +# Wait and check that the landing page is available
    +curl \
    +    --connect-timeout 10 \
    +    --retry 5 \
    +    --retry-delay 1 \
    +    --retry-connrefused \
    +    --silent --show-error --fail \
    +    "http://${loopback_device}:${port}" > /dev/null
    +# Note down the URL
    +jupyter lab list 2>&1 \
    +    | grep -E '\?token=' \
    +    | awk 'BEGIN {FS="::"} {gsub("[ \t]*","",$1); print $1}' \
    +    | sed -r 's/([0-9]{1,3}\.){3}[0-9]{1,3}/127\.0\.0\.1/g' \
    +    >> "${connection_instructions}"
    +
    +# Save some debug information
    +echo -e '\n===\n'
    +
    +echo "AVAILABLE LABS"
    +echo ""
    +jupyter lab list
    +
    +echo -e '\n===\n'
    +
    +echo "CONFIGURATION PATHS"
    +echo ""
    +jupyter --paths
    +
    +echo -e '\n===\n'
    +
    +echo "KERNEL SPECIFICATIONS"
    +echo ""
    +jupyter kernelspec list
    +
    +# Wait for the user to terminate the lab
    +wait ${lab_pid}
    +
    + +
    +

    Once your job is running (see Joining/monitoring running jobs), you can combine the SSH port-forwarding command and the tokenized URL written in connection_instructions.log to connect to the notebook from your laptop. Open a terminal on your laptop, copy-paste the ssh command contained in the file connection_instructions.log, and then navigate to the webpage link provided.

    +
    +

    Example content of connection_instructions.log

    +
    > cat connection_instructions.log
    +# Connection instructions
    +
    +To access the jupyter notebook execute on your personal machine:
    +ssh -J gkafanas@access-aion.uni.lu:8022 -L 8888:127.0.0.1:8888 gkafanas@172.21.12.29
    +
    +To access the jupyter notebook if you have setup a special key (e.g ulhpc_id_ed25519) to connect to cluster nodes execute on your personal machine:
    +ssh -i ~/.ssh/ulhpc_id_ed25519 -J gkafanas@access-aion.uni.lu:8022 -L 8888:127.0.0.1:8888 gkafanas@172.21.12.29
    +
    +Then navigate to:
    +http://127.0.0.1:8888/?token=b7cf9d71d5c89627250e9a73d4f28cb649cd3d9ff662e7e2
    +
    + +
    +

    As the instructions suggest, you access the jupyter lab server in the compute node by calling +

    ssh -J gkafanas@access-aion.uni.lu:8022 -L 8888:127.0.0.1:8888 gkafanas@172.21.12.29
    +
    +an SSH command that

    +
      +
    • opens a connection to your allocated cluster node jumping through the login node (-J gkafanas@access-aion.uni.lu:8022 gkafanas@172.21.12.29), and
    • +
    • forwards the local port 8888 to the Jupyter server on the compute node (-L 8888:127.0.0.1:8888).
    • +
    +

    Then, open the connection to the browser in your local machine by following the given link: +

    http://127.0.0.1:8888/?token=b7cf9d71d5c89627250e9a73d4f28cb649cd3d9ff662e7e2
    +

    +

    The link provides the access token, so you should be able to log in without a password.

    +
    +

    Warning

    +

    Do not forget to click on the quit button when finished to stop the Jupyter server and release the resources. Note that in the last line of the submission script the job waits for your Jupyter service to finish.

    +
    +

    If you encounter any issues, have a look in the debug output in Jupyter_<job id>.err. Generic information about the setup of your system is printed in Jupyter_<job id>.out.

    +
    Typical content of Jupyter_<job id>.err
    > cat Jupyter_3664038.err 
    +[I 2024-11-13 23:19:52.538 ServerApp] jupyter_lsp | extension was successfully linked.
    +[I 2024-11-13 23:19:52.543 ServerApp] jupyter_server_terminals | extension was successfully linked.
    +[I 2024-11-13 23:19:52.547 ServerApp] jupyterlab | extension was successfully linked.
    +[I 2024-11-13 23:19:52.766 ServerApp] notebook_shim | extension was successfully linked.
    +[I 2024-11-13 23:19:52.808 ServerApp] notebook_shim | extension was successfully loaded.
    +[I 2024-11-13 23:19:52.812 ServerApp] jupyter_lsp | extension was successfully loaded.
    +[I 2024-11-13 23:19:52.813 ServerApp] jupyter_server_terminals | extension was successfully loaded.
    +[I 2024-11-13 23:19:52.814 LabApp] JupyterLab extension loaded from /home/users/gkafanas/environments/jupyter_env/lib/python3.11/site-packages/jupyterlab
    +[I 2024-11-13 23:19:52.814 LabApp] JupyterLab application directory is /mnt/aiongpfs/users/gkafanas/environments/jupyter_env/share/jupyter/lab
    +[I 2024-11-13 23:19:52.815 LabApp] Extension Manager is 'pypi'.
    +[I 2024-11-13 23:19:52.826 ServerApp] jupyterlab | extension was successfully loaded.
    +[I 2024-11-13 23:19:52.827 ServerApp] Serving notebooks from local directory: /mnt/aiongpfs/users/gkafanas/support/jupyter
    +[I 2024-11-13 23:19:52.827 ServerApp] Jupyter Server 2.14.2 is running at:
    +[I 2024-11-13 23:19:52.827 ServerApp] http://127.0.0.1:8888/lab?token=fe665f90872927f5f84be627f54cf9056908c34b3765e17d
    +[I 2024-11-13 23:19:52.827 ServerApp]     http://127.0.0.1:8888/lab?token=fe665f90872927f5f84be627f54cf9056908c34b3765e17d
    +[I 2024-11-13 23:19:52.827 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
    +[C 2024-11-13 23:19:52.830 ServerApp] 
    +
    +    To access the server, open this file in a browser:
    +        file:///home/users/gkafanas/.local/share/jupyter/runtime/jpserver-2253096-open.html
    +    Or copy and paste one of these URLs:
    +        http://127.0.0.1:8888/lab?token=fe665f90872927f5f84be627f54cf9056908c34b3765e17d
    +        http://127.0.0.1:8888/lab?token=fe665f90872927f5f84be627f54cf9056908c34b3765e17d
    +[I 2024-11-13 23:19:52.845 ServerApp] Skipped non-installed server(s): bash-language-server, dockerfile-language-server-nodejs, javascript-typescript-langserver, jedi-language-server, julia-language-server, pyright, python-language-server, python-lsp-server, r-languageserver, sql-language-server, texlab, typescript-language-server, unified-language-server, vscode-css-languageserver-bin, vscode-html-languageserver-bin, vscode-json-languageserver-bin, yaml-language-server
    +[I 2024-11-13 23:19:53.824 ServerApp] 302 GET / (@127.0.0.1) 0.47ms
    +
    + +
    +
    Typical content of Jupyter_<job id>.out
    > cat Jupyter_3664038.out
    +
    +===
    +
    +AVAILABLE LABS
    +
    +Currently running servers:
    +http://127.0.0.1:8888/?token=fe665f90872927f5f84be627f54cf9056908c34b3765e17d :: /mnt/aiongpfs/users/gkafanas/support/jupyter
    +
    +===
    +
    +CONFIGURATION PATHS
    +
    +config:
    +    /home/users/gkafanas/environments/jupyter_env/etc/jupyter
    +    /mnt/aiongpfs/users/gkafanas/.jupyter
    +    /usr/local/etc/jupyter
    +    /etc/jupyter
    +data:
    +    /home/users/gkafanas/environments/jupyter_env/share/jupyter
    +    /home/users/gkafanas/.local/share/jupyter
    +    /usr/local/share/jupyter
    +    /usr/share/jupyter
    +runtime:
    +    /home/users/gkafanas/.local/share/jupyter/runtime
    +
    +===
    +
    +KERNEL SPECIFICATIONS
    +
    +Available kernels:
    +  other_python_env    /home/users/gkafanas/environments/jupyter_env/share/jupyter/kernels/other_python_env
    +  python3             /home/users/gkafanas/environments/jupyter_env/share/jupyter/kernels/python3 
    +
    + +
    +

    Password protected access

    +

    You can also set a password when launching the Jupyter lab, as detailed in the official Jupyter documentation. In that case, simply direct your browser to the URL http://127.0.0.1:8888/ and provide your password. You can see below an example of the login page.
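    For instance, to store a hashed password before submitting the job, you can proceed as in the sketch below (an assumption: a recent Jupyter Server providing the jupyter server password sub-command is installed in the virtual environment hosting JupyterLab): +

    source ~/environments/jupyter_env/bin/activate
    +jupyter server password
    +# The hashed password is stored in your Jupyter config directory
    +# (by default ~/.jupyter/jupyter_server_config.json)
    +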

    +
    Typical content of a password protected login page

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/setup/index.html b/setup/index.html new file mode 100644 index 00000000..aa1f9161 --- /dev/null +++ b/setup/index.html @@ -0,0 +1,2774 @@ + + + + + + + + + + + + + + + + + + + + + + + + Pre-Requisites and Laptop Setup - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    + +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 00000000..a5054809 --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,503 @@ + + + https://hpc-docs.uni.lu/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/getting-started/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/hpc-schools/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/accounts/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/policies/passwords/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/accounts/collaboration_accounts/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/accounts/projects/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/connect/ipa/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/slurm/accounts/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/teaching-with-the-ulhpc/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/data-center/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/systems/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/systems/aion/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/systems/aion/compute/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/systems/aion/interconnect/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/systems/aion/timeline/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/systems/iris/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/systems/iris/compute/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/systems/iris/interconnect/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/systems/iris/timeline/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/interconnect/ib/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/interconnect/ethernet/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/filesystems/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/filesystems/gpfs/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/filesystems/lustre/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/filesystems/isilon/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/connect/access/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/connect/ssh/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/connect/ipa/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/connect/ood/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/connect/troubleshooting/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/data/layout/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/data/sharing/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/data/transfer/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/data/gdpr/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/filesystems/quotas/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/data/project/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/filesystems/lfs/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/data/backups/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/filesystems/unix-file-permissions/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/data/encryption/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/environment/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/environment/modules/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/environment/easybuild/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/environment/conda/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/policies/aup/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/policies/maintenance/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/policies/usage-charging/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/slurm/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/slurm/commands/ + 2024-11-13 + daily + + 
https://hpc-docs.uni.lu/slurm/partitions/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/slurm/qos/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/slurm/accounts/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/jobs/reason-codes/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/slurm/fairsharing/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/jobs/priority/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/jobs/billing/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/jobs/interactive/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/jobs/submit/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/jobs/gpu/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/jobs/long/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/jobs/best-effort/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/slurm/launchers/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/services/jupyter/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/all_softwares/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/bio/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/cae/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/chem/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/compiler/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/data/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/debugger/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/devel/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/lang/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/lib/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/math/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/mpi/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/numlib/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/perf/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/phys/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/system/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/toolchain/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/tools/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/vis/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/2019b/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/swsets/2020b/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/build/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/eessi/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/containers/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/cae/fenics/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/cae/ansys/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/cae/openfoam/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/cae/abaqus/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/cae/fds/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/cae/meshing-tools/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/physics/wrf/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/computational-chemistry/electronics/abinit/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/computational-chemistry/electronics/ase/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/computational-chemistry/electronics/meep/ + 2024-11-13 + daily + + 
https://hpc-docs.uni.lu/software/computational-chemistry/electronics/quantum-espresso/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/computational-chemistry/electronics/vasp/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/computational-chemistry/molecular-dynamics/cp2k/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/computational-chemistry/molecular-dynamics/gromacs/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/computational-chemistry/molecular-dynamics/namd/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/computational-chemistry/molecular-dynamics/nwchem/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/computational-chemistry/molecular-dynamics/helping-libraries/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/maths/matlab/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/maths/mathematica/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/maths/stata/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/maths/julia/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/optim/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/software/visu/paraview/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/development/performance-debugging-tools/arm-forge/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/development/performance-debugging-tools/vtune/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/development/performance-debugging-tools/advisor/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/development/performance-debugging-tools/inspector/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/development/performance-debugging-tools/itac/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/development/performance-debugging-tools/scalasca/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/development/performance-debugging-tools/valgrind/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/development/performance-debugging-tools/aps/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/containers/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/services/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/support/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/contributing/ + 2024-11-13 + daily + + https://hpc-docs.uni.lu/contributing/versioning/ + 2024-11-13 + daily + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 00000000..a5e37031 Binary files /dev/null and b/sitemap.xml.gz differ diff --git a/slurm/accounts/index.html b/slurm/accounts/index.html new file mode 100644 index 00000000..10dcee00 --- /dev/null +++ b/slurm/accounts/index.html @@ -0,0 +1,3081 @@ + + + + + + + + + + + + + + + + + + + + + + + + Account Hierarchy - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Slurm Account Hierarchy

    +

    The ULHPC resources can be reserved and allocated for the execution of jobs scheduled on the platform thanks to a Resource and Job Management System (RJMS) - Slurm in practice. +This tool is configured to collect accounting information for every job and job step executed -- see SchedMD accounting documentation.

    +
    ULHPC account (login) vs. Slurm [meta-]account
      +
    • +

      Your ULHPC account defines the UNIX user you can use to connect to the facility and makes you known to our systems. It is managed by IPA and defines your login.

      +
    • +
    • +

      Slurm accounts, referred to as meta-accounts in the sequel, are more loosely defined in Slurm, and should be seen as something similar to a UNIX group: they may contain other (sets of) Slurm accounts, multiple users, or just a single user. A user may belong to multiple Slurm accounts, but MUST have a DefaultAccount, which is set to your line manager or principal investigator meta-account.

      +
    • +
    +
    +

    ULHPC Account Tree Hierarchy

    +

    Every user job runs under a group account, granting access to specific QOS levels. +Such an account is unique within the account hierarchy. +Accounting records are organized as a hierarchical tree according to 3 layers (Slurm accounts) as depicted in the figure below (click to enlarge). +At the leaf level of the hierarchy stands the End user <login> from the IPA IdM database, bringing the total to 4 levels.

    +

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Level | Account Type | Description | Example
    L1 | meta-account | Top-level structure / organizations | UL, CRP, Externals, Projects, Trainings
    L2 | meta-account | Organizational Unit (Faculty, ICs, External partner, Funding program...) | FSTM, LCSB, LIST...
    L3 | meta-account | Principal investigators (PIs), project, courses/lectures | <firstname>.<lastname>, <acronym>, <course>
    L4 | login | End-users (staff, student): your ULHPC/IPA login | yourlogin
    +
    +

    Extracting your association tree

    +

    By default, you will be able to see only the account hierarchy you belong to through the association(s) set with your login. +You can extract it with:

    +
    $ sacctmgr show association where parent=root format="account,user%20,Share,QOS%50" withsubaccounts
    +   Account                 User     Share                                                QOS
    +---------------------- -------- ----------- --------------------------------------------------
    +                 <top>            <L1share>                   besteffort,debug,long,low,normal
    +             <orgunit>            <L2share>                   besteffort,debug,long,low,normal
    +<firstname>.<lastname>            <L3share>                   besteffort,debug,long,low,normal
    +<firstname>.<lastname>  <login>   <L4share>                   besteffort,debug,long,low,normal
    +
    + +
    +
    (Admins) Extract the full hierarchy

    The below commands assumes you have supervision rights on the root account.

    +

    To list available L1 accounts (Top-level structure / organizations), use +

    sacctmgr show association where parent=root format="cluster,account,Share,QOS%50"
    +
    +To list L2 accounts:

    +
    +
    sacctmgr show association where parent=UL format="cluster,account,Share,QOS%50"
    +
    + +
    +
    +
    sacctmgr show association where parent=CRP format="cluster,account,Share,QOS%50"
    +
    + +
    +
    +
    sacctmgr show association where parent=externals format="cluster,account,Share,QOS%50"
    +
    + +
    +
    +
    sacctmgr show association where parent=projects format="cluster,account,Share,QOS%50"
    +
    + +
    +
    +
    sacctmgr show association where parent=trainings format="cluster,account,Share,QOS%50"
    +
    + +
    +
    +

    To quickly list L3 accounts and its subaccounts: sassoc <account>, or +

    sacctmgr show association where accounts=<L3account> format="account%20,user%20,Share,QOS%50"
    +
    +To quickly list End User (L4) associations, use sassoc <login>, or +
    sacctmgr show association where users=<login> format="account%20,user%20,Share,QOS%50"
    +

    +
    +
    +

    Default account vs. multiple associations

    +

    A given user <login> can be associated to multiple accounts, but has a single DefaultAccount (a meta-account at the L3 level reflecting your line manager, in the format <firstname>.<lastname>).

    +
    +

    To get information about your account in the hierarchy, use the custom acct helper function, typically as acct $USER.

    +
    +

    Get ULHPC account information with acct <login>

    +

    # /!\ ADAPT <login> accordingly
    +$ acct <login>
    +# sacctmgr show user where name="<login>" format=user,account%20,DefaultAccount%20,share,qos%50 withassoc
    +     User                Account             Def Acct       Share                                     QOS
    +  ------- ----------------------- ----------------------  ------- ---------------------------------------
    +  <login>         project_<name1> <firstname>.<lastname>        1        besteffort,debug,long,low,normal
    +  <login>         project_<name2> <firstname>.<lastname>        1   besteffort,debug,high,long,low,normal
    +  <login>  <firstname>.<lastname> <firstname>.<lastname>        1        besteffort,debug,long,low,normal
    +# ==> <login> Default account: <firstname>.<lastname>
    +
    +In the above example, the user <login> is associated to 3 meta-accounts at the L3 level of the hierarchy (their PI <firstname>.<lastname> and two project accounts), each granting access to potentially different QOS. +The account used upon job submission can be set with the -A <account> option. With the above example: +
    $ sbatch|srun|... [...]                     # Use default account: <firstname>.<lastname>
    +$ sbatch|srun|... -A project_<name1> [...]  # Use account project_<name1>
    +$ sbatch|srun|... -A project_<name2> --qos high [...] # Use account project_<name2>, granting access to high QOS
    +$ sbatch|srun|... -A anotheraccount [...]   # Error: non-existing association between <login> and anotheraccount
    +

    +
    +

    To list all associations for a given user or meta-account, use the sassoc helper function: +

    # /!\ ADAPT <login> accordingly
    +$ sassoc <login>
    +
    +More classically, you may use the sacctmgr show [...] command:

    +
      +
    • User information: sacctmgr show user where name=<login> [withassoc] (use the withassoc attribute to list all associations).
    • +
    • Default account: sacctmgr show user where name="<login>" format=DefaultAccount -P -n
    • +
    • Get the parent account: sacctmgr show account where name=ulhpc format=Org -n -P
    • +
    +

    To get the current association tree: add withsubaccounts to see ALL sub accounts

    +
    # L1,L2 or L3 account /!\ ADAPT <name> accordingly
    +sacctmgr show association tree where accounts=<name> format=account,share
    +# End user (L4)
    +sacctmgr show association where users=$USER  format=account,User,share,Partition,QOS
    +
    + +
    No association, no job!

    It is mandatory to have your login registered within at least one association toward a meta-account (PI, project name) to be able to schedule jobs on the ULHPC facility.

    +
    +

    Impact on FairSharing and Job Accounting

    +

    Every node in the above-mentioned tree hierarchy is associated with a weight defining its Raw Share in the FairSharing mechanism in place. + +Different rules are applied to define these weights/shares depending on the level in the hierarchy:

    +
      +
    • L1 (Organizational Unit): arbitrary shares to dedicate at least 85% of the platform to serve UL needs and projects
    • +
    • L2: function of the out-degree of the tree nodes, reflecting also the past year funding
    • +
    • L3: a function reflecting the budget contribution of the PI/project (normalized on a per-month basis) for the year in exercise.
    • +
    • L4 (ULHPC/IPA login): efficiency score, giving incentives for a more efficient usage of the platform.
    • +
    + + +

    More details are given on this page.

    +

    Default vs. Project accounts

    +

    Default account associations are defined as follows:

    +
      +
    • For UL staff or external partners: your direct Line Manager firstname.lastname within the institution (Faculty, IC, Company) you belong to.
    • +
    • For students: the lecture/course they are registered to
        +
      • Guest student/training accounts are associated to the Students meta-account.
      • +
      +
    • +
    +

    In addition, your user account (ULHPC login) may be associated to other meta-accounts such as projects or specific training events.

    +

    To establish job accounting against these extra specific accounts, use:

    +
    {sbatch|srun} -A project_<name> [...]
    +
    + +

    For more details, see Project accounts.

    +
    +
    +
      +
    1. +

      restrictions apply and do not permit revealing all information for accounts other than yours. 

      +
    2. +
    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/slurm/commands/index.html b/slurm/commands/index.html new file mode 100644 index 00000000..80dcd512 --- /dev/null +++ b/slurm/commands/index.html @@ -0,0 +1,3650 @@ + + + + + + + + + + + + + + + + + + + + + + + + Convenient Slurm Commands - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Main Slurm Commands

    +

    Submit Jobs

    + + +

    There are three ways of submitting jobs with Slurm, using either sbatch, srun or salloc:

    +
    +
    ### /!\ Adapt <partition>, <qos>, <account> and <command> accordingly
    +sbatch -p <partition> [--qos <qos>] [-A <account>] [...] <path/to/launcher.sh>
    +
    + +
    +
    +

    ### /!\ Adapt <partition>, <qos>, <account> and <command> accordingly
    +srun -p <partition> [--qos <qos>] [-A <account>] [...] --pty bash
    +
    +srun is also to be used within your launcher script to initiate a job step.

    +
    +
    +
    # Request interactive jobs/allocations
    +### /!\ Adapt <partition>, <qos>, <account> and <command> accordingly
    +salloc -p <partition> [--qos <qos>] [-A <account>] [...] <command>
    +
    + +
    +
    +

    sbatch

    + + +

    sbatch is used to submit a batch launcher script for later execution, corresponding to batch/passive submission mode. +The script will typically contain one or more srun commands to launch parallel tasks. +Upon submission with sbatch, Slurm will:

    +
      +
    • allocate resources (nodes, tasks, partition, constraints, etc.)
    • +
    • run a single copy of the batch script on the first allocated node
        +
      • in particular, if you depend on other scripts, ensure you refer to them with their complete (absolute) path.
      • +
      +
    • +
    +

    When you submit the job, Slurm responds with the job's ID, which will be used to identify this job in reports from Slurm.

    +

    # /!\ ADAPT path to launcher accordingly
    +$ sbatch <path/to/launcher>.sh
    +Submitted batch job 864933
    +
    +

    +

    srun

    +

    srun is used to initiate parallel job steps within a job OR to start an interactive job. +Upon submission with srun, Slurm will:

    +
      +
    • (eventually) allocate resources (nodes, tasks, partition, constraints, etc.) when run for interactive submission
    • +
    • launch a job step that will execute on the allocated resources.
    • +
    +

    A job can contain multiple job steps executing sequentially +or in parallel on independent or shared resources within the job's +node allocation.

    +
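    As an illustrative sketch (the executable names are hypothetical), a launcher can thus chain or co-schedule several job steps within a single allocation: +

    #!/bin/bash -l
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node=4
    +#SBATCH --time=0-00:30:00
    +#SBATCH -p batch
    +# Sequential job steps, each using the full allocation (8 tasks)
    +srun ./pre_process.exe
    +srun ./solver.exe
    +# Two job steps sharing the allocation in parallel (4 tasks each)
    +srun -n 4 ./task_A.exe &
    +srun -n 4 ./task_B.exe &
    +wait
    +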

    salloc

    +

    salloc is used to allocate resources for a job +in real time. Typically this is used to allocate resources (nodes, tasks, partition, etc.) and spawn a +shell. The shell is then used to execute srun commands to launch +parallel tasks.

    + + +
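    For illustration (the requested resources and partition are arbitrary examples): +

    # Allocate 1 node for 1 hour, then run job steps interactively from the spawned shell
    salloc -N 1 --ntasks-per-node=4 -p batch --time=01:00:00
    +srun hostname        # job step running on the allocated node
    +srun ./my_app.exe    # hypothetical executable
    +exit                 # terminate the shell and release the allocation
    +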

    Interactive jobs: si*

    +

    You should use the helper functions si, si-gpu, si-bigmem to submit an interactive job.

    +

    For more details, see interactive jobs.

    +

    Collect Job Information

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Command | Description
    sacct [-X] -j <jobid> [...] | display accounting information on jobs.
    scontrol show [...] | view and/or update system, nodes, job, step, partition or reservation status
    seff <jobid> | get efficiency metrics of past job
    smap | graphically show information on jobs, nodes, partitions
    sprio | show factors that comprise a job's scheduling priority
    squeue [-u $(whoami)] | display jobs[steps] and their state
    sstat | show status of running jobs.
    + + +

    squeue

    +

    You can view information about jobs located in the Slurm scheduling queue (partition/qos), and optionally filter on a specific job state (R: running / PD: pending / F: failed / PR: preempted) with squeue:

    +
    $ squeue [-u <user>] [-p <partition>] [--qos <qos>] [--reservation <name>] [-t R|PD|F|PR]
    +
    + +

    To quickly access your jobs, you can simply use sq

    +

    Live job statistics

    +

    You can use the scurrent (for current interactive job) or (more generally) scontrol show job <jobid> to collect detailed information for a running job.

    +
    scontrol show job <jobid>
    $  scontrol show job 2166371
    +JobId=2166371 JobName=bash
    +   UserId=<login>(<uid>) GroupId=clusterusers(666) MCS_label=N/A
    +   Priority=12741 Nice=0 Account=ulhpc QOS=debug JobState=RUNNING Reason=None
    +   [...]
    +   SubmitTime=2020-12-07T22:08:25 EligibleTime=2020-12-07T22:08:25
    +   StartTime=2020-12-07T22:08:25 EndTime=2020-12-07T22:38:25
    +   [...]
    +   WorkDir=/mnt/irisgpfs/users/<login>
    +
    + +
    +

    Past job statistics: slist, sreport

    +

    Use the slist helper for a given job:

    +
    # /!\ ADAPT <jobid> accordingly
    +$ slist <jobid>
    +# sacct -j <JOBID> --format User,JobID,Jobname%30,partition,state,time,elapsed,\
    +#              MaxRss,MaxVMSize,nnodes,ncpus,nodelist,AveCPU,ConsumedEnergyRaw
    +# seff <jobid>
    +
    + +

    You can also use sreport to generate reports of job usage and cluster utilization for Slurm jobs. For instance, to list your usage in CPU-hours since the beginning of the year:

    +
    $ sreport -t hours cluster UserUtilizationByAccount Users=$USER  Start=$(date +%Y)-01-01
    +--------------------------------------------------------------------------------
    +Cluster/User/Account Utilization 2021-01-01T00:00:00 - 2021-02-13T23:59:59 (3801600 secs)
    +Usage reported in CPU Hours
    +----------------------------------------------------------------------------
    +  Cluster     Login     Proper Name                Account     Used   Energy
    +--------- --------- --------------- ---------------------- -------- --------
    +     iris   <login>          <name> <firstname>.<lastname>    [...]
    +     iris   <login>          <name>      project_<acronym>    [...]
    +
    + +

    Job efficiency

    +

    seff

    + + +

    Use seff to double-check the CPU/memory efficiency of a past job. The examples below should be self-explanatory:

    +
    +
    $ seff 2171749
    +Job ID: 2171749
    +Cluster: iris
    +User/Group: <login>/clusterusers
    +State: COMPLETED (exit code 0)
    +Nodes: 1
    +Cores per node: 28
    +CPU Utilized: 41-01:38:14
    +CPU Efficiency: 99.64% of 41-05:09:44 core-walltime
    +Job Wall-clock time: 1-11:19:38
    +Memory Utilized: 2.73 GB
    +Memory Efficiency: 2.43% of 112.00 GB
    +
    + +
    +
    +
    $ seff 2117620
    +Job ID: 2117620
    +Cluster: iris
    +User/Group: <login>/clusterusers
    +State: COMPLETED (exit code 0)
    +Nodes: 1
    +Cores per node: 16
    +CPU Utilized: 14:24:49
    +CPU Efficiency: 23.72% of 2-12:46:24 core-walltime
    +Job Wall-clock time: 03:47:54
    +Memory Utilized: 193.04 GB
    +Memory Efficiency: 80.43% of 240.00 GB
    +
    + +
    +
    +
    $ seff 2138087
    +Job ID: 2138087
    +Cluster: iris
    +User/Group: <login>/clusterusers
    +State: COMPLETED (exit code 0)
    +Nodes: 1
    +Cores per node: 64
    +CPU Utilized: 87-16:58:22
    +CPU Efficiency: 86.58% of 101-07:16:16 core-walltime
    +Job Wall-clock time: 1-13:59:19
    +Memory Utilized: 1.64 TB
    +Memory Efficiency: 99.29% of 1.65 TB
    +
    + +
    +
    +

    This illustrates a very bad job in terms of CPU/memory efficiency (below 4%): the user essentially wasted 4 hours of computation while mobilizing a full node and its 28 cores. +

    $ seff 2199497
    +Job ID: 2199497
    +Cluster: iris
    +User/Group: <login>/clusterusers
    +State: COMPLETED (exit code 0)
    +Nodes: 1
    +Cores per node: 28
    +CPU Utilized: 00:08:33
    +CPU Efficiency: 3.55% of 04:00:48 core-walltime
    +Job Wall-clock time: 00:08:36
    +Memory Utilized: 55.84 MB
    +Memory Efficiency: 0.05% of 112.00 GB
    +
    + This is typical of a single-core task, which could be drastically improved via GNU Parallel.

    +
    +
    +

    Note however that demonstrating a good CPU efficiency with seff may not be enough! +You may still induce an abnormal load on the reserved nodes if you spawn more processes than allowed by the Slurm reservation. +To avoid that, always try to prefix your executions with srun within your launchers. See also Specific Resource Allocations.

    + + +
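    For instance, inside a launcher (the executable name is hypothetical): +

    # Direct invocation: not tracked as a separate job step, and it is easier to
    # (accidentally) spawn more processes than reserved
    ./my_app.exe
    +# Prefixed with srun: the task is started as a tracked job step,
    +# bound to the resources of the allocation
    +srun ./my_app.exe
    +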

    susage

    + + +

    Use susage to check the walltime accuracy (Timelimit vs. Elapsed) of your past jobs

    +

    $ susage -h
    +Usage: susage [-m] [-Y] [-S YYYY-MM-DD] [-E YYYT-MM-DD]
    +  For a specific user (if accounting rights granted):    susage [...] -u <user>
    +  For a specific account (if accounting rights granted): susage [...] -A <account>
    +Display past job usage summary
    +
    +

    +

    Official sacct command

    + + +

    Alternatively, you can use sacct (use sacct --helpformat to get the list of available output fields) for COMPLETED or TIMEOUT jobs (see Job State Codes).

    +
    using sacct -X -S <start> [...] --format [...],time,elapsed,[...]

    ADAPT -S <start> and -E <end> dates accordingly - Format: YYYY-MM-DD. +hint: $(date +%F) will return today's date in that format, $(date +%Y) returns the current year, so the below command will list your completed (or timeout) jobs since the beginning of the year: +

    $ sacct -X -S $(date +%Y)-01-01 -E $(date +%F) --partition batch,gpu,bigmem --state CD,TO --format User,JobID,partition%12,qos,state,time,elapsed,nnodes,ncpus,allocGRES
    +     User        JobID    Partition        QOS      State  Timelimit    Elapsed   NNodes      NCPUS    AllocGRES
    +--------- ------------ ------------ ---------- ---------- ---------- ---------- -------- ---------- ------------
    + <login> 2243517             batch     normal    TIMEOUT 2-00:00:00 2-00:00:05        4        112
    + <login> 2243518             batch     normal    TIMEOUT 2-00:00:00 2-00:00:05        4        112
    + <login> 2244056               gpu     normal    TIMEOUT 2-00:00:00 2-00:00:12        1         16        gpu:2
    + <login> 2246094               gpu       high    TIMEOUT 2-00:00:00 2-00:00:29        1         16        gpu:2
    + <login> 2246120               gpu       high  COMPLETED 2-00:00:00 1-02:18:00        1         16        gpu:2
    + <login> 2247278            bigmem     normal  COMPLETED 2-00:00:00 1-05:59:21        1         56
    + <login> 2250178             batch     normal  COMPLETED 2-00:00:00   10:04:32        1          1
    + <login> 2251232               gpu     normal  COMPLETED 1-00:00:00   12:05:46        1          6        gpu:1
    +

    +
    + + +

    Platform Status

    +

    sinfo

    +

    sinfo allows viewing information about partition status (-p <partition>), problematic nodes (-R), and reservations (-T), optionally in a summarized form (-s):

    +
    sinfo [-p <partition>] {-s | -R | -T |...}
    +
    + +

    We are providing a certain number of helper functions based on sinfo:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Command | Description
    nodelist | List available nodes
    allocnodes | List currently allocated nodes
    idlenodes | List currently idle nodes
    deadnodes | List dead nodes per partition (hopefully none ;))
    sissues | List nodes with issues/problems, with reasons
    sfeatures | List available node features
    +

    Cluster, partition and QOS usage stats

    +

    We provide several custom ULHPC Slurm helpers, defined in /etc/profile.d/slurm.sh, to facilitate access to account/partition/qos/usage information. +They are listed below.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Command | Description
    acct <name> | Get information on user/account holder <name> in Slurm accounting DB
    irisstat, aionstat | report cluster status (utilization, partition and QOS live stats)
    listpartitionjobs <part> | List jobs (and current load) of the slurm partition <part>
    pload [-a] i/b/g/m | Overview of the Slurm partition load
    qload [-a] <qos> | Show current load of the slurm QOS <qos>
    sbill <jobid> | Display job charging / billing summary
    sjoin [-w <node>] | join a running job
    sassoc <name> | Show Slurm association information for <name> (user or account)
    slist <jobid> [-X] | List statistics of a past job
    sqos | Show QOS information and limits
    susage [-m] [-Y] [...] | Display past job usage summary
    +

    Updating jobs

    + + + + + + + + + + + + + + + + + + + + + + + + + +
    Command | Description
    scancel <jobid> | cancel a job or set of jobs.
    scontrol update jobid=<jobid> [...] | update pending job definition
    scontrol hold <jobid> | Hold job
    scontrol resume <jobid> | Resume held job
    +

    The scontrol command allows certain characteristics of a job to be +updated while it is still queued (i.e. not running), with the syntax scontrol update jobid=<jobid> [...]

    +
    +

    Important

    +

    Once the job is running, most changes requested with scontrol update jobid=[...] will NOT be applied.

    +
    +

    Change timelimit

    +
    # /!\ ADAPT <jobid> and new time limit accordingly
    +scontrol update jobid=<jobid> timelimit=<[DD-]HH:MM::SS>
    +
    + +

    Change QOS or Reservation

    +
    # /!\ ADAPT <jobid>, <qos>, <resname> accordingly
    +scontrol update jobid=<jobid> qos=<qos>
    +scontrol update jobid=<jobid> reservationname=<resname>
    +
    + +

    Change account

    +

    If you forgot to specify the expected project account:

    +
    # /!\ ADAPT <jobid>, <account> accordingly
    +scontrol update jobid=<jobid> account=<account>
    +
    + +
    +

    The new account must be eligible to run the job. See Account Hierarchy for more details.

    +
    +

    Hold and Resume jobs

    +

    Prevent a pending job from being started:

    +
    # /!\ ADAPT <jobid>  accordingly
    +scontrol hold <jobid>
    +
    + +

    Allow a held job to accrue priority and run:

    +
    # /!\ ADAPT <jobid>  accordingly
    +scontrol release <jobid>
    +
    + +

    Cancel jobs

    +

    Cancel a specific job:

    +
    # /!\ ADAPT <jobid> accordingly
    +scancel <jobid>
    +
    + +
    Cancel all jobs owned by a user (you)

    scancel -u $USER
    +
    +This only applies to jobs which are associated with your +accounts.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/slurm/fairsharing/index.html b/slurm/fairsharing/index.html new file mode 100644 index 00000000..8f3f468b --- /dev/null +++ b/slurm/fairsharing/index.html @@ -0,0 +1,3289 @@ + + + + + + + + + + + + + + + + + + + + + + + + Fairsharing - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + + + +
    +
    + + + + + + + + + + +

    Fairsharing and Job Accounting

    + +

    Fairshare allows past resource utilization information to be taken into +account in job feasibility and priority decisions to ensure a fair +allocation of the computational resources between all ULHPC users. +The difference with an equal scheduling is illustrated in the side picture (source).

    +

    +

    Essentially fairshare is a way of ensuring that users get their appropriate +portion of a system. Sadly this term is also used confusingly for different +parts of fairshare listed below, so for the sake of clarity, the following terms +will be used:

    +
      +
    • [Raw] Share: portion of the system users have been granted
    • +
    • [Raw] Usage: amount of the system users have actually used so far
        +
      • The fairshare score is the value the system calculates based on the usage + and the share (see below)
      • +
      +
    • +
    • Priority: the priority that users are assigned based off of their fairshare score.
    • +
    +
    +

    Demystifying Fairshare

    +

    While fairshare may seem complex and confusing, it is actually quite logical +once you think about it. +The scheduler needs some way to adjudicate who gets what resources when +different groups on the cluster have been granted different resources and shares +for various reasons (see Account Hierarchy).

    +

    In order to serve the great variety of groups and needs on the cluster, a +method of fairly adjudicating job priority is required. +This is the goal of Fairshare. +Fairshare allows those users who have not fully used their resource +grant to get higher priority for their jobs on the cluster, while making +sure that those groups that have used more than their resource grant +do not overuse the cluster.

    +

    The ULHPC supercomputers are a limited shared resource, and Fairshare +ensures everyone gets a fair opportunity to use it regardless of +how big or small the group is.

    +
    +

    FairTree Algorithm

    +

    There exist several fairsharing +algorithms +implemented in Slurm:

    + +
    +

    What is Fair Tree?

    +

    The Fair Tree algorithm +prioritizes users such that if accounts A and B are siblings and A has a +higher fairshare factor than B, then all children of A will have higher +fairshare factors than all children of B.

    +

    This is done through a rooted plane tree +(PDF), also known as a +rooted ordered tree, which is logically created then sorted by fairshare +with the highest fairshare values on the left. +The tree is then traversed depth-first. +Users are ranked in pre-order as they are found. The ranking is used to +create the final fairshare factor for the user. +Fair Tree Traversal +Illustrated - +initial post

    +

    Some of the benefits include:

    +
      +
    • All users from a higher priority account receive a higher fair share +factor than all users from a lower priority account.
    • +
    • Users are sorted and ranked to prevent errors due to precision loss. +Ties are allowed.
    • +
    • Account coordinators cannot accidentally harm the priority of their users +relative to users in other accounts.
    • +
    • Users are extremely unlikely to have exactly the same fairshare factor as +another user due to loss of precision in calculations.
    • +
    • New jobs are immediately assigned a priority.
    • +
    +

    Overview of Fair Tree for End Users + Level Fairshare Calculation

    +
    +

    Shares

    +

    On ULHPC facilities, each user is associated by default to a meta-account reflecting their +direct Line Manager within the institution (Faculty, IC, Company) they belong +to -- see ULHPC Account Hierarchy. +You may have other account associations (typically toward project accounts, granting +access to different QOS for instance), and each account has Shares granted to +it. These Shares determine how much of the cluster that group/account has +been granted. +Users are charged back for their runs against the account used +upon job submission -- you can use sbatch|srun|... -A <account> [...] to +change that account.

    +

    ULHPC Usage Charging +Policy

    + + +

    Different rules are applied to define these weights/shares depending on the level in the hierarchy:

    +
      +
    • L1 (Organizational Unit): arbitrary shares to dedicate at least 85% of the platform to serve UL needs and projects
    • +
    • L2: function of the out-degree of the tree nodes, reflecting also the past year funding
    • +
    • L3: a function reflecting the budget contribution of the PI/project (normalized on a per-month basis) for the year in exercise.
    • +
    • L4 (ULHPC/IPA login): efficiency score, giving incentives for a more efficient usage of the platform.
    • +
    + + +

    Fair Share Factor

    +

    The Fairshare score is the value Slurm calculates based on a user's +usage, reflecting the difference between the portion of the computing resource +that has been promised (share) and the amount of resources that has been +consumed. +It thus influences the order in which a user's queued jobs are scheduled to run, based on the portion of the computing resources they have been allocated and the resources their jobs have already consumed.

    +

    In practice, Slurm's fair-share factor is a floating point number between 0.0 and 1.0 that reflects the shares of a computing resource that a user has been allocated and the amount of computing resources the user's jobs have consumed.

    +
      +
    • The higher the value, the higher the placement in the queue of jobs waiting to be scheduled.
    • +
    • Conversely, the more resources a user consumes, the lower the fair-share factor will be, which results in lower priorities.
    • +
    +

    ulhpcshare helper

    +
    +

    Listing the ULHPC shares: ulhpcshare helper

    +

    sshare can be used to view the fair share factors and corresponding promised and actual usage for all users. +However, you are encouraged to use the ulhpcshare helper function: +

    # your current shares and fair-share factors among your associations
    +ulhpcshare
    +# as above, but for user '<login>'
    +ulhpcshare -u <login>
    +# as above, but for account '<account>'
    +ulhpcshare -A <account>
    +
    +The column that contains the actual factor is called "FairShare".

    +
    +

    Official sshare utility

    +

    ulhpcshare is a wrapper around the official sshare utility. +You can quickly see your score with +

    $ sshare  [-A <account>] [-l] [--format=Account,User,RawShares,NormShares,EffectvUsage,LevelFS,FairShare]
    +
    +It will show the Level Fairshare value as Level FS. +The field shows the value for each association, thus allowing users to see the results of the fairshare calculation at each level.

    +

    Note: Unlike the Effective Usage, the Norm Usage is not used by Fair Tree but is still displayed in this case.
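To see how the resulting fair-share factor combines with the other priority components (age, partition, QOS, ...) for your pending jobs, the standard sprio utility can also be used; a small sketch:

# Show the priority factors (including Fairshare) of your pending jobs
sprio -l -u $USER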

    +

    Slurm Parameter Definitions

    +

This part explains some of the Slurm parameters used to set up the Fair Tree Fairshare algorithm. For a more detailed explanation, please consult the official documentation.

    +
      +
• PriorityCalcPeriod=HH:MM:SS: the frequency at which job half-life decay and Fair Tree calculations are performed.
    • +
• PriorityDecayHalfLife=[number of days]-[number of hours]: the half-life with which past resource consumption is decayed, i.e. for how long past usage is taken into account by the Fairshare algorithm.
    • +
• PriorityMaxAge=[number of days]-[number of hours]: the maximal queueing time that counts for the priority calculation. Longer queueing times are possible but do not further increase the age-related priority factor.
    • +
    +

    A quick way to check the currently running configuration is:

    +
    scontrol show config | grep -i priority
    +
    + +

    Trackable RESources (TRES) Billing Weights

    +

Slurm saves accounting data for every job or job step that the user submits. On ULHPC facilities, Slurm Trackable RESources (TRES) is enabled to allow the scheduler to charge users back for how much they have used of the different resources (i.e. not only CPU) on the cluster -- see Job Accounting and Billing. This is important as the usage of the cluster factors into the Fairshare calculation.

    + + +

    As explained in the ULHPC Usage Charging +Policy, we set TRES for CPU, GPU, and Memory +usage according to weights defined as follows:

    + + + + + + + + + + + + + + + + + + + + + +
    WeightDescription
    \alpha_{cpu}Normalized relative performance of CPU processor core (ref.: skylake 73.6 GFlops/core)
    \alpha_{mem}Inverse of the average available memory size per core
    \alpha_{GPU}Weight per GPU accelerator
    +

    Each partition has its own weights +(combined into TRESBillingWeight) you can check with

    +
    # /!\ ADAPT <partition> accordingly
    +scontrol show partition <partition>
    +
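Once a job has completed, the billing TRES actually charged against your account can be retrieved from the accounting database with the standard sacct utility; a small sketch (adapt the job ID):

# /!\ ADAPT <jobid> accordingly
sacct -X -j <jobid> --format=JobID,Partition,AllocTRES%60,Elapsed,State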
    + + + + + + + +

    FAQ

    +

    Q: My user fairshare is low, what can I do?

    +

We have introduced an efficiency score, evaluated on a regular basis (by default, every year), to measure how efficiently you use the computational resources of the University according to several measures for completed jobs:

    +
      +
• How accurately you estimated the walltime of your jobs (Average Walltime Accuracy)
    • +
• How CPU/Memory efficient your completed jobs were (see seff)
    • +
    +

Without entering into the details, we combine these metrics to compute a unique score value S_\text{efficiency}, from which you obtain a grade: A (very good), B, C, or D (very bad), which can increase your user share.
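You can check the CPU and memory efficiency of your own completed jobs at any time with the seff utility mentioned above; a small sketch (adapt the job ID):

# /!\ ADAPT <jobid> accordingly -- meaningful only for completed jobs
seff <jobid>
# The report typically includes CPU Efficiency and Memory Efficiency figures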

    +

    Q: My account fairshare is low, what can I do?

    +

    There are several things that can be done when your fairshare is low:

    +
      +
    1. Do not run jobs: Fairshare recovers via two routes.
        +
      • The first is via your group not running any jobs and letting others use the +resource. That allows your fractional usage to decrease which in turn +increases your fairshare score.
      • +
• The second is via the half-life we apply to fairshare, which ages out old usage over time. Both of these methods require no action, but rather inaction, on the part of your group. Thus, to recover your fairshare, simply stop running jobs until it reaches the level you desire. Be warned that this could take several weeks to accomplish, depending on your current usage.
      • +
      +
    2. +
3. Be patient, as a corollary to the previous point. Even if your fairshare is low, your job gains priority by sitting in the queue (see Job Priority). The longer it sits, the higher the priority it gains. So even with a very low fairshare your jobs will eventually run; it may just take several days.
    4. +
    5. Leverage Backfill: Slurm runs in two scheduling loops.
        +
      • The first loop is the main loop which simply looks at the top of the + priority chain for the partition and tries to schedule that job. It will + schedule jobs until it hits a job it cannot schedule and then it restarts + the loop.
      • +
• The second loop is the backfill loop. This loop looks through jobs further down in the queue and asks whether they can be scheduled now without interfering with the start time of the top-priority job. Think of it as the scheduler playing a giant game of three-dimensional Tetris, where the dimensions are the number of cores, the amount of memory, and the amount of time. If your job fits in one of the gaps the scheduler has, it will be placed in that spot even if it has low priority. This requires you to be very accurate in specifying the core, memory, and time usage of your job: the better constrained your job is, the more likely the scheduler is to fit it into these gaps. The seff utility is a great way of figuring out your job performance.
      • +
      +
    6. +
    7. Plan: Better planning and knowledge of your historic usage can help you + better budget your time on the cluster. Our clusters are not infinite + resources. You have been allocated a slice of the cluster, thus it is best + to budget your usage so that you can run high priority jobs when you need to.
    8. +
    9. HPC Budget contribution: If your group has persistent high demand that cannot be met + with your current allocation, serious consideration should be given to + contributing to the ULHPC budget line.
        +
      • This should be done for funded research projects - see + HPC Resource Allocations for Research Project
      • +
• This can be done by each individual PI, Dean or IC director. In all cases, any contribution in year Y grants additional shares to the group starting in year Y+1. We apply a consistent (complex) function taking into account the depreciation of the investment. Contact us (by mail or by a ticket) for more details.
      • +
      +
    10. +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/slurm/images/equal_vs_fair_share.jpg b/slurm/images/equal_vs_fair_share.jpg new file mode 100644 index 00000000..d13895f7 Binary files /dev/null and b/slurm/images/equal_vs_fair_share.jpg differ diff --git a/slurm/images/slurm_mc_support.png b/slurm/images/slurm_mc_support.png new file mode 100644 index 00000000..b188125e Binary files /dev/null and b/slurm/images/slurm_mc_support.png differ diff --git a/slurm/index.html b/slurm/index.html new file mode 100644 index 00000000..f2f8b3bd --- /dev/null +++ b/slurm/index.html @@ -0,0 +1,3574 @@ + + + + + + + + + + + + + + + + + + + + + + + + Slurm Overview - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + + + +
    +
    + + + + + + + + + + +

    Slurm Resource and Job Management System

    +

    ULHPC uses Slurm (Simple Linux Utility for Resource Management) for cluster/resource management and job scheduling. +This middleware is responsible for allocating resources to users, providing a framework for starting, executing and monitoring work on allocated resources and scheduling work for future execution.

    +

    Official docs + Official FAQ + ULHPC Tutorial/Getting Started

    +

    +
    +

    IEEE ISPDC22: ULHPC Slurm 2.0

    +

    If you want more details on the RJMS optimizations performed upon Aion acquisition, check out our IEEE ISPDC22 conference paper (21st IEEE Int. Symp. on Parallel and Distributed Computing) presented in Basel (Switzerland) on July 13, 2022.

    +
    +

    IEEE Reference Format | ORBilu entry | ULHPC blog post | slides
    +Sebastien Varrette, Emmanuel Kieffer, and Frederic Pinel, "Optimizing the Resource and Job Management System of an Academic HPC and Research Computing Facility". In 21st IEEE Intl. Symp. on Parallel and Distributed Computing (ISPDC’22), Basel, Switzerland, 2022.

    +
    +
    +

    TL;DR Slurm on ULHPC clusters

    + + +

    In its concise form, the Slurm configuration in place on ULHPC +supercomputers features the following attributes you +should be aware of when interacting with it:

    +
      +
    • Predefined Queues/Partitions depending on node type
        +
      • batch (Default Dual-CPU nodes) Max: 64 nodes, 2 days walltime
      • +
      • gpu (GPU nodes nodes) Max: 4 nodes, 2 days walltime
      • +
      • bigmem (Large-Memory nodes) Max: 1 node, 2 days walltime
      • +
• In addition: interactive (for quick tests) Max: 2 nodes, 2h walltime
          +
        • for code development, testing, and debugging
        • +
        +
      • +
      +
    • +
    • Queue Policy: cross-partition QOS, mainly tied to priority level (low \rightarrow urgent)
        +
      • long QOS with extended Max walltime (MaxWall) set to 14 days
      • +
      • special preemptible QOS for best-effort jobs: besteffort.
      • +
      +
    • +
    • Accounts hierarchy associated to supervisors (multiple + associations possible), projects or trainings +
    • +
    • Slurm Federation configuration between iris and aion
        +
      • ensures global policy (coherent job ID, global scheduling, etc.) within ULHPC systems
      • +
      • easily submit jobs from one cluster to another using -M, --cluster aion|iris
      • +
      +
    • +
    + + +

    For more details, see the appropriate pages in the left menu (or the above conference paper).
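For instance, the federation lets you target a specific cluster from either login node with the -M/--clusters option; a small sketch, assuming an existing launcher.sh script:

# Submit to the aion cluster (works from an iris or aion login node)
sbatch -M aion -p batch launcher.sh
# Check your jobs on both clusters of the federation
squeue -M aion,iris -u $USER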

    +

    Jobs

    +

A job is an allocation of resources such as compute nodes assigned to a user for a certain amount of time. Jobs can be interactive or passive (e.g., a batch script) scheduled for later execution.

    +
    +

    What characterize a job?

    +

A user job has the following key characteristics:

    +
      +
    • set of requested resources:
        +
      • number of computing resources: nodes (including all their CPUs and cores) or CPUs (including all their cores) or cores
      • +
      • amount of memory: either per node or per CPU
      • +
• (wall)time needed for the user's tasks to complete their work
      • +
      +
    • +
    • a requested node partition (job queue)
    • +
    • a requested quality of service (QoS) level which grants users specific accesses
    • +
    • a requested account for accounting purposes
    • +
    +
    +

    Once a job is assigned a set of nodes, the user is able to initiate parallel work in the form of job steps (sets of tasks) in any configuration within the allocation.

    +

When you log in to a ULHPC system you land on an access/login node. Login nodes are only for editing and preparing jobs: they are not meant for actually running jobs. From the login node you can interact with Slurm to submit job scripts or start interactive jobs, which will then run on the compute nodes.

    +

    Submit Jobs

    + + +

    There are three ways of submitting jobs with slurm, using either sbatch, srun or salloc:

    +
    +
    ### /!\ Adapt <partition>, <qos>, <account> and <command> accordingly
    +sbatch -p <partition> [--qos <qos>] [-A <account>] [...] <path/to/launcher.sh>
    +
    + +
    +
    +

    ### /!\ Adapt <partition>, <qos>, <account> and <command> accordingly
+srun -p <partition> [--qos <qos>] [-A <account>] [...] --pty bash
    +
+srun is also to be used within your launcher script to initiate a job step.

    +
    +
    +
    # Request interactive jobs/allocations
    +### /!\ Adapt <partition>, <qos>, <account> and <command> accordingly
    +salloc -p <partition> [--qos <qos>] [-A <account>] [...] <command>
    +
    + +
    +
    +

    sbatch

    + + +

    sbatch is used to submit a batch launcher script for later execution, corresponding to batch/passive submission mode. +The script will typically contain one or more srun commands to launch parallel tasks. +Upon submission with sbatch, Slurm will:

    +
      +
    • allocate resources (nodes, tasks, partition, constraints, etc.)
    • +
• run a single copy of the batch script on the first allocated node
        +
• in particular, if you depend on other scripts, ensure you refer to them with their complete (absolute) path.
      • +
      +
    • +
    +

    When you submit the job, Slurm responds with the job's ID, which will be used to identify this job in reports from Slurm.

    +

    # /!\ ADAPT path to launcher accordingly
    +$ sbatch <path/to/launcher>.sh
    +Submitted batch job 864933
    +
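If you need to capture this job ID in a script, e.g. to chain jobs with -d/--dependency, the --parsable option makes sbatch print only the job ID; a small sketch (launcher.sh and post-processing.sh are placeholder names):

# Submit and keep only the job ID
JOBID=$(sbatch --parsable ./launcher.sh)
# Submit a second job that starts only once the first one completed successfully
sbatch -d afterok:${JOBID} ./post-processing.sh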
    +

    +

    srun

    +

srun is used to initiate parallel job steps within a job OR to start an interactive job. Upon submission with srun, Slurm will:

    +
      +
    • (eventually) allocate resources (nodes, tasks, partition, constraints, etc.) when run for interactive submission
    • +
    • launch a job step that will execute on the allocated resources.
    • +
    +

    A job can contain multiple job steps executing sequentially +or in parallel on independent or shared resources within the job's +node allocation.
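For illustration, a minimal launcher sketch combining a sequential step and two concurrent job steps within a single allocation (app1 and app2 are placeholder executables; --exact restricts each step to the resources it requests on recent Slurm versions):

#!/bin/bash -l
#SBATCH -N 1
#SBATCH --ntasks-per-node=4
#SBATCH -c 1
#SBATCH --time=0-00:30:00
#SBATCH -p batch

# Step 1: a single step using the whole allocation
srun -n $SLURM_NTASKS ./app1
# Steps 2 and 3: two 2-task steps sharing the allocation, run concurrently
srun -n 2 --exact ./app2 input1 &
srun -n 2 --exact ./app2 input2 &
wait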

    +

    salloc

    +

    salloc is used to allocate resources for a job +in real time. Typically this is used to allocate resources (nodes, tasks, partition, etc.) and spawn a +shell. The shell is then used to execute srun commands to launch +parallel tasks.
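A small interactive sketch of this workflow (adapt partition and resources; my_app is a placeholder):

# Allocate 1 node / 4 tasks on the interactive partition and spawn a shell
salloc -p interactive -N 1 --ntasks-per-node=4
# ...then, inside the spawned shell, launch job steps on the allocation:
srun hostname
srun -n 2 ./my_app
# ...and release the allocation when done
exit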

    + + +

    Specific Resource Allocation

    + + +

Within a job, you aim at running a certain number of tasks, and Slurm allows for fine-grained control of the resource allocation that must be satisfied for each task.

    +
    +

    Beware of Slurm terminology in Multicore Architecture!

    +

    +
      +
    • Slurm Node = Physical node, specified with -N <#nodes>
        +
• Advice: always make explicit the expected number of tasks per node using --ntasks-per-node <n>. This way you control the node footprint of your job.
      • +
      +
    • +
    • Slurm Socket = Physical Socket/CPU/Processor
        +
• Advice: if possible, also make explicit the expected number of tasks per socket (processor) using --ntasks-per-socket <s>.
          +
        • relations between <s> and <n> must be aligned with the physical NUMA characteristics of the node.
        • +
        • For instance on aion nodes, <n> = 8*<s>
        • +
        • For instance on iris regular nodes, <n>=2*<s> when on iris bigmem nodes, <n>=4*<s>.
        • +
        +
      • +
      +
    • +
    • (the most confusing): Slurm CPU = Physical CORE
        +
      • use -c <#threads> to specify the number of cores reserved per task.
      • +
      • Hyper-Threading (HT) Technology is disabled on all ULHPC compute nodes. In particular:
          +
        • assume #cores = #threads, thus when using -c <threads>, you can safely set +
          OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1} # Default to 1 if SLURM_CPUS_PER_TASK not set
          +
          +to automatically abstract from the job context
        • +
• it is in your interest to match the physical NUMA characteristics of the compute node you are running on (Ex: target 16 threads per socket on Aion nodes, as there are 8 virtual sockets per node, and 14 threads per socket on Iris regular nodes).
        • +
        +
      • +
      +
    • +
    +
    +

    The total number of tasks defined in a given job is stored in the $SLURM_NTASKS environment variable.

    +
    +

    The --cpus-per-task option of srun in Slurm 23.11 and later

    +

    In the latest versions of Slurm srun inherits the --cpus-per-task value requested by salloc or sbatch by reading the value of SLURM_CPUS_PER_TASK, as for any other option. This behavior may differ from some older versions where special handling was required to propagate the --cpus-per-task option to srun.

    +
    +

    In case you would like to launch multiple programs in a single allocation/batch script, divide the resources accordingly by requesting resources with srun when launching the process, for instance: +

    srun --cpus-per-task <some of the SLURM_CPUS_PER_TASK> --ntasks <some of the SLURM_NTASKS> [...] <program>
    +

    +

    We encourage you to always explicitly specify upon resource allocation the number of tasks you want per node/socket (--ntasks-per-node <n> --ntasks-per-socket <s>), to easily scale on multiple nodes with -N <N>. Adapt the number of threads and the settings to match the physical NUMA characteristics of the nodes

    +
    +

    16 cores per socket and 8 (virtual) sockets (CPUs) per aion node.

    +
      +
    • {sbatch|srun|salloc|si} [-N <N>] --ntasks-per-node <8n> --ntasks-per-socket <n> -c <thread>
        +
      • Total: <N>\times 8\times<n> tasks, each on <thread> threads
      • +
      • Ensure <n>\times<thread>= 16
      • +
      • Ex: -N 2 --ntasks-per-node 32 --ntasks-per-socket 4 -c 4 (Total: 64 tasks)
      • +
      +
    • +
    +
    +
    +

    14 cores per socket and 2 sockets (physical CPUs) per regular iris.

    +
      +
    • {sbatch|srun|salloc|si} [-N <N>] --ntasks-per-node <2n> --ntasks-per-socket <n> -c <thread>
        +
      • Total: <N>\times 2\times<n> tasks, each on <thread> threads
      • +
      • Ensure <n>\times<thread>= 14
      • +
      • Ex: -N 2 --ntasks-per-node 4 --ntasks-per-socket 2 -c 7 (Total: 8 tasks)
      • +
      +
    • +
    +
    +
    +

    28 cores per socket and 4 sockets (physical CPUs) per bigmem iris

    +
      +
    • {sbatch|srun|salloc|si} [-N <N>] --ntasks-per-node <4n> --ntasks-per-socket <n> -c <thread>
        +
      • Total: <N>\times 4\times<n> tasks, each on <thread> threads
      • +
      • Ensure <n>\times<thread>= 28
      • +
      • Ex: -N 2 --ntasks-per-node 8 --ntasks-per-socket 2 -c 14 (Total: 16 tasks)
      • +
      +
    • +
    +
    +
    + + +

    Job submission options

    + + +

There are several useful environment variables set by Slurm within an allocated job. The most important ones are detailed in the table below, which summarizes the main job submission options offered with {sbatch | srun | salloc} [...]:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Command-line optionDescriptionExample
    -N <N><N> Nodes request-N 2
    --ntasks-per-node=<n><n> Tasks-per-node request--ntasks-per-node=28
    --ntasks-per-socket=<s><s> Tasks-per-socket request--ntasks-per-socket=14
    -c <c><c> Cores-per-task request (multithreading)-c 1
    --mem=<m>GB<m>GB memory per node request--mem 0
    -t [DD-]HH[:MM:SS]>Walltime request-t 4:00:00
    -G <gpu><gpu> GPU(s) request-G 4
    -C <feature>Feature request (broadwell,skylake...)-C skylake
    -p <partition>Specify job partition/queue
    --qos <qos>Specify job qos
    -A <account>Specify account
    -J <name>Job name-J MyApp
    -d <specification>Job dependency-d singleton
    --mail-user=<email>Specify email address
    --mail-type=<type>Notify user by email when certain event types occur.--mail-type=END,FAIL
    +

At a minimum, a job submission script must include the number of nodes, the walltime, the type of partition and nodes (resource allocation constraints and features), and the quality of service (QOS). If a script does not specify any of these options, then a default may be applied. The full list of directives is documented in the man pages for the sbatch command (see man sbatch).

    + + +

    #SBATCH directives vs. CLI options

    +

    Each option can be specified either as an #SBATCH [...] directive in the job submission script:

    +
    #!/bin/bash -l                # <--- DO NOT FORGET '-l'
    +### Request a single task using one core on one node for 5 minutes in the batch queue
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 1
    +#SBATCH --time=0-00:05:00
    +#SBATCH -p batch
    +# [...]
    +
    + +

    Or as a command line option when submitting the script:

    +
    $ sbatch -p batch -N 2 --ntasks-per-node=1 -c 1 --time=0-00:05:00 ./first-job.sh
    +
    + +
    +

The command line and directive versions of an option are equivalent and interchangeable: if the same option is present both on the command line and as a directive, the command line will be honored. If the same option or directive is specified twice, the last value supplied will be used. Also, many options have both a long form, e.g. --nodes=2, and a short form, e.g. -N 2. These are equivalent and interchangeable.

    +
    +
    Common options to sbatch and srun

    Many options are common to both sbatch and srun, for example +sbatch -N 4 ./first-job.sh allocates 4 nodes to first-job.sh, and +srun -N 4 uname -n inside the job runs a copy of uname -n on each of 4 nodes.

    +

    If you don't specify an option in the srun command line, srun will +inherit the value of that option from sbatch. +In these cases the default behavior of srun is to assume the same +options as were passed to sbatch. This is achieved via environment +variables: sbatch sets a number of environment variables with names +like SLURM_NNODES and srun checks the values of those +variables. This has two important consequences:

    +
      +
    1. Your job script can see the settings it was submitted with by + checking these environment variables
    2. +
    3. You should NOT override these environment variables. Also be aware + that if your job script tries to do certain tricky things, such as using + ssh to launch a command on another node, the environment might not + be propagated and your job may not behave correctly
    4. +
    +
    +

    HW characteristics and Slurm features of ULHPC nodes

    +

    When selecting specific resources allocations, it is crucial to match the +hardware characteristics of the computing nodes. +Details are provided below:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Node (type)#Nodes#Socket / #CoresRAM [GB]Features
    aion-[0001-0354]3548 / 128256batch,epyc
    iris-[001-108]1082 / 28128batch,broadwell
    iris-[109-168]602 / 28128batch,skylake
    iris-[169-186] (GPU)182 / 28768gpu,skylake,volta
    iris-[191-196] (GPU)62 / 28768gpu,skylake,volta32
    iris-[187-190]
    (Large-Memory)
    44 / 1123072bigmem,skylake
    + + +

As can be seen, Slurm features are associated with ULHPC compute nodes and permit easy filtering of the list of nodes with the -C <feature> option.
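For instance, to constrain a job to a given node type (a small sketch, assuming an existing launcher.sh):

# Run only on Skylake nodes of the batch partition
sbatch -p batch -C skylake ./launcher.sh
# Request a GPU node featuring 32GB-memory V100 accelerators
sbatch -p gpu -C volta32 -G 1 ./launcher.sh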

    +

    To list available features, use sfeatures:

    +

    sfeatures
    +# sinfo  -o '%20N %.6D %.6c %15F %12P %f'
    +# NODELIST              NODES   CPUS NODES(A/I/O/T)  PARTITION    AVAIL_FEATURES
    +# [...]
    +
    +

    +
    +

    Always try to align resource specifications for your jobs with physical characteristics

    +

    The typical format of your Slurm submission should thus probably be: +

    sbatch|srun|... [-N <N>] --ntasks-per-node <n> -c <thread> [...]
    +sbatch|srun|... [-N <N>] --ntasks-per-node <#sockets * s> --ntasks-per-socket <s> -c <thread> [...]
    +
    +This would define a total of <N>\times<n> TASKS (first form) or +<N>\times \#sockets \times<s> TASKS (second form), each on +<thread> threads. + You MUST ensure that either:

    +
      +
• <n>\times<thread> matches the number of cores available on the target computing node (first form), or
    • +
    • <n>=\#sockets \times<s>, and <s>\times<thread> matches the +number of cores per socket available on the target computing node (second form).
    • +
    +
    +

    16 cores per socket and 8 virtual sockets (CPUs) per aion node. +Depending on the selected form, you MUST ensure that either +<n>\times<thread>=128, or that <n>=8<s> and <s>\times<thread>=16. +

    ### Example 1 - use all cores available
    +{sbatch|srun|salloc} -N 2 --ntasks-per-node 32 --ntasks-per-socket 4 -c 4 [...]
    +# Total: 64 tasks (spread across 2 nodes), each on 4 cores/threads
    +
    +### Example 2 - use all cores available
    +{sbatch|srun|salloc} --ntasks-per-node 128 -c 1  [...]
+# Total: 128 (single-core) tasks
    +
    +### Example 3 - use all cores available
    +{sbatch|srun|salloc} -N 1 --ntasks-per-node 8 --ntasks-per-socket 1 -c 16 [...]
    +# Total: 8 tasks, each on 16 cores/threads
    +
    +### Example 4 - use all cores available
    +{sbatch|srun|salloc} -N 1 --ntasks-per-node 2 -c 64 [...]
    +# Total: 2 tasks, each on 64 cores/threads
    +

    +
    +
    +

    14 cores per socket and 2 sockets (physical CPUs) per regular iris +node. Depending on the selected form, you MUST ensure that either +<n>\times<thread>=28, or that <n>=2<s> and <s>\times<thread>=14. +

    ### Example 1 - use all cores available
    +{sbatch|srun|salloc} -N 3 --ntasks-per-node 14 --ntasks-per-socket 7 -c 2 [...]
    +# Total: 42 tasks (spread across 3 nodes), each on 2 cores/threads
    +
    +### Example 2 - use all cores available
    +{sbatch|srun|salloc} -N 2 --ntasks-per-node 28 -c 1  [...]
+# Total: 56 (single-core) tasks
    +
    +### Example 3 - use all cores available
    +{sbatch|srun|salloc} -N 2 --ntasks-per-node 2 --ntasks-per-socket 1 -c 14 [...]
    +# Total: 4 tasks (spread across 2 nodes), each on 14 cores/threads
    +

    +
    +
    +

    28 cores per socket and 4 sockets (physical CPUs) per bigmem iris +node. +Depending on the selected form, you MUST ensure that either +<n>\times<thread>=112, or that <n>=4<s> and <s>\times<thread>=28. +

    ### Example 1 - use all cores available
    +{sbatch|srun|salloc} -N 1 --ntasks-per-node 56 --ntasks-per-socket 14 -c 2 [...]
    +# Total: 56 tasks on a single bigmem node, each on 2 cores/threads
    +
    +### Example 2 - use all cores available
    +{sbatch|srun|salloc} --ntasks-per-node 112 -c 1  [...]
+# Total: 112 (single-core) tasks
    +
    +### Example 3 - use all cores available
    +{sbatch|srun|salloc} -N 1 --ntasks-per-node 4 --ntasks-per-socket 1 -c 28 [...]
    +# Total: 4 tasks, each on 28 cores/threads
    +

    +
    +
    +
    + + +

    Using Slurm Environment variables

    +

Recall that the Slurm controller will set several SLURM_* variables in the environment of the batch script. The most important are listed in the table below - use them wisely to make your launcher script as flexible as possible, abstracting from and adapting to the allocation context, "independently" of the way the job script has been submitted.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Submission optionEnvironment variableTypical usage
    -N <N>SLURM_JOB_NUM_NODES or
    SLURM_NNODES
    --ntasks-per-node=<n>SLURM_NTASKS_PER_NODE
    --ntasks-per-socket=<s>SLURM_NTASKS_PER_SOCKET
    -c <c>SLURM_CPUS_PER_TASKOMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
    SLURM_NTASKS
    Total number of tasks
    srun -n $SLURM_NTASKS [...]
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/slurm/launchers/index.html b/slurm/launchers/index.html new file mode 100644 index 00000000..bdc04fbb --- /dev/null +++ b/slurm/launchers/index.html @@ -0,0 +1,3639 @@ + + + + + + + + + + + + + + + + + + + + + + + + Launcher Scripts Examples - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + + + +
    +
    + + + + + + + + + + +

    Slurm Launcher Examples

    +

    ULHPC Tutorial / Getting Started + ULHPC Tutorial / OpenMP/MPI

    +

When setting your default #SBATCH directives, always keep in mind the expected default resource allocation that would permit you to submit your launchers

    +
      +
1. without options sbatch <launcher> (you will be glad in a couple of months not to have to remember the options you need to pass) and
    2. +
3. try to stick to a single node (to avoid accidentally inducing a huge submission).
    4. +
    +

    Resource allocation Guidelines

    +
    +

    General guidelines

    +

Always try to align resource specifications for your jobs with physical characteristics. Always prefer the use of --ntasks-per-{node,socket} over -n when defining your task allocation request, to automatically scale appropriately upon multi-node submission with, for instance, sbatch -N 2 <launcher>. Launcher template:

    #!/bin/bash -l # <--- DO NOT FORGET '-l' to facilitate further access to ULHPC modules
    +#SBATCH -p <partition>                     #SBATCH -p <partition>
    +#SBATCH -N 1                               #SBATCH -N 1
    +#SBATCH --ntasks-per-node=<n>              #SBATCH --ntasks-per-node <#sockets * s>
    +#SBATCH -c <thread>                        #SBATCH --ntasks-per-socket <s>
    +                                           #SBATCH -c <thread>
    +
    +This would define by default a total of <n> (left) or \#sockets \times<s> (right) tasks per node, each on <thread> threads. +You MUST ensure that either:

    +
      +
• <n>\times<thread> matches the number of cores available on the target computing node (left), or
    • +
    • <n>=\#sockets \times<s>, and <s>\times<thread> matches the +number of cores per socket available on the target computing node (right).
    • +
    +

    See Specific Resource Allocation

    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Node (type)#Nodes#Socket / #CoresRAM [GB]Features
    aion-[0001-0354]3548 / 128256batch,epyc
    iris-[001-108]1082 / 28128batch,broadwell
    iris-[109-168]602 / 28128batch,skylake
    iris-[169-186] (GPU)182 / 28768gpu,skylake,volta
    iris-[191-196] (GPU)62 / 28768gpu,skylake,volta32
    iris-[187-190]
    (Large-Memory)
    44 / 1123072bigmem,skylake
    + + +
    +

    16 cores per socket and 8 (virtual) sockets (CPUs) per aion node. Examples: +

    #SBATCH -p batch                 #SBATCH -p batch                #SBATCH -p batch
    +#SBATCH -N 1                     #SBATCH -N 1                    #SBATCH -N 1
    +#SBATCH --ntasks-per-node=128    #SBATCH --ntasks-per-node 16    #SBATCH --ntasks-per-node 8
    +#SBATCH --ntasks-per-socket 16   #SBATCH --ntasks-per-socket 2   #SBATCH --ntasks-per-socket 1
    +#SBATCH -c 1                     #SBATCH -c 8                    #SBATCH -c 16
    +

    +
    +
    +

    14 cores per socket and 2 sockets (physical CPUs) per regular iris. Examples: +

    #SBATCH -p batch                #SBATCH -p batch                 #SBATCH -p batch
    +#SBATCH -N 1                    #SBATCH -N 1                     #SBATCH -N 1
    +#SBATCH --ntasks-per-node=28    #SBATCH --ntasks-per-node 14     #SBATCH --ntasks-per-node 4
    +#SBATCH --ntasks-per-socket=14  #SBATCH --ntasks-per-socket 7    #SBATCH --ntasks-per-socket 2
    +#SBATCH -c 1                    #SBATCH -c 2                     #SBATCH -c 7
    +

    +
    +
    +

    14 cores per socket and 2 sockets (physical CPUs) per gpu iris, 4 GPU accelerator cards per node. +You probably want to dedicate 1 task and \frac{1}{4} of the available cores to the management of each GPU accelerator. Examples: +

    #SBATCH -p gpu                  #SBATCH -p gpu                   #SBATCH -p gpu
    +#SBATCH -N 1                    #SBATCH -N 1                     #SBATCH -N 1
    +#SBATCH --ntasks-per-node=1     #SBATCH --ntasks-per-node 2      #SBATCH --ntasks-per-node 4
    +#SBATCH -c 7                    #SBATCH --ntasks-per-socket 1    #SBATCH --ntasks-per-socket 2
    +#SBATCH -G 1                    #SBATCH -c 7                     #SBATCH -c 7
    +                                #SBATCH -G 2                     #SBATCH -G 4
    +

    +
    +
    +

    28 cores per socket and 4 sockets (physical CPUs) per bigmem iris +node. Examples: +

    #SBATCH -p bigmem              #SBATCH -p bigmem                 #SBATCH -p bigmem
    +#SBATCH -N 1                   #SBATCH -N 1                      #SBATCH -N 1
    +#SBATCH --ntasks-per-node=4    #SBATCH --ntasks-per-node 8       #SBATCH --ntasks-per-node 16
    +#SBATCH --ntasks-per-socket=1  #SBATCH --ntasks-per-socket 2     #SBATCH --ntasks-per-socket 4
    +#SBATCH -c 28                  #SBATCH -c 14                     #SBATCH -c 7
    +
    +You probably want to play with a single task but define the expected memory allocation with --mem=<size[units]> (Default units are megabytes - Different units can be specified using the suffix [K|M|G|T])

    +
    +
    +

    Basic Slurm Launcher Examples

    +
    +
    +

    1 task per job (Note: prefer GNU Parallel in that case - see below)

    +
    #!/bin/bash -l                # <--- DO NOT FORGET '-l'
    +### Request a single task using one core on one node for 5 minutes in the batch queue
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 1
    +#SBATCH --time=0-00:05:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +# Safeguard for NOT running this launcher on access/login nodes
    +module purge || print_error_and_exit "No 'module' command"
    +# List modules required for execution of the task
    +module load <...>
    +# [...]
    +
    + +
    +
    +
    +
    +

    28 single-core tasks per job

    +
    #!/bin/bash -l
    +### Request as many tasks as cores available on a single node for 3 hours
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=28  # On iris; for aion, use --ntasks-per-node=128
    +#SBATCH -c 1
    +#SBATCH --time=0-03:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load <...>
    +# [...]
    +
    + +
    +
    +
    +
    +

    7 multithreaded tasks per job (4 threads each)

    +
    #!/bin/bash -l
    +### Request as many tasks as cores available on a single node for 3 hours
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=7  # On iris; for aion, use --ntasks-per-node=32
    +#SBATCH -c 4
    +#SBATCH --time=0-03:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load <...>
    +# [...]
    +
    + +
    +
    +
    +

    Embarrassingly Parallel Tasks

    +

    +

For many users, the reason to consider (or be encouraged) to offload their computations to a (remote) HPC or Cloud facility is tied to the limits reached by their computing devices (laptop or workstation). It is generally motivated by time constraints:

    +
    +

    "My computations take several hours/days to complete. On an HPC, it will last a few minutes, no?"

    +
    +

    or search-space explorations:

    +
    +

    "I need to check my application against a huge number of input pieces (files) - it worked on a few of them locally but takes ages for a single check. How to proceed on HPC?"

    +
    +

In most cases, your favorite Java application or custom R/Python scripts, iterated over multiple input conditions, are inherently SERIAL: they are able to use only one core when executed. You thus deal with what is often called a Bag of (independent) tasks, also referred to as embarrassingly parallel tasks.

    +

In this case, you MUST NOT overload the job scheduler with a large number of small (single-core) jobs. Instead, you should use GNU Parallel, which permits the effective management of such tasks in a way that optimizes both the resource allocation and the completion time.

    +

More specifically, GNU Parallel is a tool for executing tasks in parallel, typically on a single machine. When coupled with the Slurm command srun, parallel becomes a powerful way of distributing a set of tasks amongst a number of workers. This is particularly useful when the number of tasks is significantly larger than the number of available workers (i.e. $SLURM_NTASKS), and each task is independent of the others.
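The generic ULHPC launcher referenced below is the recommended starting point; for illustration only, a stripped-down sketch of the underlying pattern (run_task and tasklist.txt are placeholder names, and GNU Parallel may need to be loaded as a module on the clusters):

#!/bin/bash -l
#SBATCH -N 1
#SBATCH --ntasks-per-node=128   # 28 on iris
#SBATCH -c 1
#SBATCH --time=0-02:00:00
#SBATCH -p batch

print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
module purge || print_error_and_exit "No 'module' command"
# module load [...]   # adapt: load GNU parallel and your task's dependencies if needed

# One argument per line in tasklist.txt; at most $SLURM_NTASKS tasks run at once,
# each started as a single-core job step via srun
parallel -j ${SLURM_NTASKS} "srun -n1 -c1 --exact ./run_task {}" :::: tasklist.txt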

    +

    ULHPC Tutorial: GNU Parallel launcher for Embarrassingly Parallel Jobs

    +

Luckily, we have prepared a generic GNU Parallel launcher that should be straightforward to adapt to your own workflow following our tutorial:

    +
      +
1. Create a dedicated script run_<task> responsible for running your Java/R/Python tasks, taking as argument the parameters of each run. You can take inspiration from run_stressme for instance.
        +
• test it in an interactive job
      • +
      +
    2. +
    3. +

      rename the generic launcher launcher.parallel.sh to launcher_<task>.sh,

      +
        +
      • enable #SBATCH --dependency singleton
      • +
      • set the jobname
      • +
      • change TASK to point to the absolute path to run_<task> script
      • +
      • set TASKLISTFILE to point to a files with the parameters to pass to your script for each task
      • +
• eventually adapt the #SBATCH --ntasks-per-node [...] and #SBATCH -c [...] options to match your needs AND the hardware configuration of a single node (28 cores on iris, 128 cores on Aion) -- see guidelines
      • +
      +
    4. +
    5. +

      test a batch run -- stick to a single node to take the best out of one full node.

      +
    6. +
    +

    Serial Task script Launcher

    +
    +
    +
    #!/bin/bash -l     # <--- DO NOT FORGET '-l'
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 1
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +# C/C++: module load toolchain/intel # OR: module load toolchain/foss
    +# Java:  module load lang/Java/1.8
    +# Ruby/Perl/Rust...:  module load lang/{Ruby,Perl,Rust...}
    +# /!\ ADAPT TASK variable accordingly - absolute path to the (serial) task to be executed
    +TASK=${TASK:=${HOME}/bin/app.exe}
    +OPTS=$*
    +
    +srun ${TASK} ${OPTS}
    +
    + +
    +
    +
    +
    +
    #!/bin/bash -l
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 1
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +# Python 3.X by default (also on system)
    +module load lang/Python
    +# module load lang/SciPy-bundle
    +# and/or: activate the virtualenv <name> you previously generated with
    +#     python -m venv <name>
    +source ./<name>/bin/activate
    +OPTS=$*
    +
    +srun python [...] ${OPTS}
    +
    + +
    +
    +
    +
    +
    #!/bin/bash -l
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 28
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load lang/R
    +export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
    +OPTS=$*
    +
    +srun Rscript <script>.R ${OPTS}  |& tee job_${SLURM_JOB_NAME}.out
    +
    + +
    +
    +
    +

    ... but why? just use Python or R.

    +
    +
    #!/bin/bash -l
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 28
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load math/MATLAB
    +
    +matlab -nodisplay -nosplash < INPUTFILE.m > OUTPUTFILE.out
    +
    + +
    +
    +
    +

    Specialized BigData/GPU launchers

    +
    +

    BigData/[Large-]memory single-core tasks

    +
    #!/bin/bash -l
    +### Request one sequential task requiring half the memory of a regular iris node for 1 day
    +#SBATCH -J MyLargeMemorySequentialJob       # Job name
    +#SBATCH --mail-user=Your.Email@Address.lu   # mail me ...
    +#SBATCH --mail-type=end,fail                # ... upon end or failure
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 1
    +#SBATCH --mem=64GB         # if above 112GB: consider bigmem partition (USE WITH CAUTION)
    +#SBATCH --time=1-00:00:00
    +#SBATCH -p batch           # if above 112GB: consider bigmem partition (USE WITH CAUTION)
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load <...>
    +# [...]
    +
    + +
    +
    +

    AI/DL task tasks

    +
    #!/bin/bash -l
    +### Request one GPU tasks for 4 hours - dedicate 1/4 of available cores for its management
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 7
    +#SBATCH -G 1
    +#SBATCH --time=04:00:00
    +#SBATCH -p gpu
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load <...>    # USE apps compiled against the {foss,intel}cuda toolchain !
    +# Ex: 
    +# module load numlib/cuDNN
    +
    +# This should report a single GPU (over 4 available per gpu node)
    +nvidia-smi
    +# [...]
    +srun [...]
    +
    + +
    +

    pthreads/OpenMP Launcher

    +
    +

    Always set OMP_NUM_THREADS to match ${SLURM_CPUS_PER_TASK:-1}

    +

You MUST enforce the use of -c <threads> in your launcher to ensure the variable $SLURM_CPUS_PER_TASK exists within your launcher scripts. This is the appropriate value to set for OMP_NUM_THREADS, with a default of 1 as an extra safety, which can be obtained with the following assignment:

    +
    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
    +
    + +
    +
    +
    +

    Single node, threaded (pthreads/OpenMP) application launcher

    +
    #!/bin/bash -l
    +# Single node, threaded (pthreads/OpenMP) application launcher, using all 128 cores of an aion cluster node
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 128
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load toolchain/foss
    +
    +export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
    +OPTS=$*
    +
    +srun /path/to/your/threaded.app ${OPTS}
    +
    + +
    +
    +
    +
    +

    Single node, threaded (pthreads/OpenMP) application launcher

    +
    #!/bin/bash -l
    +# Single node, threaded (pthreads/OpenMP) application launcher, using all 28 cores of an iris cluster node:
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 28
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load toolchain/foss
    +
    +export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
    +OPTS=$*
    +
    +srun /path/to/your/threaded.app ${OPTS}
    +
    + +
    +
    +
    +

    MPI

    +

    Intel MPI Launchers

    +
    +

    Official Slurm guide for Intel MPI

    +
    +
    +
    +

    Multi-node parallel application IntelMPI launcher

    +

    #!/bin/bash -l
    +# Multi-node parallel application IntelMPI launcher, using 256 MPI processes
    +
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node 128    # MPI processes per node
    +#SBATCH -c 1
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load toolchain/intel
    +OPTS=$*
    +
    +srun -n $SLURM_NTASKS /path/to/your/intel-toolchain-compiled-application ${OPTS}
    +
+Remember to use si-bigmem to request an interactive job when testing your script.

    +
    +
    +
    +
    +

    Multi-node parallel application IntelMPI launcher

    +

    #!/bin/bash -l
    +# Multi-node parallel application IntelMPI launcher, using 56 MPI processes
    +
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node 28    # MPI processes per node
    +#SBATCH -c 1
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load toolchain/intel
    +OPTS=$*
    +
    +srun -n $SLURM_NTASKS /path/to/your/intel-toolchain-compiled-application ${OPTS}
    +
+Remember to use si-gpu to request an interactive job when testing your script on a GPU node.

    +
    +
    +
    +

    You may want to use PMIx as MPI initiator -- use srun --mpi=list to list the available implementations (default: pmi2), and srun --mpi=pmix[_v3] [...] to use PMIx.

    +

    OpenMPI Slurm Launchers

    +
    +

    Official Slurm guide for Open MPI

    +
    +
    +
    +

    Multi-node parallel application OpenMPI launcher

    +
    #!/bin/bash -l
    +# Multi-node parallel application OpenMPI launcher, using 256 MPI processes
    +
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node 128    # MPI processes per node
    +#SBATCH -c 1
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load toolchain/foss
    +module load mpi/OpenMPI
    +OPTS=$*
    +
    +srun -n $SLURM_NTASKS /path/to/your/foss-toolchain-openMPIcompiled-application ${OPTS}
    +
    + +
    +
    +
    +
    +

    Multi-node parallel application OpenMPI launcher

    +
    #!/bin/bash -l
    +# Multi-node parallel application OpenMPI launcher, using 56 MPI processes
    +
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node 28    # MPI processes per node
    +#SBATCH -c 1
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load toolchain/foss
    +module load mpi/OpenMPI
    +OPTS=$*
    +
    +srun -n $SLURM_NTASKS /path/to/your/foss-toolchain-openMPIcompiled-application ${OPTS}
    +
    + +
    +
    +
    +

    Hybrid Intel MPI+OpenMP Launcher

    +
    +
    +

    Multi-node hybrid parallel application IntelMPI/OpenMP launcher

    +
    #!/bin/bash -l
    +# Multi-node hybrid application IntelMPI+OpenMP launcher, using 16 threads per socket(CPU) on 2 nodes (256 cores):
    +
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node   8    # MPI processes per node
    +#SBATCH --ntasks-per-socket 1    # MPI processes per (virtual) processor
    +#SBATCH -c 16
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load toolchain/intel
    +export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
    +OPTS=$*
    +
    +srun -n $SLURM_NTASKS /path/to/your/parallel-hybrid-app ${OPTS}
    +
    + +
    +
    +
    +
    +

    Multi-node hybrid parallel application IntelMPI/OpenMP launcher

    +
    #!/bin/bash -l
    +# Multi-node hybrid application IntelMPI+OpenMP launcher, using 14 threads per socket(CPU) on 2 nodes (56 cores):
    +
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node   2    # MPI processes per node
    +#SBATCH --ntasks-per-socket 1    # MPI processes per processor
    +#SBATCH -c 14
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load toolchain/intel
    +export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
    +OPTS=$*
    +
    +srun -n $SLURM_NTASKS /path/to/your/parallel-hybrid-app ${OPTS}
    +
    + +
    +
    +
    +

    Hybrid OpenMPI+OpenMP Launcher

    +
    +
    +

    Multi-node hybrid parallel application OpenMPI/OpenMP launcher

    +
    #!/bin/bash -l
    +# Multi-node hybrid application OpenMPI+OpenMP launcher, using 16 threads per socket(CPU) on 2 nodes (256 cores):
    +
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node   8    # MPI processes per node
    +#SBATCH --ntasks-per-socket 1    # MPI processes per processor
    +#SBATCH -c 16
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load toolchain/foss
    +module load mpi/OpenMPI
    +export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
    +OPTS=$*
    +
    +srun -n $SLURM_NTASKS /path/to/your/parallel-hybrid-app ${OPTS}
    +
    + +
    +
    +
    +
    +

    Multi-node hybrid parallel application OpenMPI/OpenMP launcher

    +
    #!/bin/bash -l
    +# Multi-node hybrid application OpenMPI+OpenMP launcher, using 14 threads per socket(CPU) on 2 nodes (56 cores):
    +
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node   2    # MPI processes per node
    +#SBATCH --ntasks-per-socket 1    # MPI processes per processor
    +#SBATCH -c 14
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load toolchain/foss
    +module load mpi/OpenMPI
    +export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
    +OPTS=$*
    +
    +srun -n $SLURM_NTASKS /path/to/your/parallel-hybrid-app ${OPTS}
    +
    + +
    +
    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/slurm/partitions/index.html b/slurm/partitions/index.html new file mode 100644 index 00000000..c4889afa --- /dev/null +++ b/slurm/partitions/index.html @@ -0,0 +1,3065 @@ + + + + + + + + + + + + + + + + + + + + + + + + Partition/Queues - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    ULHPC Slurm Partitions 2.0

    +

In Slurm, multiple nodes can be grouped into partitions, which are sets of nodes aggregated by shared characteristics or objectives, with associated limits for wall-clock time, job size, etc. These limits are hard limits for the jobs and cannot be overruled.

    +

    To select a given partition with a Slurm command, use the -p <partition> option:

    +
    srun|sbatch|salloc|sinfo|squeue... -p <partition> [...]
    +
    + +

    You will find on ULHPC resources the following partitions (mostly matching the 3 types of computing resources)

    +
      +
    • batch is intended for running parallel scientific applications as passive jobs on "regular" nodes (Dual CPU, no accelerators, 128 to 256 GB of RAM)
    • +
    • gpu is intended for running GPU-accelerated scientific applications as passive jobs on "gpu" nodes (Dual CPU, 4 Nvidia accelerators, 768 GB RAM)
    • +
    • bigmem is dedicated for memory intensive data processing jobs on "bigmem" nodes (Quad-CPU, no accelerators, 3072 GB RAM)
    • +
    • interactive: a floating partition intended for quick interactive jobs, allowing for quick tests and compilation/preparation work.
        +
• this is the only partition crossing all types of nodes (thus floating).
      • +
      • use si, si-gpu or si-bigmem to submit an interactive job on either a regular, gpu or bigmem node
      • +
      +
    • +
    +

    Aion

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    AION (type)#Nodes (cores/node)Default/MaxTimeMaxNodesPriorityTier
    interactive (floating)35430min - 2h2100
    batch (default)354 (128c)2h - 48h641
    +

    Iris

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    IRIS (type)#Nodes (cores/n)Default/MaxTimeMaxNodesPriorityTier
    interactive (floating)19630min - 2h2100
    batch (default)168 (28c)2h - 48h641
    gpu24 (28c)2h - 48h41
    bigmem4 (112c)2h - 48h11
    +

    Queues/Partitions State Information

    +

    For detailed information about all available partitions and their definition/limits: +

    scontrol show partitions [name]
    +

    +

    Partition load status

    +

You can of course use squeue -p <partition> to list the jobs currently scheduled on a given partition <partition>.
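For example (a small sketch):

# Running and pending jobs on the gpu partition, long output format
squeue -p gpu -t RUNNING,PENDING -l
# Your own jobs on the batch partition
squeue -p batch -u $USER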

    +

As part of the custom ULHPC Slurm helpers defined in /etc/profile.d/slurm.sh, the following commands have been made available to facilitate the review of the current load of the partitions.

    + + + + + + + + + + + + + + + + + + + + + +
    CommandDescription
    irisstat, aionstatreport cluster status (utilization, partition and QOS live stats)
    pload [-a] i/b/g/mOverview of the Slurm partition load
    listpartitionjobs <part>List jobs (and current load) of the slurm partition <part>
    +
    +

    Partition load with pload

    +
    $ pload -h
    +Usage: pload [-a] [--no-header] <partition>
    + => Show current load of the slurm partition <partition>, eventually without header
    +    <partition> shortcuts: i=interactive b=batch g=gpu m=bigmem
    + Options:
    +   -a: show all partition
    +$ pload -a
    +  Partition  CPU Max  CPU Used  CPU Free     Usage[%]
    +      batch     4704      4223       481       89.8%
    +        gpu      672       412       260       61.3% GPU: 61/96 (63.5%)
    +     bigmem      448       431        17       96.2%
    +
    + +
    +

    Partition Limits

    +

    At partition level, only the following limits can be enforced:

    +
      +
    • DefaultTime: Default time limit
    • +
    • MaxNodes: Maximum number of nodes per job
    • +
    • MinNodes: Minimum number of nodes per job
    • +
    • MaxCPUsPerNode: Maximum number of CPUs job can be allocated on any node
    • +
    • MaxMemPerCPU/Node: Maximum memory job can be allocated on any CPU or node
    • +
    • MaxTime: Maximum length of time user's job can run
    • +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/slurm/qos/index.html b/slurm/qos/index.html new file mode 100644 index 00000000..e84e9198 --- /dev/null +++ b/slurm/qos/index.html @@ -0,0 +1,3018 @@ + + + + + + + + + + + + + + + + + + + + + + + + Quality of Service (QOS) - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    ULHPC Slurm QOS 2.0

    +

Quality of Service or QOS is used to constrain or modify the characteristics that a job can have. This could come in the form of specifying a QoS to request a longer run time or a higher-priority queue for a given job.

    +

    To select a given QOS with a Slurm command, use the --qos <qos> option:

    +
    srun|sbatch|salloc|sinfo|squeue... [-p <partition>] --qos <qos> [...]
    +
    + +
    +

The default QoS of your jobs depends on your account and affiliation. Normally, the --qos <qos> directive does not need to be set for most jobs.

    +
    +

In general, we favor cross-partition QOS, mainly tied to priority level (low \rightarrow urgent). A special preemptible QOS exists for best-effort jobs and is named besteffort.
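For example, submitting under a non-default QOS could look like the following sketch (assuming an existing launcher.sh; adapt partition, walltime and QOS to the associations you have actually been granted):

# Long-running job: the long QOS allows an extended walltime (here 7 days)
sbatch -p batch --qos long --time=7-00:00:00 ./launcher.sh
# Preemptible best-effort job; --requeue lets it be restarted if preempted
sbatch -p batch --qos besteffort --requeue ./launcher.sh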

    +

    Available QOS

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    QOS (partition)PrioGrpTRESMaxTresPJMaxJobPUMaxWall
    besteffort (*)150
    low (*)102
    normal (*)10050
    long (*)100node=6node=2414-00:00:00
    debug (interactive)150node=810
    high (*)20050
    urgent (*)1000100
    + + +
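As an illustration of the table above, a long-running job could be submitted as follows (the launcher script name and the 5-day walltime are placeholders; --requeue is generally advisable for preemptible best-effort jobs so that they are re-queued rather than killed when preempted):

$ sbatch -p batch --qos long --time 5-00:00:00 launcher.sh   # up to 14-00:00:00, at most 2 nodes per job
$ sbatch -p batch --qos besteffort --requeue launcher.sh     # preemptible best-effort job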

    List QOS Limits

    + + +

    Use the sqos utility function to list the existing QOS limits.

    +
    +

    List current ULHPC QOS limits with sqos

    +
    $ sqos
    +\# sacctmgr show qos  format="name%20,preempt,priority,GrpTRES,MaxTresPerJob,MaxJobsPerUser,MaxWall,flags"
    +                Name    Preempt   Priority       GrpTRES       MaxTRES MaxJobsPU     MaxWall                Flags
    +-------------------- ---------- ---------- ------------- ------------- --------- ----------- --------------------
    +              normal besteffort        100                                   100                      DenyOnLimit
    +          besteffort                     1                                   300                        NoReserve
    +                 low besteffort         10                                     4                      DenyOnLimit
    +                high besteffort        200                                    50                      DenyOnLimit
    +              urgent besteffort       1000                                   100                      DenyOnLimit
    +               debug besteffort        150        node=8                      10                      DenyOnLimit
    +                long besteffort        100        node=6        node=2         4 14-00:00:00 DenyOnLimit,Partiti+
    +
    + +
    + + +
    +

    What are the possible limits set on ULHPC QOS?

    +

At the QOS level, the following elements are combined to define the resource limits of our QOS:

• Limits on Trackable RESources (TRES -- a resource (cpu, node, etc.) tracked for usage or used to enforce limits against), in particular:
  • GrpTRES: the total count of TRES that can be used at any given time by jobs running in the QOS. If this limit is reached, new jobs will be queued but only allowed to run after resources have been relinquished from this group.
  • MaxTresPerJob: the maximum size in TRES (cpu, nodes, ...) any given job can have in the QOS.
• MaxJobsPerUser: the maximum number of jobs a user can have running at a given time.
• MaxWall[DurationPerJob]: the maximum wall clock time any individual job can run for in the given QOS.
    +
    +

As explained in the Limits section, there are basically three layers of Slurm limits enforced on top of the baseline (no limit), from least to most priority:

1. None (no limit)
2. Partition limits
3. Account associations: Root/Cluster -> Account (ascending the hierarchy) -> User
4. Job/Partition QOS
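To see the association-level limits (layer 3 above) that apply to your own user, you can query the Slurm accounting database; a minimal sketch (the selected format fields are just an example):

$ sacctmgr show associations where user=$USER format=cluster,account,user,qos%20,maxjobs,maxwall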
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/build/index.html b/software/build/index.html new file mode 100644 index 00000000..084993aa --- /dev/null +++ b/software/build/index.html @@ -0,0 +1,3235 @@ + + + + + + + + + + + + + + + + + + + + + + + + Compiling/building your own software - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + + + + +
    +
    + + + + + + + + + + +

    Compiling/Building your own software

    +

We try to provide within the ULHPC software sets the most widely used applications among our users. It may however happen that a given software you expect to use is either missing from the available software sets, or provided in a version that is not recent enough.

    +

    In that case, the RECOMMENDED approach is to rely on Easybuild to EXTEND the available software set. +Below are guidelines to support that case.

    +

Alternatively, you can of course follow the installation guidelines provided on the software website to compile it yourself. In that case, you MUST rely on the provided toolchains and compilers.

    +
    +

In all cases, NEVER compile or build software from the ULHPC frontends! Always perform these actions from the target compute node, either reserved within an interactive job or through a passive (batch) submission.

    +
    +

    Missing or Outdated Software

    +

    You should first search if an existing Easyconfig exists for the software:

    +
    # Typical check for user on ULHPC clusters
    +$ si    # get an interactive job - use 'si-gpu' for GPU nodes on iris
    +$ module load tools/EasyBuild
    +$ eb -S <name>
    +
    + +

It should match the available software set versions summarized below:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    NameType2019b (legacy)2020a2020b (prod)2021a2021b (devel)
    GCCCorecompiler8.3.09.3.010.2.010.3.011.2.0
    fosstoolchain2019b2020a2020b2021a2021b
    inteltoolchain2019b2020a2020b2021a2021b
    binutils2.322.342.352.362.37
    Python3.7.4 (and 2.7.16)3.8.2 (and 2.7.18)3.8.63.9.23.9.6
    LLVMcompiler9.0.110.0.111.0.011.1.012.0.1
    OpenMPIMPI3.1.44.0.34.0.54.1.14.1.2
    + + +

    You will then be confronted to the following cases.

    +

An existing easyconfig exists for the target toolchain version

    +

You're lucky, and this is actually very likely to happen (which justifies relying on the upstream Easyconfigs)

    +
      +
    • Typical Example:
        +
      • CMake-<version>-GCCcore-<gccversion>.eb: depends on GCCcore, thus common to both foss and intel. The same happens with GCC
      • +
      • Go-<version>.eb (no dependency on any toolchain)
      • +
      • Boost-<version>-{gompi,iimpi}-<toolchainversion>.eb, derived toolchains, compliant with foss (resp. intel) ones;
      • +
      • GDAL-<version>-{foss,intel}-<toolchainversion>-Python-<pythonversion>.eb
      • +
      +
    • +
    +

In that case, you MUST test the build in your home directory or in a shared project using the resif-load-{home,project}-swset-{prod,devel} helpers to set a consistent environment for your builds, compliant with the ULHPC software sets layout (in particular with regard to the $EASYBUILD_PREFIX and $MODULEPATH environment variables). See below for building instructions.

    +

    An outdated easyconfig exists

    +

Then the easiest way is to adapt the existing easyconfig file for the target software version AND one of the available toolchain versions. You may also want to ensure that no ongoing Pull-Request is already dealing with the version you're looking for.

    +

Assuming you're looking for the software <name> (with first letter <letter> in lower case; for instance if <name>=NWChem, then <letter>=n), first copy the existing easyconfig file to a convenient place

    +
    # Create host directory for your custom easyconfigs
    +$ mkdir -p ~/easyconfigs/<letter>/<name>
    +
    +$ eb -S <name>` # find the complete path to the easyconfig file
    +CFGS1=[...]/path/to/easyconfigs
    +* $CFGS1/<letter>/<name>/<name>-<oldversion>[...].eb
+* $CFGS1/<letter>/<name>/<name>-[...].patch     # Possible patch file(s)
    +
    +# copy/paste the definition of the CFGS1 variable (top line)
    +CFGS1=[...]/path/to/easyconfigs
    +# copy the eb file
    +cp $CFGS1/<letter>/<name>/<name>-<oldversion>[...].eb ~/easyconfigs/<letter>/<name>
    +
    + +

Now, if needed, check the software website for the most up-to-date released version <version> of the software. Adapt the filename of the copied easyconfig to match the target version / toolchain

    +
    cd ~/easyconfigs/<letter>/<name>
    +mv <name>-<oldversion>[...].eb <name>-<version>[...].eb
    +
    + +
    +

    Example

    +
cd ~/easyconfigs/n/NWChem
    +mv NWChem-7.0.0-intel-2019b-Python-3.7.4.eb NWChem-7.0.2-intel-2021b.eb  # Target 2021b intel toolchain, no more need for python suffix
    +
    + +
    +

    Now you shall edit the content of the easyconfig -- you'll typically have to adapt the version of the dependencies and the checksum(s) to match the static versions set for the target toolchain, enforce https urls etc.

    +

Below is a past, more complex example illustrating the adaptation done for GDB

    +
    --- g/GDB/GDB-8.3-GCCcore-8.2.0-Python-3.7.2.eb 2020-03-31 12:17:03.000000000 +0200
    ++++ g/GDB/GDB-9.1-GCCcore-8.3.0-Python-3.7.4.eb 2020-05-08 15:49:41.000000000 +0200
    +@@ -1,31 +1,36 @@
    + easyblock = 'ConfigureMake'
    +
    + name = 'GDB'
    +-version = '8.3'
    ++version = '9.1'
    + versionsuffix = '-Python-%(pyver)s'
    +
    +-homepage = 'http://www.gnu.org/software/gdb/gdb.html'
    ++homepage = 'https://www.gnu.org/software/gdb/gdb.html'
    + description = "The GNU Project Debugger"
    +
    +-toolchain = {'name': 'GCCcore', 'version': '8.2.0'}
    ++toolchain = {'name': 'GCCcore', 'version': '8.3.0'}
    +
    + source_urls = [GNU_SOURCE]
    + sources = [SOURCELOWER_TAR_XZ]
    +-checksums = ['802f7ee309dcc547d65a68d61ebd6526762d26c3051f52caebe2189ac1ffd72e']
    ++checksums = ['699e0ec832fdd2f21c8266171ea5bf44024bd05164fdf064e4d10cc4cf0d1737']
    +
    + builddependencies = [
    +-    ('binutils', '2.31.1'),
    +-    ('texinfo', '6.6'),
    ++    ('binutils', '2.32'),
    ++    ('texinfo', '6.7'),
    + ]
    +
    + dependencies = [
    +     ('zlib', '1.2.11'),
    +     ('libreadline', '8.0'),
    +     ('ncurses', '6.1'),
    +-    ('expat', '2.2.6'),
    +-    ('Python', '3.7.2'),
    ++    ('expat', '2.2.7'),
    ++    ('Python', '3.7.4'),
    + ]
    +
    ++preconfigopts = "mkdir obj && cd obj && "
    ++configure_cmd_prefix = '../'
    ++prebuildopts = "cd obj && "
    ++preinstallopts = prebuildopts
    ++
    + configopts = '--with-system-zlib --with-python=$EBROOTPYTHON/bin/python --with-expat=$EBROOTEXPAT '
    + configopts += '--with-system-readline --enable-tui --enable-plugins --disable-install-libbfd '
    +
    + +

Note on dependency versions: typically, as in the above example, the versions to use for the dependencies are not obvious to guess (e.g. texinfo, expat, etc.) and you need to be aware of the matching toolchain/GCC/binutils versions for the available prod or devel software sets recalled above -- use eb -S <dependency> to find the appropriate versions.
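For instance, a quick way to spot the dependency version matching a given toolchain (GCCcore 10.2.0, i.e. the 2020b software set, is used here purely as an example):

$ eb -S expat   | grep GCCcore-10.2.0
$ eb -S texinfo | grep GCCcore-10.2.0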

    +

    None (or only very old/obsolete) easyconfigs are suggested

    +

Don't panic, it simply means that the official repositories do not hold any recent recipe (easyconfig) for the considered software. You may still find a pending Pull-request addressing the software you're looking for.

    +

Otherwise, you can either try to create a new easyconfig file, or simply follow the installation guides for the considered software to build it.

    +

    Using Easybuild to Build software in your Home

    +

    See also Technical documentation to better understand the Easybuild configuration.

    +
    +

If upon dry-run builds (eb -Dr [...]) you find that most dependencies are NOT satisfied, you have likely made an error and may be trying to build software against a toolchain/software set not supported either as prod or devel.

    +
    +
    # BETTER work in a screen or tmux session ;)
    +$ si[-gpu] [-c <threads>]   # get an interactive job
    +$ module load tools/EasyBuild
    +# /!\ IMPORTANT: ensure EASYBUILD_PREFIX is correctly set to [basedir]/<cluster>/<environment>/<arch>
    +#                and that MODULEPATH is prefixed accordingly
    +$ resif-load-home-swset-{prod | devel}  # adapt environment
    +$ eb -S <softwarename>   # confirm <filename>.eb == <softwarename>-<version>[-<toolchain>][-<suffix>].eb
    +$ eb -Dr <filename>.eb   # check dependencies, normally most MUST be satisfied
    +$ eb -r  <filename>.eb
    +
    + +

    From that point, the compiled software and associated module is available in your home and can be used as follows in launchers etc. -- see ULHPC launcher Examples

    +
    #!/bin/bash -l # <--- DO NOT FORGET '-l' to facilitate further access to ULHPC modules
    +#SBATCH -p <partition>
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node <#sockets * s>
    +#SBATCH --ntasks-per-socket <s>
    +#SBATCH -c <thread>
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +# Safeguard for NOT running this launcher on access/login nodes
    +module purge || print_error_and_exit "No 'module' command"
    +
    +resif-load-home-swset-prod  # OR  resif-load-home-swset-devel
    +module load <softwarename>[/<version>]
    +[...]
    +
    + +

    Using Easybuild to Build software in the project

    +

Similarly to the home builds above, you should repeat the procedure, this time using the helper script resif-load-project-swset-{prod | devel}. Don't forget the Project Data Management instructions: to avoid quota issues, you have to use sg

    +
    # BETTER work in a screen or tmux session ;)
    +$ si[-gpu] [-c <threads>]   # get an interactive job
    +$ module load tools/EasyBuild
    +# /!\ IMPORTANT: ensure EASYBUILD_PREFIX is correctly set to [basedir]/<cluster>/<environment>/<arch>
    +#                and that MODULEPATH is prefixed accordingly
    +$ resif-load-project-swset-{prod | devel} $PROJECTHOME/<project> # /!\ ADAPT environment and <project> accordingly
    +$ sg <project> -c "eb -S <softwarename>"   # confirm <filename>.eb == <softwarename>-<v>-<toolchain>.eb
    +$ sg <project> -c "eb -Dr <filename>.eb"   # check dependencies, normally most MUST be satisfied
    +$ sg <project> -c "eb -r  <filename>.eb"
    +
    + +

From that point, the compiled software and associated module are available in the project directory and can be used by all project members as follows in launchers etc. -- see ULHPC launcher Examples

    +
    #!/bin/bash -l # <--- DO NOT FORGET '-l' to facilitate further access to ULHPC modules
    +#SBATCH -p <partition>
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node <#sockets * s>
    +#SBATCH --ntasks-per-socket <s>
    +#SBATCH -c <thread>
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +# Safeguard for NOT running this launcher on access/login nodes
    +module purge || print_error_and_exit "No 'module' command"
    +
    +resif-load-project-swset-prod  $PROJECTHOME/<project> # OR resif-load-project-swset-devel $PROJECTHOME/<project>
    +module load <softwarename>[/<version>]
    +[...]
    +
    + +

    Contribute back to Easybuild

    +

If you developed new easyconfig(s), you are expected to contribute them back to the Easybuilders community! Consider creating a Pull-Request. You can even do it from the command line, assuming you have set up your GitHub integration. On iris or aion, you will likely need to install the possibly-insecure alternate keyring package keyrings.alt -- see https://pypi.org/project/keyring/

    +
    # checking code style - see https://easybuild.readthedocs.io/en/latest/Code_style.html#code-style
    +eb --check-contrib <ebfile>
    +eb --new-pr <ebfile>
    +
    + +
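Before opening the pull request, you may want to verify that your GitHub integration is functional; a minimal sketch (your GitHub login is a placeholder, option names as provided by recent EasyBuild versions):

$ eb --github-user=<your_github_login> --check-github   # should report a working token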

You can also consider using the script PR-create provided as part of the RESIF 3 project.

    +

    Once the pull request is merged, you can inform the ULHPC team to consider adding the submitted Easyconfig as part of the ULHPC bundles and see it deployed within the next ULHPC software set release.

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/cae/abaqus/index.html b/software/cae/abaqus/index.html new file mode 100644 index 00000000..24d684f2 --- /dev/null +++ b/software/cae/abaqus/index.html @@ -0,0 +1,3195 @@ + + + + + + + + + + + + + + + + + + + + + + + + Abaqus - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Abaqus

    + +

    +

    The Abaqus Unified FEA +product suite offers powerful and complete solutions +for both routine and sophisticated engineering problems covering a vast +spectrum of industrial applications. In the automotive industry engineering +work groups are able to consider full vehicle loads, dynamic vibration, +multibody systems, impact/crash, nonlinear static, thermal coupling, and +acoustic-structural coupling using a common model data structure and integrated +solver technology. Best-in-class companies are taking advantage of +Abaqus Unified FEA to consolidate their processes and tools, +reduce costs and inefficiencies, and gain a competitive advantage

    +

    Available versions of Abaqus in ULHPC

    +

    To check available versions of Abaqus at ULHPC, type module spider abaqus. +It will list the available versions with the following format: +

    cae/ABAQUS/<version>[-hotfix-<hotfix>]
    +

    +
    +

    Don't forget to unset SLURM_GTIDS

    +

    You MUST unset the SLURM environment variable SLURM_GTIDS for both interactive/GUI and batch jobs +

    unset SLURM_GTIDS
    +
+Failure to do so will cause Abaqus to get stuck, because the MPI library that Abaqus ships with does not support the Slurm scheduler.

    +
    +

    When using a general compute node for Abaqus 2021, please run:

    +
      +
    • abaqus cae -mesa to launch the GUI without support for hardware-accelerated graphics rendering.
        +
      • the option -mesa disables hardware-accelerated graphics rendering within Abaqus’s GUI.
      • +
      +
    • +
    • For a Non-Graphical execution, use +
      abaqus job=<my_job_name> input=<filename>.inp mp_mode=<mode> cpus=<cores> [gpus=<gpus>] scratch=$SCRATCH memory="<mem>gb"
      +
    • +
    +

    Supported parallel mode

    +

    Abaqus has two parallelization options which are mutually exclusive:

    +
      +
    • +

      MPI (mp_mode=mpi), which is generally preferred since this allows for scaling the job to multiple compute nodes. As for MPI jobs, use -N <nodes> --ntasks-per-node <cores> -c1 upon submission to use:

      +
       abaqus mp_mode=mpi cpu=$SLURM_NTASKS [...]
      +
      + + +
    • +
    • +

      Shared memory / Threads (mp_mode=threads) for single node / multi-threaded executions. Typically use -N1 --ntasks-per-node 1 -c <threads> upon submission to use:

      +
      abaqus mp_mode=threads cpus=${SLURM_CPUS_PER_TASK} [...]
      +
      + + +
    • +
    • +

      Shared memory for single node with GPU(s) / multi-threaded executions (mp_mode=threads). Typically use -N1 -G 1 --ntasks-per-node 1 -c <threads> upon submission on a GPU node to use:

      +
      abaqus mp_mode=threads cpus=${SLURM_CPUS_PER_TASK} gpus=${SLURM_GPUS} [...]
      +
      + + +
    • +
    +

    Abaqus example problems

    +

    Abaqus contains a large number of example problems which can be used to become familiar with Abaqus on the system. These example problems are described in the Abaqus documentation and can be obtained using the abaqus fetch jobs=<name> command.

    +
    +

    For example, after loading the Abaqus module cae/ABAQUS, enter the following at the command line to extract the input file for test problem s4d: +

    abaqus fetch job=s4d
    +
    +This will extract the input file s4d.inp +See also Abaqus performance data.

    +
    +

    Interactive mode

    +

To run Abaqus in interactive mode, please follow these steps:

    +

First (if needed) connect to the ULHPC login node with the -X (or -Y) option:

    +
    +
    ssh -X iris-cluster   # OR on Mac OS: ssh -Y iris-cluster
    +
    + +
    +
    +
    ssh -X aion-cluster   # OR on Mac OS: ssh -Y aion-cluster
    +
    + +
    +
    +

    Then you can reserve an interactive job, for instance with 8 MPI processes. Don't forget to use the --x11 option if you intend to use the GUI.

    +
    $ si --x11 -c 8               # Abaqus mp_mode=threads test
    +# OR
+$ si --x11 --ntasks-per-node 8 # abaqus mp_mode=mpi test
    +
    +# Load the module ABAQUS and needed environment
    +(node)$ module purge
    +(node)$ module load cae/ABAQUS
    +(node)$ unset SLURM_GTIDS   # MANDATORY
    +
    +# /!\ IMPORTANT: You MUST ADAPT the LM_LICENSE_FILE variable to point to YOUR licence server!!!
    +(node)$ export LM_LICENSE_FILE=xyz
    +
    +# Check License server token available
    +(node)$ abaqus licensing lmstat -a
    +abaqus licensing lmstat -a
    +lmutil - Copyright (c) 1989-2019 Flexera. All Rights Reserved.
    +Flexible License Manager status on Wed 4/13/2022 22:39
    +[...]
    +
    + +

    Non-graphical Abaqus

    +

The general format to run a non-graphical execution interactively is then:

    +
    +

    Assuming a job submitted with {sbatch|srun|si...} -N1 -c <threads>: +

    # /!\ ADAPT $INPUTFILE accordingly
    +abaqus job="${SLURM_JOB_NAME}" verbose=2 interactive \
    +    input=${INPUTFILE} \
    +    cpus=${SLURM_CPUS_PER_TASK} mp_mode=threads
    +

    +
    +
    +

    Assuming a job submitted with {sbatch|srun|si...} -N <N> --ntasks-per-node <npn> -c 1: +

    # /!\ ADAPT $INPUTFILE accordingly
    +abaqus job="${SLURM_JOB_NAME}" verbose=2 interactive \
    +    input=${INPUTFILE} \
    +    cpus=${SLURM_NTASKS} mp_mode=mpi
    +

    +
    +
    +

    GUI

    +

    If you want to run the GUI, use: abaqus cae -mesa

    +
    License information

Assuming you have set the variable LM_LICENSE_FILE to point to YOUR licence server, you can check the available licenses and the group you belong to with: +

    abaqus licensing lmstat -a
    +
    +If your server is hosted outside the ULHPC network, you will have to contact the HPC team to adapt the network firewalls to allow the connection towards your license server.

    +
    +

Use the following options to suspend and resume a simulation: +

    # /!\ ADAPT <jobname> accordingly:
    +abaqus job=<jobname> suspend
    +abaqus job=<jobname> resume
    +

    +

    Batch mode

    +
    +
    #!/bin/bash -l                # <--- DO NOT FORGET '-l'
    +#SBATCH -J <jobname>
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 4                  # /!\ ADAPT accordingly
    +#SBATCH --time=0-03:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load cae/ABAQUS
    +# export LM_LICENSE_FILE=[...]
    +unset SLURM_GTIDS
    +
    +INPUTFILE=s4d.inp
    +[ ! -f "${INPUTFILE}" ] && print_error_and_exit "Unable to find input file ${INPUTFILE}"
    +
    +abaqus job="${SLURM_JOB_NAME}" verbose=2 interactive \
    +    input=${INPUTFILE} cpus=${SLURM_CPUS_PER_TASK} mp_mode=threads
    +
    + +
    +
    +
    #!/bin/bash -l                # <--- DO NOT FORGET '-l'
    +#SBATCH -J <jobname>
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node=8  # /!\ ADAPT accordingly
    +#SBATCH -c 1
    +#SBATCH --time=0-03:00:00
    +#SBATCH -p batch
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load cae/ABAQUS
    +# export LM_LICENSE_FILE=[...]
    +unset SLURM_GTIDS
    +
    +INPUTFILE=s4d.inp
    +[ ! -f "${INPUTFILE}" ] && print_error_and_exit "Unable to find input file ${INPUTFILE}"
    +
    +abaqus job="${SLURM_JOB_NAME}" verbose=2 interactive \
    +    input=${INPUTFILE} cpus=${SLURM_NTASKS} mp_mode=mpi
    +
    + +
    +
    +

    May not be supported depending on the software set +

    #!/bin/bash -l                # <--- DO NOT FORGET '-l'
    +#SBATCH -J <jobname>
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 7
    +#SBATCH -G 1
    +#SBATCH --time=0-03:00:00
    +#SBATCH -p gpu
    +
    +print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
    +module purge || print_error_and_exit "No 'module' command"
    +module load cae/ABAQUS
    +# export LM_LICENSE_FILE=[...]
    +unset SLURM_GTIDS
    +
    +INPUTFILE=s4d.inp
    +[ ! -f "${INPUTFILE}" ] && print_error_and_exit "Unable to find input file ${INPUTFILE}"
    +
    +abaqus job="${SLURM_JOB_NAME}" verbose=2 interactive \
    +    input=${INPUTFILE} cpus=${SLURM_CPUS_PER_TASK} gpus=${SLURM_GPUS} mp_mode=threads
    +

    +
    +
    +

    Additional information

    +

To know more about the Abaqus documentation and tutorials, please refer to Abaqus CAE

    +
    Tutorial +
    +
    +

    Tip

    +

    If you find some issues with the instructions above, +please file a support ticket.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/cae/ansys/index.html b/software/cae/ansys/index.html new file mode 100644 index 00000000..6df37b3e --- /dev/null +++ b/software/cae/ansys/index.html @@ -0,0 +1,2982 @@ + + + + + + + + + + + + + + + + + + + + + + + + ANSYS - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    ANSYS

    + +

    +ANSYS offers a comprehensive software suite that spans +the entire range of physics, providing access to virtually any +field of engineering simulation that a design process requires. +Organizations around the world trust Ansys to deliver the best value for +their engineering simulation software investment.

    +

    Available versions of ANSYS in ULHPC

    +

    To check available versions of ANSYS at ULHPC type module spider ansys. +The following versions of ANSYS are available in ULHPC: +

    # Available versions 
    +tools/ANSYS/18.0
    +tools/ANSYS/19.0
    +tools/ANSYS/19.4
    +

    +

    Interactive mode

    +

To run ANSYS in interactive mode, please follow these steps: +

    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11 
    +
    +# Load the required version of ANSYS and needed environment
    +$ module purge
    +$ module load toolchain/intel/2019a
    +$ module load tools/ANSYS/19.4
    +
    +# To launch ANSYS workbench
    +$ runwb2
    +

    +

    Batch mode

    +
    #!/bin/bash -l
    +#SBATCH -J ANSYS-CFX
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --ntasks-per-socket=14
    +#SBATCH -c 1
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Write out the stdout+stderr in a file
    +#SBATCH -o output.txt
    +
    +# Mail me on job start & end
    +#SBATCH --mail-user=myemailaddress@universityname.domain
    +#SBATCH --mail-type=BEGIN,END
    +
    +# To get basic info. about the job
    +echo "== Starting run at $(date)"
    +echo "== Job ID: ${SLURM_JOBID}"
    +echo "== Node list: ${SLURM_NODELIST}"
    +echo "== Submit dir. : ${SLURM_SUBMIT_DIR}"
    +
    +# Load the required version of ANSYS and needed environment
    +module purge
    +module load toolchain/intel/2019a
    +module load tools/ANSYS/19.4
    +
    +# The Input file
    +defFile=Benchmark.def
    +
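+# Build the per-host core-count list ("<hostname>*<ntasks>,...") expected by cfx5solve via -par-dist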
    +MYHOSTLIST=$(srun hostname | sort | uniq -c | awk '{print $2 "*" $1}' | paste -sd, -)
    +echo $MYHOSTLIST
    +cfx5solve -double -def $defFile -start-method "Platform MPI Distributed Parallel" -par-dist $MYHOSTLIST
    +
    + +

    Additional information

    +

ANSYS provides customer support; if you have a license key, you should be able to get all the support and documentation you need.

    +
    +

    Tip

    +

    If you find some issues with the instructions above, +please file a support ticket.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/cae/fds/index.html b/software/cae/fds/index.html new file mode 100644 index 00000000..3a28cd0a --- /dev/null +++ b/software/cae/fds/index.html @@ -0,0 +1,3043 @@ + + + + + + + + + + + + + + + + + + + + + + + + FDS - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    FDS

    + +

    Fire Dynamics Simulator (FDS) is a large-eddy simulation (LES) +code for low-speed flows, with an emphasis on smoke and heat transport from fires.

    +

    Available versions of FDS in ULHPC

    +

To check available versions of FDS at ULHPC type module spider fds. The following versions of FDS are available in ULHPC: +

    # Available versions
    +phys/FDS/6.7.1-intel-2018a
    +phys/FDS/6.7.1-intel-2019a
    +phys/FDS/6.7.3-intel-2019a
    +

    +

    Interactive mode

    +

    To try FDS in the interactive mode, please follow the following steps: +

    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11
    +
    +# Load the required version of FDS and needed environment
    +$ module purge
    +$ module load swenv/default-env/devel
    +$ module load phys/FDS/6.7.3-intel-2019a
    +
    +# Example in fds 
    +$ fds example.fds
    +

    +

    Batch mode

    +

    MPI only:

    +
    #!/bin/bash -l
    +#SBATCH -J FDS-mpi
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --ntasks-per-socket=14
    +#SBATCH -c 1
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Write out the stdout+stderr in a file
    +#SBATCH -o output.txt
    +
    +# Mail me on job start & end
    +#SBATCH --mail-user=myemailaddress@universityname.domain
    +#SBATCH --mail-type=BEGIN,END
    +
    +# To get basic info. about the job
    +echo "== Starting run at $(date)"
    +echo "== Job ID: ${SLURM_JOBID}"
    +echo "== Node list: ${SLURM_NODELIST}"
    +echo "== Submit dir. : ${SLURM_SUBMIT_DIR}"
    +
    +# Load the required version of FDS and needed environment
    +module purge
    +module load swenv/default-env/devel
    +module load phys/FDS/6.7.3-intel-2019a
    +
    +srun fds example.fds
    +
    + +

    MPI+OpenMP (hybrid):

    +
    #!/bin/bash -l
    +#SBATCH -J FDS-hybrid
    +#SBATCH -N 2
+#SBATCH --ntasks-per-node=14   # 14 MPI ranks x 2 threads = 28 cores per iris node
    +#SBATCH --cpus-per-task=2
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Write out the stdout+stderr in a file
    +#SBATCH -o output.txt
    +
    +# Mail me on job start & end
    +#SBATCH --mail-user=myemailaddress@universityname.domain
    +#SBATCH --mail-type=BEGIN,END
    +
    +# To get basic info. about the job
    +echo "== Starting run at $(date)"
    +echo "== Job ID: ${SLURM_JOBID}"
    +echo "== Node list: ${SLURM_NODELIST}"
    +echo "== Submit dir. : ${SLURM_SUBMIT_DIR}"
    +
    +# Load the required version of FDS and needed environment
    +module purge
    +module load swenv/default-env/devel
    +module load phys/FDS/6.7.3-intel-2019a
    +
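+# Suggested addition (not in the original launcher): make the OpenMP thread count
+# per MPI rank explicit for the hybrid binary
+export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
+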
    +srun --cpus-per-task=2 fds_hyb example.fds
    +
    + +

    Additional information

    +

To know more about the FDS documentation and tutorials, please refer to https://pages.nist.gov/fds-smv/manuals.html

    +
    +

    Tip

    +

    If you find some issues with the instructions above, +please file a support ticket.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/cae/fenics/index.html b/software/cae/fenics/index.html new file mode 100644 index 00000000..d008f6c2 --- /dev/null +++ b/software/cae/fenics/index.html @@ -0,0 +1,3110 @@ + + + + + + + + + + + + + + + + + + + + + + + + FEniCS - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    FEniCS

    + +

    +FEniCS is a popular open-source (LGPLv3) computing platform for +solving partial differential equations (PDEs). +FEniCS enables users to quickly translate scientific models +into efficient finite element code. With the high-level +Python and C++ interfaces to FEniCS, it is easy to get started, +but FEniCS offers also powerful capabilities for more +experienced programmers. FEniCS runs on a multitude of +platforms ranging from laptops to high-performance clusters.

    +

    How to access the FEniCS through Anaconda

    +

The following steps describe how to install Anaconda (and then FEniCS) in a directory of your choice. +

    # From your local computer
    +$ ssh -X iris-cluster    # OR ssh -Y iris-cluster on Mac
    +
+# Reserve the node for interactive computation with graphics view (plots)
    +$ si --x11 --ntasks-per-node 1 -c 4
    +# salloc -p interactive --qos debug -C batch --x11 --ntasks-per-node 1 -c 4
    +
    +# Go to scratch directory 
    +$ cds
    +
+# First download the Anaconda3 installer (Anaconda3-2020.07-Linux-x86_64.sh) from the official Anaconda site into this directory
    +/scratch/users/<login> $ chmod +x Anaconda3-2020.07-Linux-x86_64.sh
    +/scratch/users/<login> $ ./Anaconda3-2020.07-Linux-x86_64.sh
    +
    +Do you accept the license terms? [yes|no]
    +yes
    +Anaconda3 will now be installed into this location:
    +/home/users/<login>/anaconda3
    +
    +  - Press ENTER to confirm the location
    +  - Press CTRL-C to abort the installation
    +  - Or specify a different location below
    +
    +# You can choose your path where you want to install it
    +[/home/users/<login>/anaconda3] >>> /scratch/users/<login>/Anaconda3
    +
    +# To activate the anaconda 
    +/scratch/users/<login> $ source /scratch/users/<login>/Anaconda3/bin/activate
    +
    +# Install the fenics in anaconda environment 
    +/scratch/users/<login> $ conda create -n fenicsproject -c conda-forge fenics
    +
    +# Install matplotlib for the visualization 
    +/scratch/users/<login> $ conda install -c conda-forge matplotlib 
    +
+Once you have installed Anaconda, you can activate it at any time by sourcing the activate script from the path where Anaconda has been installed.

    +

    Working example

    +

    Interactive mode

    +
    # From your local computer
    +$ ssh -X iris-cluster      # or ssh -Y iris-cluster on Mac
    +
+# Reserve the node for interactive computation with graphics view (plots)
    +$ si --ntasks-per-node 1 -c 4 --x11
    +# salloc -p interactive --qos debug -C batch --x11 --ntasks-per-node 1 -c 4
    +
    +# Activate anaconda  
    +$ source /${SCRATCH}/Anaconda3/bin/activate
    +
    +# activate the fenicsproject
    +$ conda activate fenicsproject
    +
+# execute the Poisson.py example (you can uncomment the plot lines in the Poisson.py example)
    +$ python3 Poisson.py
    +
    + +

    Batch script

    +
    #!/bin/bash -l                                                                                                 
    +#SBATCH -J FEniCS                                                                                        
    +#SBATCH -N 1
    +###SBATCH -A <project name>
    +###SBATCH --ntasks-per-node=1
    +#SBATCH -c 1
    +#SBATCH --time=00:05:00                                                                      
    +#SBATCH -p batch
    +
    +echo "== Starting run at $(date)"                                                                                             
    +echo "== Job ID: ${SLURM_JOBID}"                                                                                            
    +echo "== Node list: ${SLURM_NODELIST}"                                                                                       
    +echo "== Submit dir. : ${SLURM_SUBMIT_DIR}"
    +
    +# activate the anaconda source 
    +source ${SCRATCH}/Anaconda3/bin/activate
    +
    +# activate the fenicsproject from anaconda 
    +conda activate fenicsproject
    +
    +# execute the poisson.py through python
    +srun python3 Poisson.py  
    +
    + +

    Example (Poisson.py)

    +
    # FEniCS tutorial demo program: Poisson equation with Dirichlet conditions.
    +# Test problem is chosen to give an exact solution at all nodes of the mesh.
    +#  -Laplace(u) = f    in the unit square
    +#            u = u_D  on the boundary
    +#  u_D = 1 + x^2 + 2y^2
    +#    f = -6
    +
    +from __future__ import print_function
    +from fenics import *
    +import matplotlib.pyplot as plt
    +
    +# Create mesh and define function space
    +mesh = UnitSquareMesh(8, 8)
    +V = FunctionSpace(mesh, 'P', 1)
    +
    +# Define boundary condition
    +u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
    +
    +def boundary(x, on_boundary):
    +    return on_boundary
    +
    +bc = DirichletBC(V, u_D, boundary)
    +
    +# Define variational problem
    +u = TrialFunction(V)
    +v = TestFunction(V)
    +f = Constant(-6.0)
    +a = dot(grad(u), grad(v))*dx
    +L = f*v*dx
    +
    +# Compute solution
    +u = Function(V)
    +solve(a == L, u, bc)
    +
    +# Plot solution and mesh
    +#plot(u)
    +#plot(mesh)
    +
    +# Save solution to file in VTK format
    +vtkfile = File('poisson/solution.pvd')
    +vtkfile << u
    +
    +# Compute error in L2 norm
    +error_L2 = errornorm(u_D, u, 'L2')
    +
    +# Compute maximum error at vertices
    +vertex_values_u_D = u_D.compute_vertex_values(mesh)
    +vertex_values_u = u.compute_vertex_values(mesh)
    +import numpy as np
    +error_max = np.max(np.abs(vertex_values_u_D - vertex_values_u))
    +
    +# Print errors
    +print('error_L2  =', error_L2)
    +print('error_max =', error_max)
    +
    +# Hold plot
    +#plt.show()
    +
    + +

    Additional information

    +

FEniCS provides technical documentation, as well as several communication channels for support and development.

    +
    +

    Tip

    +

    If you find some issues with the instructions above, +please file a support ticket.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/cae/meshing-tools/index.html b/software/cae/meshing-tools/index.html new file mode 100644 index 00000000..80d9bd77 --- /dev/null +++ b/software/cae/meshing-tools/index.html @@ -0,0 +1,3033 @@ + + + + + + + + + + + + + + + + + + + + + + + + Meshing-Tools - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Meshing Tools

    +

    Gmsh

    +

    Gmsh is an open source 3D finite element mesh generator with a built-in CAD engine and post-processor. +Its design goal is to provide a fast, light and user-friendly meshing tool with parametric +input and advanced visualization capabilities. Gmsh is built around four modules: geometry, mesh, +solver and post-processing. The specification of any input to these modules is done either +interactively using the graphical user interface, in ASCII text files using Gmsh's +own scripting language (.geo files), or using the C++, C, Python or Julia Application Programming Interface (API).

    +

    See this general presentation +for a high-level overview of Gmsh and recent developments, +the screencasts for a quick tour of Gmsh's graphical user interface, and the reference manual +for a more thorough overview of Gmsh's capabilities, some frequently +asked questions and the documentation of the C++, C, Python and Julia API.

    +

The source code repository contains many examples written using both the built-in script language and the API (see e.g. the tutorials and demos).

    +

    Available versions of Gmsh in ULHPC

    +

To check available versions of Gmsh at ULHPC type module spider gmsh. The list below shows the available versions of Gmsh on ULHPC. +

    cae/gmsh/4.3.0-intel-2018a
    +cae/gmsh/4.4.0-intel-2019a
    +

    +

    To work with Gmsh interactively on ULHPC:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11
    +
+# Load the module for Gmsh and needed environment
    +$ module purge
    +$ module load swenv/default-env/v1.2-20191021-production
    +$ module load cae/gmsh/4.4.0-intel-2019a
    +
    +$ gmsh example.geo
    +
    + +
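Gmsh can also be run without its GUI, e.g. inside a batch job; a minimal sketch, assuming the same module environment and an input file example.geo:

$ gmsh example.geo -3 -o example.msh   # generate the 3D mesh non-interactively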

    Salome

    +

    SALOME is an open-source software that provides a generic +Pre- and Post-Processing platform for numerical simulation. +It is based on an open and flexible architecture made of reusable components.

    +

    SALOME is a cross-platform solution. It is distributed under the terms of the GNU LGPL license. +You can download both the source code and the executables from this site.

    +

To know more about the SALOME documentation, please refer to https://www.salome-platform.org/user-section/salome-tutorials

    +

    Available versions of SALOME in ULHPC

    +

To check available versions of SALOME at ULHPC type module spider salome. The list below shows the available versions of SALOME on ULHPC.

    +
    cae/Salome/8.5.0-intel-2018a
    +cae/Salome/8.5.0-intel-2019a
    +
    + +

    To work with SALOME interactively on ULHPC:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ srun -p batch --time=00:30:00 --ntasks 1 -c 4 --x11 --pty bash -i
    +
    +# Load the module Salome and needed environment
    +$ module purge
    +$ module load swenv/default-env/v1.2-20191021-production
    +$ module load cae/Salome/8.5.0-intel-2019a
    +
    +$ salome start
    +
    + +
    +

    Tip

    +

    If you find some issues with the instructions above, +please file a support ticket.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/cae/openfoam/index.html b/software/cae/openfoam/index.html new file mode 100644 index 00000000..8b7c7d78 --- /dev/null +++ b/software/cae/openfoam/index.html @@ -0,0 +1,3003 @@ + + + + + + + + + + + + + + + + + + + + + + + + OpenFOAM - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    OpenFOAM

    + +

    +

    OpenFOAM is the free, open source CFD software developed primarily by OpenCFD Ltd since 2004. +It has a large user base across most areas of engineering and science, +from both commercial and academic organisations. OpenFOAM has an extensive +range of features to solve anything from complex fluid flows involving chemical reactions, +turbulence and heat transfer, to acoustics, solid mechanics and electromagnetics

    +

    Available versions of OpenFOAM in ULHPC

    +

    To check available versions of OpenFOAM at ULHPC type module spider openfoam. +The following versions of OpenFOAM are available in ULHPC: +

    # Available versions
    +cae/OpenFOAM/v1712-intel-2018a
    +cae/OpenFOAM/v1812-foss-2019a   
    +

    +

    Interactive mode

    +

To run OpenFOAM in interactive mode, please follow these steps: +

    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p batch --time=00:30:00 --ntasks 1 -c 4 --x11
    +
    +# Load the required version of OpenFOAM and Intel environment
    +$ module load swenv/default-env/v1.1-20180716-production
    +$ module load cae/OpenFOAM/v1712-intel-2018a
    +
    +# Load the OpenFOAM environment
    +$ source $FOAM_BASH
    +
    +$ mkdir OpenFOAM
    +$ cd OpenFOAM
    +
    +# Copy the example to your local folder (cavity example)
    +$ cp -r /opt/apps/resif/data/production/v1.1-20180716/default/software/cae/OpenFOAM/v1712-intel-2018a/OpenFOAM-v1712/tutorials/incompressible/icoFoam/cavity/cavity .
    +$ cd cavity
    +
    +# To initialize the mesh
    +$ blockMesh
    +
    +# Run the simulation
    +$ icoFoam
    +
    +# Visualize the solution
    +$ paraFoam
    +

    +

    Batch mode

    +

    Example of computational domain preparation (Dambreak example). +

    $ mkdir OpenFOAM
    +$ cd OpenFOAM
    +$ cp -r /opt/apps/resif/data/production/v1.1-20180716/default/software/cae/OpenFOAM/v1712-intel-2018a/OpenFOAM-v1712/tutorials/multiphase/interFoam/laminar/damBreak/damBreak .
    +$ blockMesh
    +$ cd damBreak/system
    +
+Open decomposeParDict and set numberOfSubdomains 16, where 16 is the number of MPI processes you plan to use. Then run blockMesh to prepare the computational domain (mesh), and finally decomposePar to partition the mesh across the subdomains.
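A minimal sketch of that preparation step, run from inside the damBreak case directory (the setFields step is typically also required for this tutorial to initialise the phase-fraction field):

$ blockMesh       # build the mesh
$ setFields       # initialise the water column (damBreak tutorial)
$ decomposePar    # split the case into numberOfSubdomains pieces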

    +
    #!/bin/bash -l
    +#SBATCH -J OpenFOAM
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --ntasks-per-socket=14
    +#SBATCH -c 1
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Write out the stdout+stderr in a file
    +#SBATCH -o output.txt
    +
    +# Mail me on job start & end
    +#SBATCH --mail-user=myemailaddress@universityname.domain
    +#SBATCH --mail-type=BEGIN,END
    +
    +# To get basic info. about the job
    +echo "== Starting run at $(date)"
    +echo "== Job ID: ${SLURM_JOBID}"
    +echo "== Node list: ${SLURM_NODELIST}"
    +echo "== Submit dir. : ${SLURM_SUBMIT_DIR}"
    +
    +# Load the required version of OpenFOAM and needed environment
    +module purge
    +module load swenv/default-env/v1.1-20180716-production
    +module load cae/OpenFOAM/v1712-intel-2018a
    +
    +# Load the OpenFOAM environment
    +source $FOAM_BASH
    +
    +srun interFoam -parallel
    +
    + +

    Additional information

    +

To know more about the OpenFOAM tutorials and documentation, please refer to https://www.openfoam.com/documentation/tutorial-guide/

    +
    +

    Tip

    +

    If you find some issues with the instructions above, +please file a support ticket.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/computational-chemistry/electronics/abinit/index.html b/software/computational-chemistry/electronics/abinit/index.html new file mode 100644 index 00000000..09d4eeae --- /dev/null +++ b/software/computational-chemistry/electronics/abinit/index.html @@ -0,0 +1,2970 @@ + + + + + + + + + + + + + + + + + + + + + + + + ABINIT - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    ABINIT

    + +

    +ABINIT is a software suite to calculate the optical, mechanical, vibrational, +and other observable properties of materials. Starting from the quantum equations +of density functional theory, you can build up to advanced applications with +perturbation theories based on DFT, and many-body Green's functions (GW and DMFT) . +ABINIT can calculate molecules, nanostructures and solids with any chemical composition, +and comes with several complete and robust tables of atomic potentials. +On-line tutorials are available for the main features of the code, +and several schools and workshops are organized each year.

    +

    Available versions of ABINIT in ULHPC

    +

    To check available versions of ABINIT at ULHPC type module spider abinit. +The following list shows the available versions of ABINIT in ULHPC. +

    chem/ABINIT/8.2.3-intel-2017a
    +chem/ABINIT/8.6.3-intel-2018a-trio-nc
    +chem/ABINIT/8.6.3-intel-2018a
    +chem/ABINIT/8.10.2-intel-2019a
    +

    +

    Interactive mode

    +

To run ABINIT in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11 # OR si --x11 [...]
    +
    +# Load the module abinit and needed environment
    +$ module purge
+$ module load swenv/default-env/devel # if needed (only relevant for the 2019a software environment)
    +$ module load chem/ABINIT/8.10.2-intel-2019a
    +
    +$ export SRUN_CPUS_PER_TASK=$SLURM_CPUS_PER_TASK
    +
    +$ abinit < example.in 
    +
    + +

    Batch mode

    +
    #!/bin/bash -l
    +#SBATCH -J ABINIT
    +#SBATCH -A <project name>
+#SBATCH -M iris
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module abinit and needed environment
    +module purge
+module load swenv/default-env/devel # if needed (only relevant for the 2019a software environment)
    +module load chem/ABINIT/8.10.2-intel-2019a
    +
    +srun -n ${SLURM_NTASKS} abinit < input.files &> out
    +
    + +

    Additional information

    +

To know more about the ABINIT tutorials and documentation, please refer to ABINIT tutorial.

    +
    +

    Tip

    +

    If you find some issues with the instructions above, +please report it to us using support ticket.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/computational-chemistry/electronics/ase/index.html b/software/computational-chemistry/electronics/ase/index.html new file mode 100644 index 00000000..58a71890 --- /dev/null +++ b/software/computational-chemistry/electronics/ase/index.html @@ -0,0 +1,2967 @@ + + + + + + + + + + + + + + + + + + + + + + + + ASE - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    ASE

    + +

    The Atomic Simulation Environment (ASE) is a set of tools and Python +modules for setting up, manipulating, running, visualizing and +analyzing atomistic simulations. The code is freely available +under the GNU LGPL license. ASE provides interfaces to different +codes through Calculators which are used together with the +central Atoms object and the many available algorithms in ASE.

    +

    Available versions of ASE in ULHPC

    +

    To check available versions of ASE at ULHPC type module spider ase. +The following list shows the available versions of ASE in ULHPC. +

    chem/ASE/3.13.0-intel-2017a-Python-2.7.13
    +chem/ASE/3.16.0-foss-2018a-Python-2.7.14
    +chem/ASE/3.16.0-intel-2018a-Python-2.7.14
    +chem/ASE/3.17.0-foss-2019a-Python-3.7.2
    +chem/ASE/3.17.0-intel-2019a-Python-2.7.15
    +chem/ASE/3.17.0-intel-2019a-Python-3.7.2
    +

    +

    Interactive mode

    +

To run ASE in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11 # OR si --x11 [...]
    +
    +# Load the module ase and needed environment
    +$ module purge
+$ module load swenv/default-env/devel # if needed (only relevant for the 2019a software environment)
    +$ module load chem/ASE/3.17.0-intel-2019a-Python-3.7.2
    +
    +$ python3 example.py
    +
    + +

    Batch mode

    +
    #!/bin/bash -l
    +#SBATCH -J ASE
    +#SBATCH -N 1
    +#SBATCH -A <project name>
+#SBATCH -M iris
    +#SBATCH --ntasks-per-node=1
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module ase and needed environment
+module purge
+module load swenv/default-env/devel # if needed (only relevant for the 2019a software environment)
+module load chem/ASE/3.17.0-intel-2019a-Python-3.7.2
    +
    +python3 example.py
    +
    + +

    Additional information

    +

To know more about the ASE tutorials and documentation, please refer to ASE tutorials.

    +
    +

    Tip

    +

    If you find some issues with the instructions above, +please report it to us using support ticket.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/computational-chemistry/electronics/crystal/index.html b/software/computational-chemistry/electronics/crystal/index.html new file mode 100644 index 00000000..7c72e6c6 --- /dev/null +++ b/software/computational-chemistry/electronics/crystal/index.html @@ -0,0 +1,2875 @@ + + + + + + + + + + + + + + + + + + + + + + + + Crystal - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Crystal

    + +

    +The CRYSTAL package +performs ab initio calculations of the ground +state energy, energy gradient, electronic wave function and properties +of periodic systems. Hartree-Fock or Kohn-Sham Hamiltonians +(that adopt an Exchange-Correlation potential following the postulates of +Density-Functional Theory) can be used. Systems periodic in 0 (molecules, 0D), +1 (polymers,1D), 2 (slabs, 2D), and 3 dimensions (crystals, 3D) +are treated on an equal footing. In eachcase the fundamental approximation +made is the expansion of the single particle wave functions(’Crystalline Orbital’, CO) +as a linear combination of Bloch functions (BF) defined in terms of +local functions (hereafter indicated as ’Atomic Orbitals’, AOs).

    +

    Available versions of CRYSTAL in ULHPC

    +

    To check available versions of CRYSTAL at UL-HPC type module spider crystal. +The following list shows the available versions of CRYSTAL in ULHPC. +

    chem/CRYSTAL/17-intel-2017a-1.0.1
    +chem/CRYSTAL/17-intel-2018a-1.0.1
    +chem/CRYSTAL/17-intel-2019a-1.0.2
    +

    +

    Interactive mode

    +

To test CRYSTAL in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11 # OR si --x11 [...]
    +
    +# Load the module crytal and needed environment
    +$ module purge
+$ module load swenv/default-env/devel # if needed (only relevant for the 2019a software environment)
    +$ module load chem/CRYSTAL/17-intel-2019a-1.0.2
    +
    +$ Pcrystal >& log.out
    +
    + +
    +

    Warning

    +

Please note that your input file must be named exactly INPUT. Pcrystal will automatically pick up the INPUT file from the directory you run it in.
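For instance (a minimal sketch, with /path/to/case standing for your own calculation directory):

$ cd /path/to/case
$ ls
INPUT
$ Pcrystal >& log.out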

    +
    +

    Batch mode

    +
    #!/bin/bash -l
    +#SBATCH -J CRYSTAL
    +#SBATCH -N 2
    +#SBATCH -A <project name>
+#SBATCH -M iris
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
+# Load the module crystal and needed environment
+module purge
+module load swenv/default-env/devel    # if needed (only relevant for the 2019a software environment)
+module load chem/CRYSTAL/17-intel-2019a-1.0.2
    +
    +srun -n ${SLURM_NTASKS} Pcrystal >& log.out
    +
    + +

    Additional information

    +

For more information about CRYSTAL tutorials and documentation, please refer to the CRYSTAL solutions tutorials.

    +
    +

    Tip

    +

If you find any issues with the instructions above, please report them to us by opening a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/computational-chemistry/electronics/meep/index.html b/software/computational-chemistry/electronics/meep/index.html new file mode 100644 index 00000000..4d406bee --- /dev/null +++ b/software/computational-chemistry/electronics/meep/index.html @@ -0,0 +1,2965 @@ + + + + + + + + + + + + + + + + + + + + + + + + MEEP - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    MEEP

    + +

    +Meep is a free and open-source +software package for electromagnetics simulation via +the finite-difference time-domain (FDTD) method spanning a +broad range of applications.

    +

    Available versions of Meep in ULHPC

    +

To check the available versions of Meep at ULHPC, type module spider meep. The following list shows the available versions of Meep in ULHPC.

    phys/Meep/1.3-intel-2017a
    +phys/Meep/1.4.3-intel-2018a
    +phys/Meep/1.4.3-intel-2019a
    +

    +

    Interactive mode

    +

To try Meep in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11 # OR si --x11 [...]
    +
    +# Load the module meep and needed environment 
    +$ module purge
+$ module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +$ module load toolchain/intel/2019a
    +$ module load phys/Meep/1.4.3-intel-2019a
    +
    +$ meep example.ctl > result_output
    +
    + +

    Batch mode

    +
    #!/bin/bash -l
    +#SBATCH -J Meep
    +#SBATCH -N 2
    +#SBATCH -A <project name>
+#SBATCH -M iris
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module meep and needed environment 
    +module purge
+module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +module load toolchain/intel/2019a
    +module load phys/Meep/1.4.3-intel-2019a
    +
    +srun -n ${SLURM_NTASKS} meep example.ctl > result_output
    +
    + +

    Additional information

    +

For more information about Meep tutorials and documentation, please refer to the Meep tutorial.

    +
    +

    Tip

    +

If you find any issues with the instructions above, please report them to us by opening a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/computational-chemistry/electronics/quantum-espresso/index.html b/software/computational-chemistry/electronics/quantum-espresso/index.html new file mode 100644 index 00000000..467af6a4 --- /dev/null +++ b/software/computational-chemistry/electronics/quantum-espresso/index.html @@ -0,0 +1,2974 @@ + + + + + + + + + + + + + + + + + + + + + + + + Quantum Espresso - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Quantum Espresso

    + +

    +Quantum ESPRESSO +is an integrated suite of Open-Source computer codes for electronic-structure +calculations and materials modeling at the nanoscale. +It is based on density-functional theory, plane waves, and pseudopotentials.

    +

    Quantum ESPRESSO has evolved into a distribution of independent and +inter-operable codes in the spirit of an open-source project. +The Quantum ESPRESSO distribution consists of a “historical” +core set of components, and a set of plug-ins that perform more advanced tasks, +plus a number of third-party packages designed to be inter-operable with +the core components. Researchers active in the field of electronic-structure +calculations are encouraged to participate in the project by +contributing their own codes or by implementing their own +ideas into existing codes.

    +

    Available versions of Quantum ESPRESSO in ULHPC

    +

To check the available versions of Quantum ESPRESSO at ULHPC, type module spider QuantumESPRESSO. The following list shows the available versions of Quantum ESPRESSO in ULHPC.

    chem/QuantumESPRESSO/6.1-intel-2017a
    +chem/QuantumESPRESSO/6.1-intel-2018a-maxter500
    +chem/QuantumESPRESSO/6.1-intel-2018a
    +chem/QuantumESPRESSO/6.2.1-intel-2018a
    +chem/QuantumESPRESSO/6.4.1-intel-2019a
    +

    +

    Interactive mode

    +

To open Quantum ESPRESSO in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11  # OR si --x11 [...]
    +
    +# Load the module quantumespresso and needed environment 
    +$ module purge
+$ module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +$ module load chem/QuantumESPRESSO/6.4.1-intel-2019a
    +
    +$ pw.x -input example.in
    +
    + +

    Batch mode

    +
    #!/bin/bash -l
    +#SBATCH -J QuantumESPRESSO
    +#SBATCH -N 2
    +#SBATCH -A <project name>
+#SBATCH -M iris
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module quantumespresso and needed environment 
    +module purge
+module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +module load chem/QuantumESPRESSO/6.4.1-intel-2019a
    +
    +srun -n ${SLURM_NTASKS} pw.x -input example.inp
    +
    + +
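For larger runs, pw.x can additionally distribute k-points over pools with the -nk flag. The line below is only a sketch: the pool count (4 here) is an arbitrary example and must divide the number of MPI tasks.

# Optional sketch: distribute k-points over 4 pools and capture the output
srun -n ${SLURM_NTASKS} pw.x -nk 4 -input example.in > example.out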

    Additional information

    +

For more information about Quantum ESPRESSO tutorials and documentation, please refer to the Quantum ESPRESSO user manual.

    +
    +

    Tip

    +

If you find any issues with the instructions above, please report them to us by opening a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/computational-chemistry/electronics/vasp/index.html b/software/computational-chemistry/electronics/vasp/index.html new file mode 100644 index 00000000..3f8ccca0 --- /dev/null +++ b/software/computational-chemistry/electronics/vasp/index.html @@ -0,0 +1,2964 @@ + + + + + + + + + + + + + + + + + + + + + + + + VASP - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    VASP

    + +

    +VASP is a package for performing ab initio quantum-mechanical molecular dynamics (MD) +using pseudopotentials and a plane wave basis set. The approach implemented in VASP +is based on a finite-temperature local-density approximation (with the free energy as variational quantity) +and an exact evaluation of the instantaneous electronic ground state at each MD step +using efficient matrix diagonalization schemes and an efficient Pulay mixing.

    +

    Available versions of VASP in ULHPC

    +

To check the available versions of VASP at ULHPC, type module spider vasp. The following list shows the available versions of VASP in ULHPC.

    phys/VASP/5.4.4-intel-2017a
    +phys/VASP/5.4.4-intel-2018a
    +phys/VASP/5.4.4-intel-2019a
    +

    +

    Interactive mode

    +

To open VASP in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11 # OR si --x11 [...]
    +
    +# Load the module vasp and needed environment 
    +$ module purge
+$ module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +$ module load phys/VASP/5.4.4-intel-2019a
    +
    +$ vasp_[std/gam/ncl]
    +
    + +

    Batch mode

    +
    #!/bin/bash -l
    +#SBATCH -J VASP
    +#SBATCH -N 2
    +#SBATCH -A <project name>
+#SBATCH -M iris
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module vasp and needed environment 
    +module purge 
+module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +module load phys/VASP/5.4.4-intel-2019a
    +
    +srun -n ${SLURM_NTASKS} vasp_[std/gam/ncl]
    +
    + +
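The vasp_[std/gam/ncl] placeholder above refers to the three VASP binaries: vasp_std (standard version), vasp_gam (Gamma-point-only version) and vasp_ncl (non-collinear/spin-orbit version). As an illustration, a standard run would be launched as:

# Standard (multi-k-point) VASP binary
srun -n ${SLURM_NTASKS} vasp_std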

    Additional information

    +

For more information about VASP tutorials and documentation, please refer to the VASP manual.

    +
    +

    Tip

    +

If you find any issues with the instructions above, please report them to us by opening a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/computational-chemistry/molecular-dynamics/cp2k/index.html b/software/computational-chemistry/molecular-dynamics/cp2k/index.html new file mode 100644 index 00000000..34238f73 --- /dev/null +++ b/software/computational-chemistry/molecular-dynamics/cp2k/index.html @@ -0,0 +1,2969 @@ + + + + + + + + + + + + + + + + + + + + + + + + CP2K - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    CP2K

    + +

    +CP2K is a quantum chemistry and solid state physics software package that can perform atomistic +simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems. +CP2K provides a general framework for different modeling methods such as DFT using the mixed +Gaussian and plane waves approaches GPW and GAPW. Supported theory levels include DFTB, LDA, +GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, …), and classical force +fields (AMBER, CHARMM, …). CP2K can do simulations of molecular dynamics, metadynamics, +Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimization, +and transition state optimization using NEB or dimer method. +CP2K is written in Fortran 2008 and can be run efficiently in parallel using a combination of multi-threading, +MPI, and CUDA. It is freely available under the GPL license. +It is therefore easy to give the code a try, and to make modifications as needed.

    +

    Available versions of CP2K in ULHPC

    +

To check the available versions of CP2K at ULHPC, type module spider cp2k. The following list shows the available versions of CP2K in ULHPC.

    chem/CP2K/6.1-foss-2019a
    +chem/CP2K/6.1-intel-2018a
    +

    +

    Interactive mode

    +

To open CP2K in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11 # OR si --x11 [...]
    +
    +# Load the module cp2k and needed environment 
    +$ module purge
+$ module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +$ module load chem/CP2K/6.1-intel-2018a
    +
    +$ cp2k.popt -i example.inp 
    +
    + +

    Batch mode

    +
    #!/bin/bash -l
    +#SBATCH -J CP2K
    +#SBATCH -N 2
    +#SBATCH -A <project name>
+#SBATCH -M iris
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module cp2k and needed environment 
    +module purge
+module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +module load chem/CP2K/6.1-intel-2018a 
    +
    +srun -n ${SLURM_NTASKS} cp2k.popt -i example.inp > outputfile.out
    +
    + +
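cp2k.popt is the MPI-only binary; depending on the build, a hybrid MPI/OpenMP binary cp2k.psmp may also be available (check the module contents after loading it). As a minimal sketch, the output file can also be set explicitly with the -o flag:

# Sketch: explicit input and output files (names are examples)
srun -n ${SLURM_NTASKS} cp2k.popt -i example.inp -o example.out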

    Additional information

    +

For more information about CP2K tutorials and documentation, please refer to the CP2K HOWTOs.

    +
    +

    Tip

    +

If you find any issues with the instructions above, please report them to us by opening a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/computational-chemistry/molecular-dynamics/gromacs/index.html b/software/computational-chemistry/molecular-dynamics/gromacs/index.html new file mode 100644 index 00000000..21ba9283 --- /dev/null +++ b/software/computational-chemistry/molecular-dynamics/gromacs/index.html @@ -0,0 +1,2969 @@ + + + + + + + + + + + + + + + + + + + + + + + + GROMACS - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    GROMACS

    + +

    +GROMACS is a versatile package to perform molecular dynamics, i.e. simulate +the Newtonian equations of motion for systems with hundreds to millions of particles. +It is primarily designed for biochemical molecules like proteins, lipids and nucleic +acids that have a lot of complicated bonded interactions, but since GROMACS +is extremely fast at calculating the nonbonded interactions +(that usually dominate simulations) many groups are also using it +for research on non-biological systems, e.g. polymers.

    +

    Available versions of GROMACS in ULHPC

    +

To check the available versions of GROMACS at ULHPC, type module spider gromacs. The following list shows the available versions of GROMACS in ULHPC.

    bio/GROMACS/2016.3-intel-2017a-hybrid
    +bio/GROMACS/2016.5-intel-2018a-hybrid
    +bio/GROMACS/2019.2-foss-2019a
    +bio/GROMACS/2019.2-fosscuda-2019a
    +bio/GROMACS/2019.2-intel-2019a
    +bio/GROMACS/2019.2-intelcuda-2019a
    +

    +

    Interactive mode

    +

To try GROMACS in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11 # OR si --x11 [...]
    +
    +# Load the module gromacs and needed environment 
    +$ module purge
+$ module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +$ module load bio/GROMACS/2019.2-intel-2019a
    +
    +$ gmx_mpi mdrun <all your GMX job specification options in here>
    +
    + +

    Batch mode

    +
    #!/bin/bash -l
+#SBATCH -J GROMACS
+#SBATCH -N 2
+#SBATCH -A <project name>
+#SBATCH -M iris
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module gromacs and needed environment 
    +module purge
+module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +module load bio/GROMACS/2019.2-intel-2019a
    +
    +srun -n ${SLURM_NTASKS} gmx_mpi mdrun <all your GMX job specification options in here>
    +
    + +
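As a concrete sketch of the placeholder above (the file names are examples), a run reading the portable run input file topol.tpr could look like:

# Sketch: -s selects the run input file, -deffnm sets the default name for output files
srun -n ${SLURM_NTASKS} gmx_mpi mdrun -s topol.tpr -deffnm run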

    Additional information

    +

For more information about GROMACS tutorials and documentation, please refer to the GROMACS documentation.

    +
    +

    Tip

    +

If you find any issues with the instructions above, please report them to us by opening a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/computational-chemistry/molecular-dynamics/helping-libraries/index.html b/software/computational-chemistry/molecular-dynamics/helping-libraries/index.html new file mode 100644 index 00000000..c74a7e47 --- /dev/null +++ b/software/computational-chemistry/molecular-dynamics/helping-libraries/index.html @@ -0,0 +1,3183 @@ + + + + + + + + + + + + + + + + + + + + + + + + Helping Libraries - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Helping Libraries

    + +

    libctl

    +

    This is libctl, a Guile-based library +for supporting flexible control files in scientific simulations. +For more information about libctl, please refer to libctl Documentation.

    +

    Available versions of libctl in ULHPC:

    +
    chem/libctl/3.2.2-intel-2017a
    +chem/libctl/4.0.0-intel-2018a
    +chem/libctl/4.0.0-intel-2019a
    +
    + +

    Libint

    +

    Libint library is used to evaluate the traditional (electron repulsion) and +certain novel two-body matrix elements (integrals) over Cartesian +Gaussian functions used in modern atomic and molecular theory.

    +

    Available versions of Libint in ULHPC:

    +
    chem/Libint/1.1.6-GCC-8.2.0-2.31.1
    +chem/Libint/1.2.1-intel-2018a
    +
    + +

    Libxc

    +

    Libxc is a library of exchange-correlation functionals for density-functional theory. +The aim is to provide a portable, well tested and reliable set of exchange and +correlation functionals that can be used by all the ETSF codes and also other codes.

    +

    Available versions of Libxc in ULHPC:

    +
    chem/libxc/3.0.0-intel-2017a
    +chem/libxc/3.0.1-intel-2018a
    +chem/libxc/4.2.3-intel-2019a
    +chem/libxc/4.3.4-GCC-8.2.0-2.31.1
    +chem/libxc/4.3.4-iccifort-2019.1.144-GCC-8.2.0-2.31.1
    +
    + +

    PLUMED

    +

    +PLUMED works together with some of the most popular MD engines, +such as ACEMD, Amber, DL_POLY, GROMACS, LAMMPS, NAMD, OpenMM, DFTB+, ABIN, CP2K, i-PI, PINY-MD, +and Quantum Espresso. In addition, PLUMED can be used to augment the capabilities of +analysis tools such as VMD, HTMD, OpenPathSampling, and as a +standalone utility to analyze pre-calculated MD trajectories.

    +

    PLUMED can be interfaced with the host code using a single +well-documented API that enables the PLUMED functionalities to be imported. +The API is accessible from multiple languages (C, C++, FORTRAN, and Python), +and is thus compatible with the majority of the codes used in the community. +The PLUMED license (L-GPL) also allows it to be interfaced with proprietary software.

    +

    Available versions of PLUMED in ULHPC:

    +

    chem/PLUMED/2.4.2-intel-2018a
    +chem/PLUMED/2.5.1-foss-2019a
    +chem/PLUMED/2.5.1-intel-2019a
    +
+For more information about PLUMED tutorials and documentation, please refer to the PLUMED Cambridge tutorial.
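As a quick sanity check that the module is usable (a minimal sketch; the module version is taken from the list above), you can load PLUMED and query its version:

module purge
module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
module load chem/PLUMED/2.5.1-intel-2019a
plumed info --version    # prints the version of the loaded PLUMED installation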

    +

    ESPResSo

    +

    +ESPResSo is a highly versatile software package for performing and analyzing +scientific Molecular Dynamics many-particle simulations of coarse-grained +atomistic or bead-spring models as they are used in soft matter research in physics, +chemistry and molecular biology. It can be used to simulate systems such as polymers, +liquid crystals, colloids, polyelectrolytes, ferrofluids and biological systems, +for example DNA and lipid membranes. It also has a DPD and lattice Boltzmann +solver for hydrodynamic interactions, and allows several particle couplings to the LB fluid.

    +

    ESPResSo is free, open-source software published under the GNU General Public License (GPL3). +It is parallelized and can be employed on desktop machines, convenience clusters as well as on +supercomputers with hundreds of CPUs, and some modules have also support for GPU acceleration. +The parallel code is controlled via the scripting language Python, +which gives the software its great flexibility.

    +

    Available versions of ESPResSo in ULHPC:

    +

    phys/ESPResSo/3.3.1-intel-2017a-parallel
    +phys/ESPResSo/3.3.1-intel-2018a-parallel
    +phys/ESPResSo/4.0.2-intel-2019a
    +phys/ESPResSo/4.0.2-intelcuda-2019a
    +
+For more information about ESPResSo tutorials and documentation, please refer to the ESPResSo documentation.

    +

    UDUNITS

    +

    +The UDUNITS package supports +units of physical quantities. Its C library provides for arithmetic +manipulation of units and for conversion of numeric values between +compatible units. The package contains an extensive unit database, +which is in XML format and user-extendable. The package also contains a +command-line utility for investigating units and converting values.

    +

    Available version of UDUNITS in ULHPC:

    +

    phys/UDUNITS/2.2.26-GCCcore-8.2.0
    +
+For more information about UDUNITS tutorials and documentation, please refer to the UDUNITS 2.2.26 Manual.
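A minimal sketch of using the module (the udunits2 utility then prompts interactively for the unit you have and the unit you want):

module load phys/UDUNITS/2.2.26-GCCcore-8.2.0
udunits2    # interactive command-line unit converter from the UDUNITS package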

    +
    +

    Tip

    +

If you find any issues with the instructions above, please report them to us by opening a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/computational-chemistry/molecular-dynamics/namd/index.html b/software/computational-chemistry/molecular-dynamics/namd/index.html new file mode 100644 index 00000000..bffa2287 --- /dev/null +++ b/software/computational-chemistry/molecular-dynamics/namd/index.html @@ -0,0 +1,2968 @@ + + + + + + + + + + + + + + + + + + + + + + + + NAMD - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    NAMD

    + +

    +NAMD, recipient of a 2002 Gordon Bell Award and a 2012 Sidney Fernbach Award, +is a parallel molecular dynamics code designed for high-performance simulation +of large biomolecular systems. Based on Charm++ parallel objects, +NAMD scales to hundreds of cores for typical simulations and beyond 500,000 cores for the largest simulations. +NAMD uses the popular molecular graphics program VMD for simulation setup and +trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR. +NAMD is distributed free of charge with source code. You can build NAMD yourself or +download binaries for a wide variety of platforms. +Our tutorials show you how to use NAMD and VMD for biomolecular modeling.

    +

    Available versions of NAMD in ULHPC

    +

To check the available versions of NAMD at ULHPC, type module spider namd. The following list shows the available versions of NAMD in ULHPC.

    chem/NAMD/2.12-intel-2017a-mpi
    +chem/NAMD/2.12-intel-2018a-mpi
    +chem/NAMD/2.13-foss-2019a-mpi
    +

    +

    Interactive mode

    +

To open NAMD in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11 # OR si --x11 [...]
    +
    +# Load the module namd and needed environment 
    +$ module purge
+$ module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +$ module load chem/NAMD/2.12-intel-2018a-mpi
    +
    +$ namd2 +setcpuaffinity +p4 config_file > output_file
    +
    + +

    Batch mode

    +
    #!/bin/bash -l
    +#SBATCH -J NAMD
    +#SBATCH -N 2
    +#SBATCH -A <project name>
+#SBATCH -M iris
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module namd and needed environment 
    +module purge
+module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +module load chem/NAMD/2.12-intel-2018a-mpi
    +
    +srun -n ${SLURM_NTASKS} namd2 +setcpuaffinity +p56 config_file.namd > output_file
    +
    + +
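Note that the +p value passed to namd2 should match the total number of reserved cores (here 2 nodes x 28 tasks per node = 56). A sketch that keeps the two in sync automatically:

# Sketch: derive the +p value from the Slurm task count instead of hard-coding it
srun -n ${SLURM_NTASKS} namd2 +setcpuaffinity +p${SLURM_NTASKS} config_file.namd > output_file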

    Additional information

    +

For more information about NAMD tutorials and documentation, please refer to the NAMD User's Guide.

    +
    +

    Tip

    +

If you find any issues with the instructions above, please report them to us by opening a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/computational-chemistry/molecular-dynamics/nwchem/index.html b/software/computational-chemistry/molecular-dynamics/nwchem/index.html new file mode 100644 index 00000000..6a797de7 --- /dev/null +++ b/software/computational-chemistry/molecular-dynamics/nwchem/index.html @@ -0,0 +1,2968 @@ + + + + + + + + + + + + + + + + + + + + + + + + NWCHEM - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    NWCHEM

    + +

    +NWChem aims to provide its users with computational chemistry tools that +are scalable both in their ability to efficiently treat large scientific +problems, and in their use of available computing resources from +high-performance parallel supercomputers to conventional workstation clusters.

    +

    Available versions of NWChem in ULHPC

    +

To check the available versions of NWChem at ULHPC, type module spider nwchem. The following list shows the available versions of NWChem in ULHPC.

    +
    chem/NWChem/6.6.revision27746-intel-2017a-2015-10-20-patches-20170814-Python-2.7.13
    +chem/NWChem/6.8.revision47-intel-2018a-Python-2.7.14
    +chem/NWChem/6.8.revision47-intel-2019a-Python-2.7.15
    +
    + +

    Interactive mode

    +

To try NWChem in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11 # OR si --x11 [...]
    +
    +# Load the module nwchem and needed environment 
    +$ module purge
+$ module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +$ module load chem/NWChem/6.8.revision47-intel-2019a-Python-2.7.15
    +
    +$ nwchem example
    +
    + +
    +

    naming input file

    +

Please note that the input file must carry the .nw extension, e.g. example.nw.

    +
    +
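As a minimal illustration of this convention (file names are examples), an input file example.nw can also be passed with its full name and the output redirected to a file:

nwchem example.nw > example.out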

    Batch mode

    +
    #!/bin/bash -l
    +#SBATCH -J NWChem
    +#SBATCH -N 2
    +#SBATCH -A <project name>
+#SBATCH -M iris
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module nwchem and needed environment 
    +module purge 
+module load swenv/default-env/devel # If needed (only relevant on the 2019a software environment)
    +module load chem/NWChem/6.8.revision47-intel-2019a-Python-2.7.15
    +
    +srun -n ${SLURM_NTASKS} nwchem example 
    +
    + +

    Additional information

    +

For more information about NWChem tutorials and documentation, please refer to the NWChem User Documentation.

    +
    +

    Tip

    +

If you find any issues with the instructions above, please report them to us by opening a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/eessi/index.html b/software/eessi/index.html new file mode 100644 index 00000000..9e84285e --- /dev/null +++ b/software/eessi/index.html @@ -0,0 +1,2854 @@ + + + + + + + + + + + + + + + + + + + + + + + + EESSI software stack - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    EESSI - European Environment for Scientific Software Installations

    +

    +

The European Environment for Scientific Software Installations (EESSI, pronounced as "easy") is a collaboration between different European partners in the HPC community. The goal of this project is to build a common stack of scientific software installations for HPC systems and beyond, including laptops, personal workstations and cloud infrastructure.

    +

    The EESSI software stack is available on the ULHPC platform, and gives you access to software modules maintained by the EESSI project and optimized for the CPU architectures available on the ULHPC platform.

    +

    On a compute node, to set up the EESSI environment, simply run the command:

    +
    source /cvmfs/software.eessi.io/versions/2023.06/init/bash
    +
    + +

    The first usage may be slow as the files are downloaded from an upstream Stratum 1 server, but the files are cached locally.

    +

    You should see the following output:

    +
    Found EESSI repo @ /cvmfs/software.eessi.io/versions/2023.06!
    +archdetect says x86_64/amd/zen2
    +Using x86_64/amd/zen2 as software subdirectory.
    +Using /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all as the directory to be added to MODULEPATH.
    +Found Lmod configuration file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/.lmod/lmodrc.lua
    +Initializing Lmod...
    +Prepending /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all to $MODULEPATH...
    +Environment set up to use EESSI (2023.06), have fun!
    +{EESSI 2023.06} [user@system ~]$ 
    +
    + +

    The last line is the shell prompt.

    +

    Your environment is now set up, you are ready to start running software provided by EESSI!

    +

    To see which modules (and extensions) are available, run:

    +
    module avail
    +
    + +

    Here is a short excerpt of the output produced by module avail:

    +
    ----- /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all -----
    +   ALL/0.9.2-foss-2023a           ESPResSo/4.2.1-foss-2023a        foss/2023a            h5py/3.9.0-foss-2023a
    +   ParaView/5.11.2-foss-2023a     PyTorch/2.1.2-foss-2023a         QuantumESPRESSO/7.2-foss-2022b   VTK/9.3.0-foss-2023a
    +   ELPA/2022.05.001-foss-2022b    foss/2022b                       foss/2023b (D)        OpenFOAM/11-foss-2023a
    +...
    +
    + +
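Modules are then loaded in the usual Lmod way. A minimal sketch using one of the toolchain modules listed above:

module load foss/2023a    # toolchain module shown by 'module avail'
mpicc --version           # compilers and MPI from the EESSI stack are now in the environment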

    For more precise information, please refer to the official documentation.

    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/index.html b/software/index.html new file mode 100644 index 00000000..d6cb3cfa --- /dev/null +++ b/software/index.html @@ -0,0 +1,5892 @@ + + + + + + + + + + + + + + + + + + + + + + + + List of all softwares - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

List of all software

Software | Versions | Swsets | Architectures | Clusters | Category | Description
ABAQUS | 2018, 2021 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | CFD/Finite element modelling | Finite Element Analysis software for modeling, visualization and best-in-class implicit and explicit dynamics FEA.
ABINIT | 9.4.1 | 2020b | epyc | aion | Chemistry | ABINIT is a package whose main program allows one to find the total energy, charge density and electronic structure of systems made of electrons and nuclei (molecules and periodic solids) within Density Functional Theory (DFT), using pseudopotentials and a planewave or wavelet basis.
ABySS | 2.2.5 | 2020b | broadwell, epyc, skylake | aion, iris | Biology | Assembly By Short Sequences - a de novo, parallel, paired-end sequence assembler
ACTC | 1.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Libraries | ACTC converts independent triangles into triangle strips or fans.
ANSYS | 19.4, 21.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | ANSYS simulation software enables organizations to confidently predict how their products will operate in the real world. We believe that every product is a promise of something greater.
AOCC | 3.1.0 | 2020b | epyc | aion | Compilers | AMD Optimized C/C++ & Fortran compilers (AOCC) based on LLVM 12.0
ASE | 3.19.0, 3.20.1, 3.21.1 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Chemistry | ASE is a python package providing an open source Atomic Simulation Environment in the Python scripting language. From version 3.20.1 we also include the ase-ext package, it contains optional reimplementations in C of functions in ASE. ASE uses it automatically when installed.
ATK | 2.34.1, 2.36.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | ATK provides the set of accessibility interfaces that are implemented by other toolkits and applications. Using the ATK interfaces, accessibility tools have full access to view and control running applications.
Advisor | 2019_update5 | 2019b | broadwell, skylake | iris | Performance measurements | Vectorization Optimization and Thread Prototyping - Vectorize & thread code or performance “dies” - Easy workflow + data + tips = faster code faster - Prioritize, Prototype & Predict performance gain
Anaconda3 | 2020.02, 2020.11 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Programming Languages | Built to complement the rich, open source Python community, the Anaconda platform provides an enterprise-ready data analytics platform that empowers companies to adopt a modern open data science analytics architecture.
ArmForge | 20.0.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | The industry standard development package for C, C++ and Fortran high performance code on Linux. Forge is designed to handle the complex software projects - including parallel, multiprocess and multithreaded code. Arm Forge combines an industry-leading debugger, Arm DDT, and an out-of-the-box-ready profiler, Arm MAP.
ArmReports | 20.0.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | Arm Performance Reports - a low-overhead tool that produces one-page text and HTML reports summarizing and characterizing both scalar and MPI application performance. Arm Performance Reports runs transparently on optimized production-ready codes by adding a single command to your scripts, and provides the most effective way to characterize and understand the performance of HPC application runs.
Armadillo | 10.5.3, 9.900.1 | 2020b, 2019b | broadwell, epyc, skylake | aion, iris | Numerical libraries | Armadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions.
Arrow | 0.16.0 | 2019b | broadwell, skylake | iris | Data processing | Apache Arrow (incl. PyArrow Python bindings), a cross-language development platform for in-memory data.
Aspera-CLI | 3.9.1, 3.9.6 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | IBM Aspera Command-Line Interface (the Aspera CLI) is a collection of Aspera tools for performing high-speed, secure data transfers from the command line. The Aspera CLI is for users and organizations who want to automate their transfer workflows.
Autoconf | 2.69 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | Autoconf is an extensible package of M4 macros that produce shell scripts to automatically configure software source code packages. These scripts can adapt the packages to many kinds of UNIX-like systems without manual user intervention. Autoconf creates a configuration script for a package from a template file that lists the operating system features that the package can use, in the form of M4 macro calls.
Automake | 1.16.1, 1.16.2 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | Automake: GNU Standards-compliant Makefile generator
Autotools | 20180311, 20200321 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | This bundle collect the standard GNU build tools: Autoconf, Automake and libtool
BEDTools | 2.29.2, 2.30.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | BEDTools: a powerful toolset for genome arithmetic. The BEDTools utilities allow one to address common genomics tasks such as finding feature overlaps and computing coverage. The utilities are largely based on four widely-used file formats: BED, GFF/GTF, VCF, and SAM/BAM.
BLAST+ | 2.11.0, 2.9.0 | 2020b, 2019b | broadwell, epyc, skylake | aion, iris | Biology | Basic Local Alignment Search Tool, or BLAST, is an algorithm for comparing primary biological sequence information, such as the amino-acid sequences of different proteins or the nucleotides of DNA sequences.
BWA | 0.7.17 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | Burrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome.
BamTools | 2.5.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | BamTools provides both a programmer's API and an end-user's toolkit for handling BAM files.
Bazel | 0.26.1, 0.29.1, 3.7.2 | 2019b, 2020b | gpu, broadwell, skylake, epyc | iris, aion | Development | Bazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software.
BioPerl | 1.7.2, 1.7.8 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | Bioperl is the product of a community effort to produce Perl code which is useful in biology. Examples include Sequence objects, Alignment objects and database searching objects.
Bison | 3.3.2, 3.5.3, 3.7.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Programming Languages | Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.
Boost.Python | 1.74.0 | 2020b | broadwell, epyc, skylake | aion, iris | Libraries | Boost.Python is a C++ library which enables seamless interoperability between C++ and the Python programming language.
Boost | 1.71.0, 1.74.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Development | Boost provides free peer-reviewed portable C++ source libraries.
Bowtie2 | 2.3.5.1, 2.4.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | Bowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes. Bowtie 2 indexes the genome with an FM Index to keep its memory footprint small: for the human genome, its memory footprint is typically around 3.2 GB. Bowtie 2 supports gapped, local, and paired-end alignment modes.
CGAL | 4.14.1, 5.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Numerical libraries | The goal of the CGAL Open Source Project is to provide easy access to efficient and reliable geometric algorithms in the form of a C++ library.
CMake | 3.15.3, 3.18.4, 3.20.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | CMake, the cross-platform, open-source build system. CMake is a family of tools designed to build, test and package software.
CPLEX | 12.10 | 2019b | broadwell, skylake | iris | Mathematics | IBM ILOG CPLEX Optimizer's mathematical programming technology enables analytical decision support for improving efficiency, reducing costs, and increasing profitability.
CRYSTAL | 17 | 2019b | broadwell, skylake | iris | Chemistry | The CRYSTAL package performs ab initio calculations of the ground state energy, energy gradient, electronic wave function and properties of periodic systems. Hartree-Fock or Kohn-Sham Hamiltonians (that adopt an Exchange-Correlation potential following the postulates of Density-Functional Theory) can be used.
CUDA | 10.1.243, 11.1.1 | 2019b, 2020b | gpu | iris | System-level software | CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.
CUDAcore | 11.1.1 | 2020b | gpu | iris | System-level software | CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.
Check | 0.15.2 | 2020b | gpu | iris | Libraries | Check is a unit testing framework for C. It features a simple interface for defining unit tests, putting little in the way of the developer. Tests are run in a separate address space, so both assertion failures and code errors that cause segmentation faults or other signals can be caught. Test results are reportable in the following: Subunit, TAP, XML, and a generic logging format.
Clang | 11.0.1, 9.0.1 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Compilers | C, C++, Objective-C compiler, based on LLVM. Does not include C++ standard library -- use libstdc++ from GCC.
CubeGUI | 4.4.4 | 2019b | broadwell, skylake | iris | Performance measurements | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube graphical report explorer.
CubeLib | 4.4.4 | 2019b | broadwell, skylake | iris | Performance measurements | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube general purpose C++ library component and command-line tools.
CubeWriter | 4.4.3 | 2019b | broadwell, skylake | iris | Performance measurements | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube high-performance C writer library component.
DB | 18.1.32, 18.1.40 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Utilities | Berkeley DB enables the development of custom data management solutions, without the overhead traditionally associated with such custom projects.
DB_File | 1.855 | 2020b | broadwell, epyc, skylake | aion, iris | Data processing | Perl5 access to Berkeley DB version 1.x.
DBus | 1.13.12, 1.13.18 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Development | D-Bus is a message bus system, a simple way for applications to talk to one another. In addition to interprocess communication, D-Bus helps coordinate process lifecycle; it makes it simple and reliable to code a "single instance" application or daemon, and to launch applications and daemons on demand when their services are needed.
DMTCP | 2.5.2 | 2019b | broadwell, skylake | iris | Utilities | DMTCP is a tool to transparently checkpoint the state of multiple simultaneous applications, including multi-threaded and distributed applications. It operates directly on the user binary executable, without any Linux kernel modules or other kernel modifications.
Dakota | 6.11.0, 6.15.0 | 2019b, 2020b | broadwell, skylake | iris | Mathematics | The Dakota project delivers both state-of-the-art research and robust, usable software for optimization and UQ. Broadly, the Dakota software's advanced parametric analyses enable design exploration, model calibration, risk analysis, and quantification of margins and uncertainty with computational models.
Doxygen | 1.8.16, 1.8.20 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | Doxygen is a documentation system for C++, C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors), Fortran, VHDL, PHP, C#, and to some extent D.
ELPA | 2019.11.001, 2020.11.001 | 2019b, 2020b | broadwell, epyc, skylake | iris, aion | Mathematics | Eigenvalue SoLvers for Petaflop-Applications.
EasyBuild | 4.3.0, 4.3.3, 4.4.1, 4.4.2, 4.5.4 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | EasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way.
Eigen | 3.3.7, 3.3.8, 3.4.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Mathematics | Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
Elk | 6.3.2, 7.0.12 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Physics | An all-electron full-potential linearised augmented-plane wave (FP-LAPW) code with many advanced features. Written originally at Karl-Franzens-Universität Graz as a milestone of the EXCITING EU Research and Training Network, the code is designed to be as simple as possible so that new developments in the field of density functional theory (DFT) can be added quickly and reliably.
FDS | 6.7.1, 6.7.6 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Physics | Fire Dynamics Simulator (FDS) is a large-eddy simulation (LES) code for low-speed flows, with an emphasis on smoke and heat transport from fires.
FFTW | 3.3.8 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Numerical libraries | FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.
FFmpeg | 4.2.1, 4.3.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | A complete, cross-platform solution to record, convert and stream audio and video.
FLAC | 1.3.3 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | FLAC stands for Free Lossless Audio Codec, an audio format similar to MP3, but lossless, meaning that audio is compressed in FLAC without any loss in quality.
FLTK | 1.3.5 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | FLTK is a cross-platform C++ GUI toolkit for UNIX/Linux (X11), Microsoft Windows, and MacOS X. FLTK provides modern GUI functionality without the bloat and supports 3D graphics via OpenGL and its built-in GLUT emulation.
FastQC | 0.11.9 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | FastQC is a quality control application for high throughput sequence data. It reads in sequence data in a variety of formats and can either provide an interactive application to review the results of several different QC checks, or create an HTML based report which can be integrated into a pipeline.
Flask | 1.1.2 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | Flask is a lightweight WSGI web application framework. It is designed to make getting started quick and easy, with the ability to scale up to complex applications. This module includes the Flask extensions: Flask-Cors
Flink | 1.11.2 | 2020b | broadwell, epyc, skylake | aion, iris | Development | Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments, perform computations at in-memory speed and at any scale.
FreeImage | 3.18.0 | 2020b | broadwell, epyc, skylake | aion, iris | Visualisation | FreeImage is an Open Source library project for developers who would like to support popular graphics image formats like PNG, BMP, JPEG, TIFF and others as needed by today's multimedia applications. FreeImage is easy to use, fast, multithreading safe.
FriBidi | 1.0.10, 1.0.5 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Programming Languages | The Free Implementation of the Unicode Bidirectional Algorithm.
GCC | 10.2.0, 8.3.0 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Compilers | The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
GCCcore | 10.2.0, 8.3.0 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Compilers | The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
GDAL | 3.0.2, 3.2.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Data processing | GDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing.
GDB | 10.1, 9.1 | 2020b, 2019b | broadwell, epyc, skylake | aion, iris | Debugging | The GNU Project Debugger
GDRCopy | 2.1 | 2020b | gpu | iris | Libraries | A low-latency GPU memory copy library based on NVIDIA GPUDirect RDMA technology.
GEOS | 3.8.0, 3.9.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Mathematics | GEOS (Geometry Engine - Open Source) is a C++ port of the Java Topology Suite (JTS)
GLPK | 4.65 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Utilities | The GLPK (GNU Linear Programming Kit) package is intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library.
GLib | 2.62.0, 2.66.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | GLib is one of the base libraries of the GTK+ project
GMP | 6.1.2, 6.2.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Mathematics | GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers.
GObject-Introspection | 1.63.1, 1.66.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Development | GObject introspection is a middleware layer between C libraries (using GObject) and language bindings. The C library can be scanned at compile time and generate a metadata file, in addition to the actual native C library. Then at runtime, language bindings can read this metadata and automatically provide bindings to call into the C library.
GPAW-setups | 0.9.20000 | 2019b | broadwell, skylake | iris | Chemistry | PAW setup for the GPAW Density Functional Theory package. Users can install setups manually using 'gpaw install-data' or use setups from this package. The versions of GPAW and GPAW-setups can be intermixed.
GPAW | 20.1.0 | 2019b | broadwell, skylake | iris | Chemistry | GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). It uses real-space uniform grids and multigrid methods or atom-centered basis-functions.
GROMACS | 2019.4, 2019.6, 2020, 2021, 2021.2 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Biology | GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a CPU only build, containing both MPI and threadMPI builds for both single and double precision. It also contains the gmxapi extension for the single precision MPI build.
GSL | 2.6 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Numerical libraries | The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting.
GTK+ | 3.24.13, 3.24.23 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | GTK+ is the primary library used to construct user interfaces in GNOME. It provides all the user interface controls, or widgets, used in a common graphical application. Its object-oriented API allows you to construct user interfaces without dealing with the low-level details of drawing and device interaction.
Gdk-Pixbuf | 2.38.2, 2.40.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | The Gdk Pixbuf is a toolkit for image loading and pixel buffer manipulation. It is used by GTK+ 2 and GTK+ 3 to load and manipulate images. In the past it was distributed as part of GTK+ 2 but it was split off into a separate package in preparation for the change to GTK+ 3.
    Ghostscript9.50, 9.53.32019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesGhostscript is a versatile processor for PostScript data with the ability to render PostScript to different targets. It used to be part of the cups printing stack, but is no longer used for that.
    Go1.14.1, 1.16.62019b, 2020bbroadwell, skylake, epyciris, aionCompilersGo is an open source programming language that makes it easy to build simple, reliable, and efficient software.
    Guile1.8.8, 2.2.42019bbroadwell, skylakeirisProgramming LanguagesGuile is a programming language, designed to help programmers create flexible applications that can be extended by users or other programmers with plug-ins, modules, or scripts.
    Gurobi9.0.0, 9.1.22019b, 2020bbroadwell, skylake, epyciris, aionMathematicsThe Gurobi Optimizer is a state-of-the-art solver for mathematical programming. The solvers in the Gurobi Optimizer were designed from the ground up to exploit modern architectures and multi-core processors, using the most advanced implementations of the latest algorithms.
    HDF51.10.5, 1.10.72019b, 2020bbroadwell, skylake, gpu, epyciris, aionData processingHDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.
    HDF4.2.152020bbroadwell, epyc, skylake, gpuaion, irisData processingHDF (also known as HDF4) is a library and multi-object file format for storing and managing data between machines.
    HTSlib1.10.2, 1.122019b, 2020bbroadwell, skylake, epyciris, aionBiologyA C library for reading/writing high-throughput sequencing data. This package includes the utilities bgzip and tabix
    Hadoop2.10.02020bbroadwell, epyc, skylakeaion, irisUtilitiesHadoop MapReduce by Cloudera
    HarfBuzz2.6.4, 2.6.72019b, 2020bbroadwell, skylake, epyciris, aionVisualisationHarfBuzz is an OpenType text shaping engine.
    Harminv1.4.12019bbroadwell, skylakeirisMathematicsHarminv is a free program (and accompanying library) to solve the problem of harmonic inversion - given a discrete-time, finite-length signal that consists of a sum of finitely-many sinusoids (possibly exponentially decaying) in a given bandwidth, it determines the frequencies, decay constants, amplitudes, and phases of those sinusoids.
    Horovod0.19.1, 0.22.02019b, 2020bbroadwell, skylake, gpuirisUtilitiesHorovod is a distributed training framework for TensorFlow.
    Hypre2.20.02020bbroadwell, epyc, skylakeaion, irisNumerical librariesHypre is a library for solving large, sparse linear systems of equations on massively parallel computers. The problems of interest arise in the simulation codes being developed at LLNL and elsewhere to study physical phenomena in the defense, environmental, energy, and biological sciences.
    ICU64.2, 67.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesICU is a mature, widely used set of C/C++ and Java libraries providing Unicode and Globalization support for software applications.
    ISL0.232020bbroadwell, epyc, skylakeaion, irisMathematicsisl is a library for manipulating sets and relations of integer points bounded by linear constraints.
    ImageMagick7.0.10-35, 7.0.9-52020b, 2019bbroadwell, epyc, skylake, gpuaion, irisVisualisationImageMagick is a software suite to create, edit, compose, or convert bitmap images
    Inspector2019_update52019bbroadwell, skylakeirisUtilitiesIntel Inspector XE is an easy to use memory error checker and thread checker for serial and parallel applications
    JasPer2.0.14, 2.0.242019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationThe JasPer Project is an open-source initiative to provide a free software-based reference implementation of the codec specified in the JPEG-2000 Part-1 standard.
    Java1.8.0_241, 11.0.2, 13.0.2, 16.0.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionProgramming LanguagesJava Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers.
    Jellyfish2.3.02019bbroadwell, skylakeirisBiologyJellyfish is a tool for fast, memory-efficient counting of k-mers in DNA.
    JsonCpp1.9.3, 1.9.42019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesJsonCpp is a C++ library that allows manipulating JSON values, including serialization and deserialization to and from strings. It can also preserve existing comments in deserialization/serialization steps, making it a convenient format for storing user input files.
    Julia1.4.1, 1.6.22019b, 2020bbroadwell, skylake, epyciris, aionProgramming LanguagesJulia is a high-level, high-performance dynamic programming language for numerical computing
    Keras2.3.1, 2.4.32019b, 2020bgpu, broadwell, epyc, skylakeiris, aionMathematicsKeras is a deep learning API written in Python, running on top of the machine learning platform TensorFlow.
    LAME3.1002019b, 2020bbroadwell, skylake, gpu, epyciris, aionData processingLAME is a high quality MPEG Audio Layer III (MP3) encoder licensed under the LGPL.
    LLVM10.0.1, 11.0.0, 9.0.0, 9.0.12020b, 2019bbroadwell, epyc, skylake, gpuaion, irisCompilersThe LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified code representation known as the LLVM intermediate representation ("LLVM IR"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator.
    LMDB0.9.242019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesLMDB is a fast, memory-efficient database. With memory-mapped files, it has the read performance of a pure in-memory database while retaining the persistence of standard disk-based databases.
    LibTIFF4.0.10, 4.1.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariestiff: Library and tools for reading and writing TIFF data files
    LittleCMS2.11, 2.92020b, 2019bbroadwell, epyc, skylake, gpuaion, irisVisualisationLittle CMS intends to be an OPEN SOURCE small-footprint color management engine, with special focus on accuracy and performance.
    Lua5.1.5, 5.4.22019b, 2020bbroadwell, skylake, epyciris, aionProgramming LanguagesLua is a powerful, fast, lightweight, embeddable scripting language. Lua combines simple procedural syntax with powerful data description constructs based on associative arrays and extensible semantics. Lua is dynamically typed, runs by interpreting bytecode for a register-based virtual machine, and has automatic memory management with incremental garbage collection, making it ideal for configuration, scripting, and rapid prototyping.
    M41.4.182019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentGNU M4 is an implementation of the traditional Unix macro processor. It is mostly SVR4 compatible although it has some extensions (for example, handling more than 9 positional parameters to macros). GNU M4 also has built-in functions for including files, running shell commands, doing arithmetic, etc.
    MATLAB2019b, 2020a, 2021a2019b, 2020bbroadwell, skylake, epyciris, aionMathematicsMATLAB is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran.
    METIS5.1.02019b, 2020bbroadwell, skylake, epyciris, aionMathematicsMETIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.
    MPC1.2.12020bbroadwell, epyc, skylakeaion, irisMathematicsGnu Mpc is a C library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result. It extends the principles of the IEEE-754 standard for fixed precision real floating point numbers to complex numbers, providing well-defined semantics for every operation. At the same time, speed of operation at high precision is a major design goal.
    MPFR4.0.2, 4.1.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionMathematicsThe MPFR library is a C library for multiple-precision floating-point computations with correct rounding.
    MUMPS5.3.52020bbroadwell, epyc, skylakeaion, irisMathematicsA parallel sparse direct solver
    Mako1.1.0, 1.1.32019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentA super-fast templating language that borrows the best ideas from the existing templating languages
    Mathematica12.0.0, 12.1.02019b, 2020bbroadwell, skylake, epyciris, aionMathematicsMathematica is a computational software program used in many scientific, engineering, mathematical and computing fields.
    Maven3.6.32019b, 2020bbroadwell, skylake, epyciris, aionDevelopmentBinary Maven install. Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information.
    Meep1.4.32019bbroadwell, skylakeirisPhysicsMeep (or MEEP) is a free finite-difference time-domain (FDTD) simulation software package developed at MIT to model electromagnetic systems.
    Mesa19.1.7, 19.2.1, 20.2.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationMesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.
    Meson0.51.2, 0.55.32019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesMeson is a cross-platform build system designed to be both as fast and as user friendly as possible.
    Mesquite2.3.02019bbroadwell, skylakeirisMathematicsMesh-Quality Improvement Library
    NAMD2.132019bbroadwell, skylakeirisChemistryNAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.
    NASM2.14.02, 2.15.052019b, 2020bbroadwell, skylake, gpu, epyciris, aionProgramming LanguagesNASM: General-purpose x86 assembler
    NCCL2.4.8, 2.8.32019b, 2020bgpuirisLibrariesThe NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs.
    NLopt2.6.1, 2.6.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionNumerical librariesNLopt is a free/open-source library for nonlinear optimization, providing a common interface for a number of different free optimization routines available online as well as original implementations of various other algorithms.
    NSPR4.21, 4.292019b, 2020bbroadwell, skylake, epyciris, aionLibrariesNetscape Portable Runtime (NSPR) provides a platform-neutral API for system level and libc-like functions.
    NSS3.45, 3.572019b, 2020bbroadwell, skylake, epyciris, aionLibrariesNetwork Security Services (NSS) is a set of libraries designed to support cross-platform development of security-enabled client and server applications.
    Ninja1.10.1, 1.9.02020b, 2019bbroadwell, epyc, skylake, gpuaion, irisUtilitiesNinja is a small build system with a focus on speed.
    OPARI22.0.52019bbroadwell, skylakeirisPerformance measurementsOPARI2, the successor of Forschungszentrum Juelich's OPARI, is a source-to-source instrumentation tool for OpenMP and hybrid codes. It surrounds OpenMP directives and runtime library calls with calls to the POMP2 measurement interface.
    OTF22.22019bbroadwell, skylakeirisPerformance measurementsThe Open Trace Format 2 is a highly scalable, memory efficient event trace data format plus support library. It is the new standard trace format for Scalasca, Vampir, and TAU and is open for other tools.
    OpenBLAS0.3.12, 0.3.72020b, 2019bbroadwell, epyc, skylake, gpuaion, irisNumerical librariesOpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
    OpenCV4.2.0, 4.5.12019b, 2020bbroadwell, skylake, epyciris, aionVisualisationOpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Includes extra modules for OpenCV from the contrib repository.
    OpenEXR2.5.52020bbroadwell, epyc, skylakeaion, irisVisualisationOpenEXR is a high dynamic-range (HDR) image file format developed by Industrial Light & Magic for use in computer imaging applications
    OpenFOAM-Extend4.1-202004082019bbroadwell, skylakeirisCFD/Finite element modellingOpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
    OpenFOAM8, v19122020b, 2019bepyc, broadwell, skylakeaion, irisCFD/Finite element modellingOpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
    OpenMPI3.1.4, 4.0.52019b, 2020bbroadwell, skylake, gpu, epyciris, aionMPIThe Open MPI Project is an open source MPI-3 implementation.
    PAPI6.0.02019b, 2020bbroadwell, skylake, epyciris, aionPerformance measurementsPAPI provides the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. In addition, Component PAPI provides access to a collection of components that expose performance measurement opportunities across the hardware and software stack.
    PCRE210.33, 10.352019b, 2020bbroadwell, skylake, epyc, gpuiris, aionDevelopmentThe PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
    PCRE8.43, 8.442019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentThe PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
    PDT3.252019bbroadwell, skylakeirisPerformance measurementsProgram Database Toolkit (PDT) is a framework for analyzing source code written in several programming languages and for making rich program knowledge accessible to developers of static and dynamic analysis tools. PDT implements a standard program representation, the program database (PDB), that can be accessed in a uniform way through a class library supporting common PDB operations.
    PETSc3.14.42020bbroadwell, epyc, skylakeaion, irisNumerical librariesPETSc, pronounced PET-see (the S is silent), is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.
    PGI19.102019bbroadwell, skylakeirisCompilersC, C++ and Fortran compilers from The Portland Group - PGI
    PLUMED2.5.3, 2.7.02019b, 2020bbroadwell, skylake, epyciris, aionChemistryPLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state of the art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both fortran and C/C++ codes.
    POV-Ray3.7.0.82020bbroadwell, epyc, skylakeaion, irisVisualisationThe Persistence of Vision Raytracer, or POV-Ray, is a ray tracing program which generates images from a text-based scene description, and is available for a variety of computer platforms. POV-Ray is a high-quality, Free Software tool for creating stunning three-dimensional graphics. The source code is available for those wanting to do their own ports.
    PROJ6.2.1, 7.2.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesProgram proj is a standard Unix filter function which converts geographic longitude and latitude coordinates into cartesian coordinates
    Pango1.44.7, 1.47.02019b, 2020bbroadwell, skylake, epyciris, aionVisualisationPango is a library for laying out and rendering of text, with an emphasis on internationalization. Pango can be used anywhere that text layout is needed, though most of the work on Pango so far has been done in the context of the GTK+ widget toolkit. Pango forms the core of text and font handling for GTK+-2.x.
    ParMETIS4.0.32019bbroadwell, skylakeirisMathematicsParMETIS is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs, meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes.
    ParMGridGen1.02019bbroadwell, skylakeirisMathematicsParMGridGen is an MPI-based parallel library that is based on the serial package MGridGen, that implements (serial) algorithms for obtaining a sequence of successive coarse grids that are well-suited for geometric multigrid methods.
    ParaView5.6.2, 5.8.12019b, 2020bbroadwell, skylake, epyciris, aionVisualisationParaView is a scientific parallel visualizer.
    Perl5.30.0, 5.32.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionProgramming LanguagesLarry Wall's Practical Extraction and Report Language. This is a minimal build without any modules and should only be used for build dependencies.
    Pillow6.2.1, 8.0.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationPillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.
    PyOpenGL3.1.52020bbroadwell, epyc, skylakeaion, irisVisualisationPyOpenGL is the most common cross platform Python binding to OpenGL and related APIs.
    PyQt55.15.12020bbroadwell, epyc, skylakeaion, irisVisualisationPyQt5 is a set of Python bindings for v5 of the Qt application framework from The Qt Company. This bundle includes PyQtWebEngine, a set of Python bindings for The Qt Company’s Qt WebEngine framework.
    PyQtGraph0.11.12020bbroadwell, epyc, skylakeaion, irisVisualisationPyQtGraph is a pure-python graphics and GUI library built on PyQt5/PySide2 and numpy.
    PyTorch-Geometric1.6.32020bbroadwell, epyc, skylake, gpuaion, irisLibrariesPyTorch Geometric (PyG) is a geometric deep learning extension library for PyTorch.
    PyTorch1.4.0, 1.7.1, 1.8.1, 1.9.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentTensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.
    PyYAML5.1.2, 5.3.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesPyYAML is a YAML parser and emitter for the Python programming language.
    Python2.7.16, 2.7.18, 3.7.4, 3.8.62019b, 2020bbroadwell, skylake, gpu, epyciris, aionProgramming LanguagesPython is a programming language that lets you work more quickly and integrate your systems more effectively.
    Qt55.13.1, 5.14.22019b, 2020bbroadwell, skylake, epyciris, aionDevelopmentQt is a comprehensive cross-platform C++ application framework.
    QuantumESPRESSO6.72019b, 2020bbroadwell, epyc, skylakeiris, aionChemistryQuantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).
    RDFlib5.0.02020bbroadwell, epyc, skylake, gpuaion, irisLibrariesRDFLib is a Python library for working with RDF, a simple yet powerful language for representing information.
    R3.6.2, 4.0.52019b, 2020bbroadwell, skylake, gpu, epyciris, aionProgramming LanguagesR is a free software environment for statistical computing and graphics.
    ReFrame2.21, 3.6.32019b, 2020bbroadwell, skylake, epyciris, aionDevelopmentReFrame is a framework for writing regression tests for HPC systems.
    Ruby2.7.1, 2.7.22019b, 2020bbroadwell, skylake, epyciris, aionProgramming LanguagesRuby is a dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write.
    Rust1.37.02019bbroadwell, skylakeirisProgramming LanguagesRust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.
    SAMtools1.10, 1.122019b, 2020bbroadwell, skylake, epyciris, aionBiologySAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
    SCOTCH6.0.9, 6.1.02019b, 2020bbroadwell, skylake, epyciris, aionMathematicsSoftware package and libraries for sequential and parallel graph partitioning, static mapping, and sparse matrix block ordering, and sequential mesh and hypergraph partitioning.
    SDL22.0.142020bbroadwell, epyc, skylakeaion, irisLibrariesSDL: Simple DirectMedia Layer, a cross-platform multimedia library
    SIONlib1.7.62019bbroadwell, skylakeirisLibrariesSIONlib is a scalable I/O library for parallel access to task-local files. The library not only supports writing and reading binary data to or from several thousands of processors into a single or a small number of physical files, but also provides global open and close functions to access SIONlib files in parallel. This package provides a stripped-down installation of SIONlib for use with performance tools (e.g., Score-P), with renamed symbols to avoid conflicts when an application using SIONlib itself is linked against a tool requiring a different SIONlib version.
    SLEPc3.14.22020bbroadwell, epyc, skylakeaion, irisNumerical librariesSLEPc (Scalable Library for Eigenvalue Problem Computations) is a software library for the solution of large scale sparse eigenvalue problems on parallel computers. It is an extension of PETSc and can be used for either standard or generalized eigenproblems, with real or complex arithmetic. It can also be used for computing a partial SVD of a large, sparse, rectangular matrix, and to solve quadratic eigenvalue problems.
    SQLite3.29.0, 3.33.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentSQLite: SQL Database Engine in a C Library
    SWIG4.0.1, 4.0.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentSWIG is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages.
    Salmon1.1.02019bbroadwell, skylakeirisBiologySalmon is a wicked-fast program to produce highly accurate transcript-level quantification estimates from RNA-seq data.
    Salome8.5.0, 9.8.02019b, 2020bbroadwell, skylake, epyciris, aionCFD/Finite element modellingThe SALOME platform is an open source software framework for pre- and post-processing and integration of numerical solvers from various scientific fields. CEA and EDF use SALOME to perform a large number of simulations, typically related to power plant equipment and alternative energy. To address these challenges, SALOME includes a CAD/CAE modelling tool, mesh generators, an advanced 3D visualization tool, etc.
    ScaLAPACK2.0.2, 2.1.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionNumerical librariesThe ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers.
    Scalasca2.52019bbroadwell, skylakeirisPerformance measurementsScalasca is a software tool that supports the performance optimization of parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks -- in particular those concerning communication and synchronization -- and offers guidance in exploring their causes.
    SciPy-bundle2019.10, 2020.112019b, 2020bbroadwell, skylake, gpu, epyciris, aionProgramming LanguagesBundle of Python packages for scientific software
    Score-P6.02019bbroadwell, skylakeirisPerformance measurementsThe Score-P measurement infrastructure is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications.
    Singularity3.6.0, 3.8.12019b, 2020bbroadwell, skylake, epyciris, aionUtilitiesSingularityCE is an open source container platform designed to be simple, fast, and secure. Singularity is optimized for EPC and HPC workloads, allowing untrusted users to run untrusted containers in a trusted way.
    Spack0.12.12019b, 2020bbroadwell, skylake, epyciris, aionDevelopmentSpack is a package manager for supercomputers, Linux, and macOS. It makes installing scientific software easy. With Spack, you can build a package with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine.
    Spark2.4.32019bbroadwell, skylakeirisDevelopmentSpark is Hadoop MapReduce done in memory
    Stata172020bbroadwell, epyc, skylakeaion, irisMathematicsStata is a complete, integrated statistical software package that provides everything you need for data analysis, data management, and graphics.
    SuiteSparse5.8.12020bbroadwell, epyc, skylakeaion, irisNumerical librariesSuiteSparse is a collection of libraries for manipulating sparse matrices.
    Sumo1.3.12019bbroadwell, skylakeirisUtilitiesSumo is an open source, highly portable, microscopic and continuous traffic simulation package designed to handle large road networks.
    Szip2.1.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesSzip compression software, providing lossless compression of scientific data
    Tcl8.6.10, 8.6.92020b, 2019bbroadwell, epyc, skylake, gpuaion, irisProgramming LanguagesTcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more.
    TensorFlow1.15.5, 2.1.0, 2.4.1, 2.5.02019b, 2020bgpu, broadwell, skylake, epyciris, aionLibrariesAn open-source software library for Machine Intelligence
    Theano1.0.4, 1.1.22019b, 2020bgpu, broadwell, epyc, skylakeiris, aionMathematicsTheano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently.
    Tk8.6.10, 8.6.92020b, 2019bbroadwell, epyc, skylake, gpuaion, irisVisualisationTk is an open source, cross-platform widget toolkit that provides a library of basic elements for building a graphical user interface (GUI) in many different programming languages.
    Tkinter3.7.4, 3.8.62019b, 2020bbroadwell, skylake, epyc, gpuiris, aionProgramming LanguagesTkinter module, built with the Python buildsystem
    TopHat2.1.22019b, 2020bbroadwell, skylake, epyciris, aionBiologyTopHat is a fast splice junction mapper for RNA-Seq reads.
    Trinity2.10.02019bbroadwell, skylakeirisBiologyTrinity represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-Seq data. Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-Seq reads.
    UCX1.9.02020bbroadwell, epyc, skylake, gpuaion, irisLibrariesUnified Communication X (UCX) is an open-source, production-grade communication framework for data-centric and high-performance applications.
    UDUNITS2.2.262019b, 2020bbroadwell, skylake, gpu, epyciris, aionPhysicsUDUNITS supports conversion of unit specifications between formatted and binary forms, arithmetic manipulation of units, and conversion of values between compatible scales of measurement.
    ULHPC-bd2020b2020bbroadwell, epyc, skylakeaion, irisSystem-level softwareGeneric Module bundle for BigData Analytics software in use on the UL HPC Facility
    ULHPC-bio2019b, 2020b2019b, 2020bbroadwell, skylake, epyciris, aionSystem-level softwareGeneric Module bundle for Bioinformatics, biology and biomedical software in use on the UL HPC Facility, especially at LCSB
    ULHPC-cs2019b, 2020b2019b, 2020bbroadwell, skylake, epyciris, aionSystem-level softwareGeneric Module bundle for Computational science software in use on the UL HPC Facility, including: Computer Aided Engineering (incl. CFD); Chemistry, Computational Chemistry and Quantum Chemistry; Data management & processing tools; Earth Sciences; Quantum Computing; Physics and physical systems simulations.
    ULHPC-dl2019b, 2020b2019b, 2020bbroadwell, skylake, epyciris, aionSystem-level softwareGeneric Module bundle for the CPU version of AI / Deep Learning / Machine Learning software in use on the UL HPC Facility
    ULHPC-gpu2019b, 2020b2019b, 2020bgpuirisSystem-level softwareGeneric Module bundle for GPU accelerated User Software in use on the UL HPC Facility
    ULHPC-math2019b, 2020b2019b, 2020bbroadwell, skylake, epyciris, aionSystem-level softwareGeneric Module bundle for High-level mathematical software and Linear Algebra libraries in use on the UL HPC Facility
    ULHPC-toolchains2019b, 2020b2019b, 2020bbroadwell, skylake, epyciris, aionSystem-level softwareGeneric Module bundle that contains all the dependencies required to enable toolchains and building tools/programming language in use on the UL HPC Facility
    ULHPC-tools2019b, 2020b2019b, 2020bbroadwell, skylake, epyciris, aionSystem-level softwareMisc tools, incl. perf (performance tools) and tools (general purpose tools)
    UnZip6.02020bbroadwell, epyc, skylake, gpuaion, irisUtilitiesUnZip is an extraction utility for archives compressed in .zip format (also called "zipfiles"). Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own Zip program, our primary objectives have been portability and non-MSDOS functionality.
    VASP5.4.4, 6.2.12019b, 2020bbroadwell, skylake, epyc, gpuiris, aionPhysicsThe Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.
    VMD1.9.4a512020bbroadwell, epyc, skylakeaion, irisVisualisationVMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.
    VTK8.2.0, 9.0.12019b, 2020bbroadwell, skylake, epyciris, aionVisualisationThe Visualization Toolkit (VTK) is an open-source, freely available software system for 3D computer graphics, image processing and visualization. VTK consists of a C++ class library and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK supports a wide variety of visualization algorithms including: scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as: implicit modeling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation.
    VTune2019_update8, 2020_update32019b, 2020bbroadwell, skylake, epyciris, aionUtilitiesIntel VTune Amplifier XE is the premier performance profiler for C, C++, C#, Fortran, Assembly and Java.
    Valgrind3.15.0, 3.16.12019b, 2020bbroadwell, skylake, epyciris, aionDebuggingValgrind: Debugging and profiling tools
    VirtualGL2.6.22019bbroadwell, skylakeirisVisualisationVirtualGL is an open source toolkit that gives any Linux or Unix remote display software the ability to run OpenGL applications with full hardware acceleration.
    Voro++0.4.62019bbroadwell, skylakeirisMathematicsVoro++ is a software library for carrying out three-dimensional computations of the Voronoi tessellation. A distinguishing feature of the Voro++ library is that it carries out cell-based calculations, computing the Voronoi cell for each particle individually. It is particularly well-suited for applications that rely on cell-based statistics, where features of Voronoi cells (eg. volume, centroid, number of faces) can be used to analyze a system of particles.
    Wannier903.1.02020bbroadwell, epyc, skylakeaion, irisChemistryA tool for obtaining maximally-localised Wannier functions
    X1120190717, 202010082019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationThe X Window System (X11) is a windowing system for bitmap displays
    XML-LibXML2.0201, 2.02062019b, 2020bbroadwell, skylake, epyciris, aionData processingPerl binding for libxml2
    XZ5.2.4, 5.2.52019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesxz: XZ utilities
    Xerces-C++3.2.22019bbroadwell, skylakeirisLibrariesXerces-C++ is a validating XML parser written in a portable subset of C++. Xerces-C++ makes it easy to give your application the ability to read and write XML data. A shared library is provided for parsing, generating, manipulating, and validating XML documents using the DOM, SAX, and SAX2 APIs.
    Xvfb1.20.92020bbroadwell, epyc, skylake, gpuaion, irisVisualisationXvfb is an X server that can run on machines with no display hardware and no physical input devices. It emulates a dumb framebuffer using virtual memory.
    YACS0.1.82020bbroadwell, epyc, skylakeaion, irisLibrariesYACS was created as a lightweight library to define and manage system configurations, such as those commonly found in software designed for scientific experimentation. These "configurations" typically cover concepts like hyperparameters used in training a machine learning model or configurable model hyperparameters, such as the depth of a convolutional neural network.
    Yasm1.3.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionProgramming LanguagesYasm: Complete rewrite of the NASM assembler with BSD license
    Z34.8.102020bbroadwell, epyc, skylake, gpuaion, irisUtilitiesZ3 is a theorem prover from Microsoft Research.
    Zip3.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesZip is a compression and file packaging/archive utility. Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own UnZip, our primary objectives have been portability and other-than-MSDOS functionality
    ant1.10.6, 1.10.7, 1.10.92019b, 2020bbroadwell, skylake, epyciris, aionDevelopmentApache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications.
    archspec0.1.02019bbroadwell, skylakeirisUtilitiesA library for detecting, labeling, and reasoning about microarchitectures
    arpack-ng3.7.0, 3.8.02019b, 2020bbroadwell, skylake, epyciris, aionNumerical librariesARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems.
    at-spi2-atk2.34.1, 2.38.02019b, 2020bbroadwell, skylake, epyciris, aionVisualisationAT-SPI 2 toolkit bridge
    at-spi2-core2.34.0, 2.38.02019b, 2020bbroadwell, skylake, epyciris, aionVisualisationAssistive Technology Service Provider Interface.
    binutils2.32, 2.352019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesbinutils: GNU binary utilities
    bokeh2.2.32020bbroadwell, epyc, skylake, gpuaion, irisUtilitiesStatistical and novel interactive HTML plots for Python
    bzip21.0.82019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesbzip2 is a freely available, patent free, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), whilst being around twice as fast at compression and six times faster at decompression.
    cURL7.66.0, 7.72.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitieslibcurl is a free and easy-to-use client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. libcurl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, Kerberos), file transfer resume, http proxy tunneling and more.
    cairo1.16.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationCairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System (via both Xlib and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output. Experimental backends include OpenGL, BeOS, OS/2, and DirectFB
    cuDNN7.6.4.38, 8.0.4.30, 8.0.5.392019b, 2020bgpuirisNumerical librariesThe NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks.
    dask2021.2.02020bbroadwell, epyc, skylake, gpuaion, irisData processingDask natively scales Python. Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love.
    double-conversion3.1.4, 3.1.52019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesEfficient binary-decimal and decimal-binary conversion routines for IEEE doubles.
    elfutils0.1832020bgpuirisLibrariesThe elfutils project provides libraries and tools for ELF files and DWARF data.
    expat2.2.7, 2.2.92019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesExpat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags)
    flatbuffers-python1.122020bbroadwell, epyc, skylake, gpuaion, irisDevelopmentPython Flatbuffers runtime library.
    flatbuffers1.12.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentFlatBuffers: Memory Efficient Serialization Library
    flex2.6.42019b, 2020bbroadwell, skylake, gpu, epyciris, aionProgramming LanguagesFlex (Fast Lexical Analyzer) is a tool for generating scanners. A scanner, sometimes called a tokenizer, is a program which recognizes lexical patterns in text.
    fontconfig2.13.1, 2.13.922019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationFontconfig is a library designed to provide system-wide font configuration, customization and application access.
    foss2019b, 2020b2019b, 2020bbroadwell, skylake, epyciris, aionToolchains (software stacks)GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
    fosscuda2019b, 2020b2019b, 2020bgpuirisToolchains (software stacks)GCC based compiler toolchain with CUDA support, and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
    freetype2.10.1, 2.10.32019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationFreeType 2 is a software font engine that is designed to be small, efficient, highly customizable, and portable while capable of producing high-quality output (glyph images). It can be used in graphics libraries, display servers, font conversion tools, text image generation tools, and many other products as well.
    gc7.6.122019bbroadwell, skylakeirisLibrariesThe Boehm-Demers-Weiser conservative garbage collector can be used as a garbage collecting replacement for C malloc or C++ new.
    gcccuda2019b, 2020b2019b, 2020bgpuirisToolchains (software stacks)GNU Compiler Collection (GCC) based compiler toolchain, along with CUDA toolkit.
    gettext0.19.8.1, 0.20.1, 0.212019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesGNU 'gettext' is an important step for the GNU Translation Project, as it is an asset on which we may build many other steps. This package offers to programmers, translators, and even users, a well integrated set of tools and documentation
    gflags2.2.22019bbroadwell, skylakeirisDevelopmentThe gflags package contains a C++ library that implements commandline flags processing. It includes built-in support for standard types such as string and the ability to define flags in the source file in which they are used.
    giflib5.2.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesgiflib is a library for reading and writing gif images. It is API and ABI compatible with libungif which was in wide use while the LZW compression algorithm was patented.
    git2.23.0, 2.28.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesGit is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
    glog0.4.02019bbroadwell, skylakeirisDevelopmentA C++ implementation of the Google logging module.
    gmsh4.4.02019bbroadwell, skylakeirisCFD/Finite element modellingGmsh is a 3D finite element grid generator with a built-in CAD engine and post-processor.
    gmsh4.8.42020bbroadwell, epyc, skylakeaion, irisMathematicsGmsh is a 3D finite element grid generator with a built-in CAD engine and post-processor.
    gnuplot5.2.8, 5.4.12019b, 2020bbroadwell, skylake, epyciris, aionVisualisationPortable, interactive function plotting utility
    gocryptfs1.7.1, 2.0.12019b, 2020bbroadwell, skylake, epyciris, aionUtilitiesEncrypted overlay filesystem written in Go. gocryptfs uses file-based encryption that is implemented as a mountable FUSE filesystem. Each file in gocryptfs is stored as one corresponding encrypted file on the hard disk.
    gompi2019b, 2020b2019b, 2020bbroadwell, skylake, epyciris, aionToolchains (software stacks)GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.
    gompic2019b, 2020b2019b, 2020bgpuirisToolchains (software stacks)GNU Compiler Collection (GCC) based compiler toolchain along with CUDA toolkit, including OpenMPI for MPI support with CUDA features enabled.
    googletest1.10.02019bbroadwell, skylakeirisDevelopmentGoogle's framework for writing C++ tests on a variety of platforms
    gperf3.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentGNU gperf is a perfect hash function generator. For a given list of strings, it produces a hash function and hash table, in form of C or C++ code, for looking up a value depending on the input string. The hash function is perfect, which means that the hash table has no collisions, and the hash table lookup needs a single string comparison only.
    groff1.22.42020bbroadwell, epyc, skylake, gpuaion, irisUtilitiesGroff (GNU troff) is a typesetting system that reads plain text mixed with formatting commands and produces formatted output.
    gzip1.102019b, 2020bbroadwell, skylake, epyc, gpuiris, aionUtilitiesgzip (GNU zip) is a popular data compression program as a replacement for compress
    h5py2.10.0, 3.1.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionData processingHDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific software library designed for the fast, flexible storage of enormous amounts of data.
    help2man1.47.16, 1.47.4, 1.47.82020b, 2019bbroadwell, epyc, skylake, gpuaion, irisUtilitieshelp2man produces simple manual pages from the '--help' and '--version' output of other commands.
    hwloc1.11.12, 2.2.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionSystem-level softwareThe Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently.
    hypothesis4.44.2, 5.41.2, 5.41.52019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesHypothesis is an advanced testing library for Python. It lets you write tests which are parametrized by a source of examples, and then generates simple and comprehensible examples that make your tests fail. This lets you find more bugs in your code with less work.
    iccifort2019.5.281, 2020.4.3042019b, 2020bbroadwell, skylake, gpu, epyciris, aionCompilersIntel C, C++ & Fortran compilers
    iccifortcuda2019b, 2020b2019b, 2020bgpuirisToolchains (software stacks)Intel C, C++ & Fortran compilers with CUDA toolkit
    iimpi2019b, 2020b2019b, 2020bbroadwell, skylake, epyc, gpuiris, aionToolchains (software stacks)Intel C/C++ and Fortran compilers, alongside Intel MPI.
    iimpic2019b, 2020b2019b, 2020bgpuirisToolchains (software stacks)Intel C/C++ and Fortran compilers, alongside Intel MPI and CUDA.
    imkl2019.5.281, 2020.4.3042019b, 2020bbroadwell, skylake, gpu, epyciris, aionNumerical librariesIntel Math Kernel Library is a library of highly optimized, extensively threaded math routines for science, engineering, and financial applications that require maximum performance. Core math functions include BLAS, LAPACK, ScaLAPACK, Sparse Solvers, Fast Fourier Transforms, Vector Math, and more.
    impi2018.5.288, 2019.9.3042019b, 2020bbroadwell, skylake, gpu, epyciris, aionMPIIntel MPI Library, compatible with MPICH ABI
    intel2019b, 2020b2019b, 2020bbroadwell, skylake, epyc, gpuiris, aionToolchains (software stacks)Compiler toolchain including Intel compilers, Intel MPI and Intel Math Kernel Library (MKL).
    intelcuda2019b, 2020b2019b, 2020bgpuirisToolchains (software stacks)Intel Cluster Toolkit Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MPI & Intel MKL, with CUDA toolkit
    intltool0.51.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentintltool is a set of tools to centralize translation of many different file formats using GNU gettext-compatible PO files.
    itac2019.4.0362019bbroadwell, skylakeirisUtilitiesThe Intel Trace Collector is a low-overhead tracing library that performs event-based tracing in applications. The Intel Trace Analyzer provides a convenient way to monitor application activities gathered by the Intel Trace Collector through graphical displays.
    jemalloc5.2.12019bbroadwell, skylakeirisLibrariesjemalloc is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
    kallisto0.46.12019bbroadwell, skylakeirisBiologykallisto is a program for quantifying abundances of transcripts from RNA-Seq data, or more generally of target sequences using high-throughput sequencing reads.
    kim-api2.1.32019bbroadwell, skylakeirisChemistryOpen Knowledgebase of Interatomic Models. KIM is an API and OpenKIM is a collection of interatomic models (potentials) for atomistic simulations. This is a library that can be used by simulation programs to get access to the models in the OpenKIM database. This EasyBuild only installs the API, the models can be installed with the package openkim-models, or the user can install them manually by running kim-api-collections-management install user MODELNAME or kim-api-collections-management install user OpenKIM to install them all.
    libGLU9.0.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationThe OpenGL Utility Library (GLU) is a computer graphics library for OpenGL.
    libarchive3.4.32020bbroadwell, epyc, skylake, gpuaion, irisUtilitiesMulti-format archive and compression library
    libcerf1.13, 1.142019b, 2020bbroadwell, skylake, epyciris, aionMathematicslibcerf is a self-contained numeric library that provides an efficient and accurate implementation of complex error functions, along with Dawson, Faddeeva, and Voigt functions.
    libctl4.0.02019bbroadwell, skylakeirisChemistrylibctl is a free Guile-based library implementing flexible control files for scientific simulations.
    libdrm2.4.102, 2.4.992020b, 2019bbroadwell, epyc, skylake, gpuaion, irisLibrariesDirect Rendering Manager runtime library.
    libepoxy1.5.42019b, 2020bbroadwell, skylake, epyciris, aionLibrariesEpoxy is a library for handling OpenGL function pointer management for you
    libevent2.1.11, 2.1.122019b, 2020bbroadwell, skylake, epyciris, aionLibrariesThe libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also supports callbacks due to signals or regular timeouts.
    libffi3.2.1, 3.32019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesThe libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time.
    libgd2.2.5, 2.3.02019b, 2020bbroadwell, skylake, epyciris, aionLibrariesGD is an open source code library for the dynamic creation of images by programmers.
    libgeotiff1.5.1, 1.6.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesLibrary for reading and writing coordinate system information from/to GeoTIFF files
    libglvnd1.2.0, 1.3.22019b, 2020bbroadwell, skylake, epyc, gpuiris, aionLibrarieslibglvnd is a vendor-neutral dispatch layer for arbitrating OpenGL API calls between multiple vendors.
    libgpuarray0.7.62019b, 2020bgpuirisLibrariesLibrary to manipulate tensors on the GPU.
    libiconv1.162019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesLibiconv converts from one character encoding to another through Unicode conversion
    libjpeg-turbo2.0.3, 2.0.52019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrarieslibjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD to accelerate baseline JPEG compression and decompression. libjpeg is a library that implements JPEG image encoding, decoding and transcoding.
    libmatheval1.1.112019bbroadwell, skylakeirisLibrariesGNU libmatheval is a library (callable from C and Fortran) to parse and evaluate symbolic expressions input as text.
    libogg1.3.42020bbroadwell, epyc, skylake, gpuaion, irisLibrariesOgg is a multimedia container format, and the native file and stream format for the Xiph.org multimedia codecs.
    libpciaccess0.14, 0.162019b, 2020bbroadwell, skylake, gpu, epyciris, aionSystem-level softwareGeneric PCI access library.
    libpng1.6.372019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrarieslibpng is the official PNG reference library
    libreadline8.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesThe GNU Readline library provides a set of functions for use by applications that allow users to edit command lines as they are typed in. Both Emacs and vi editing modes are available. The Readline library includes additional functions to maintain a list of previously-entered command lines, to recall and perhaps reedit those lines, and perform csh-like history expansion on previous commands.
    libsndfile1.0.282019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesLibsndfile is a C library for reading and writing files containing sampled sound (such as MS Windows WAV and the Apple/SGI AIFF format) through one standard library interface.
    libtirpc1.3.12020bbroadwell, epyc, skylake, gpuaion, irisLibrariesLibtirpc is a port of Sun's Transport-Independent RPC library to Linux.
    libtool2.4.62019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesGNU libtool is a generic library support script. Libtool hides the complexity of using shared libraries behind a consistent, portable interface.
    libunistring0.9.102019bbroadwell, skylakeirisLibrariesThis library provides functions for manipulating Unicode strings and for manipulating C strings according to the Unicode standard.
    libunwind1.3.1, 1.4.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesThe primary goal of libunwind is to define a portable and efficient C programming interface (API) to determine the call-chain of a program. The API additionally provides the means to manipulate the preserved (callee-saved) state of each call-frame and to resume execution at any point in the call-chain (non-local goto). The API supports both local (same-process) and remote (across-process) operation. As such, the API is useful in a number of applications
    libvorbis1.3.72020bbroadwell, epyc, skylake, gpuaion, irisLibrariesOgg Vorbis is a fully open, non-proprietary, patent-and-royalty-free, general-purpose compressed audio format
    libwebp1.1.02020bbroadwell, epyc, skylakeaion, irisLibrariesWebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster.
    libxc4.3.4, 5.1.22019b, 2020bbroadwell, skylake, epyciris, aionChemistryLibxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.
    libxml22.9.10, 2.9.92020b, 2019bbroadwell, epyc, skylake, gpuaion, irisLibrariesLibxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform).
    libxslt1.1.342019bbroadwell, skylakeirisLibrariesLibxslt is the XSLT C library developed for the GNOME project (but usable outside of the Gnome platform).
    libyaml0.2.2, 0.2.52019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesLibYAML is a YAML parser and emitter written in C.
    lxml4.4.22019bbroadwell, skylakeirisLibrariesThe lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt.
    lz41.9.22020bbroadwell, epyc, skylake, gpuaion, irisLibrariesLZ4 is lossless compression algorithm, providing compression speed at 400 MB/s per core. It features an extremely fast decoder, with speed in multiple GB/s per core.
    magma2.5.1, 2.5.42019b, 2020bgpuirisMathematicsThe MAGMA project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current Multicore+GPU systems.
    makeinfo6.72020bbroadwell, epyc, skylake, gpuaion, irisDevelopmentmakeinfo is part of the Texinfo project, the official documentation format of the GNU project.
    matplotlib3.1.1, 3.3.32019b, 2020bbroadwell, skylake, epyc, gpuiris, aionVisualisationmatplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell, web application servers, and six graphical user interface toolkits.
    molmod1.4.52019bbroadwell, skylakeirisMathematicsMolMod is a Python library with many compoments that are useful to write molecular modeling programs.
    ncurses6.0, 6.1, 6.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentThe Ncurses (new curses) library is a free software emulation of curses in System V Release 4.0, and more. It uses Terminfo format, supports pads and color and multiple highlights and forms characters and function-key mapping, and has all the other SYSV-curses enhancements over BSD Curses.
    netCDF-Fortran4.5.2, 4.5.32019b, 2020bbroadwell, skylake, epyciris, aionData processingNetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
    netCDF4.7.1, 4.7.42019b, 2020bbroadwell, skylake, gpu, epyciris, aionData processingNetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
    nettle3.5.1, 3.62019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesNettle is a cryptographic library that is designed to fit easily in more or less any context: In crypto toolkits for object-oriented languages (C++, Python, Pike, ...), in applications like LSH or GNUPG, or even in kernel space.
    networkx2.52020bbroadwell, epyc, skylake, gpuaion, irisUtilitiesNetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
    nodejs12.19.02020bbroadwell, epyc, skylake, gpuaion, irisProgramming LanguagesNode.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
    nsync1.24.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentnsync is a C library that exports various synchronization primitives, such as mutexes
    numactl2.0.12, 2.0.132019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesThe numactl program allows you to run your application program on specific cpu's and memory nodes. It does this by supplying a NUMA memory policy to the operating system before running your program. The libnuma library provides convenient ways for you to add NUMA memory policies into your own program.
    numba0.52.02020bbroadwell, epyc, skylake, gpuaion, irisProgramming LanguagesNumba is an Open Source NumPy-aware optimizing compiler for Python sponsored by Continuum Analytics, Inc. It uses the remarkable LLVM compiler infrastructure to compile Python syntax to machine code.
    phonopy2.2.02019bbroadwell, skylakeirisLibrariesPhonopy is an open source package of phonon calculations based on the supercell approach.
    pixman0.38.4, 0.40.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationPixman is a low-level software library for pixel manipulation, providing features such as image compositing and trapezoid rasterization. Important users of pixman are the cairo graphics library and the X server.
    pkg-config0.29.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentpkg-config is a helper tool used when compiling applications and libraries. It helps you insert the correct compiler options on the command line so an application can use gcc -o test test.c pkg-config --libs --cflags glib-2.0 for instance, rather than hard-coding values on where to find glib (or other libraries).
    pkgconfig1.5.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentpkgconfig is a Python module to interface with the pkg-config command line tool
    pocl1.4, 1.62019b, 2020bgpuirisLibrariesPocl is a portable open source (MIT-licensed) implementation of the OpenCL standard
    protobuf-python3.10.0, 3.14.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentPython Protocol Buffers runtime library.
    protobuf2.5.0, 3.10.0, 3.14.02019b, 2020bbroadwell, skylake, epyc, gpuiris, aionDevelopmentGoogle Protocol Buffers
    pybind112.4.3, 2.6.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariespybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code.
    re2c1.2.1, 2.0.32019b, 2020bbroadwell, skylake, epyciris, aionUtilitiesre2c is a free and open-source lexer generator for C and C++. Its main goal is generating fast lexers: at least as fast as their reasonably optimized hand-coded counterparts. Instead of using traditional table-driven approach, re2c encodes the generated finite state automata directly in the form of conditional jumps and comparisons.
    scikit-build0.11.12020bbroadwell, epyc, skylake, gpuaion, irisLibrariesScikit-Build, or skbuild, is an improved build system generator for CPython C/C++/Fortran/Cython extensions.
    scikit-image0.18.12020bbroadwell, epyc, skylake, gpuaion, irisVisualisationscikit-image is a collection of algorithms for image processing.
    scikit-learn0.23.22020bbroadwell, epyc, skylake, gpuaion, irisData processingScikit-learn integrates machine learning algorithms in the tightly-knit scientific Python world, building upon numpy, scipy, and matplotlib. As a machine-learning module, it provides versatile tools for data mining and analysis in any field of science and engineering. It strives to be simple and efficient, accessible to everybody, and reusable in various contexts.
    scipy1.4.12019bbroadwell, skylake, gpuirisMathematicsSciPy is a collection of mathematical algorithms and convenience functions built on the Numpy extension for Python.
    setuptools41.0.12019bbroadwell, skylakeirisDevelopmentEasily download, build, install, upgrade, and uninstall Python packages
    snappy1.1.7, 1.1.82019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesSnappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression.
    sparsehash2.0.3, 2.0.42019b, 2020bbroadwell, skylake, epyciris, aionDevelopmentAn extremely memory-efficient hash_map implementation. 2 bits/entry overhead! The SparseHash library contains several hash-map implementations, including implementations that optimize for space or speed.
    spglib-python1.16.02020bbroadwell, epyc, skylake, gpuaion, irisChemistrySpglib for Python. Spglib is a library for finding and handling crystal symmetries written in C.
    tbb2019_U9, 2020.2, 2020.32019b, 2020bbroadwell, skylake, epyciris, aionLibrariesIntel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability.
    texinfo6.72019bbroadwell, skylakeirisDevelopmentTexinfo is the official documentation format of the GNU project.
    tqdm4.56.22020bbroadwell, epyc, skylake, gpuaion, irisLibrariesA fast, extensible progress bar for Python and CLI
    typing-extensions3.7.4.32019b, 2020bgpu, broadwell, epyc, skylakeiris, aionDevelopmentTyping Extensions – Backported and Experimental Type Hints for Python
    util-linux2.34, 2.362019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesSet of Linux utilities
    x26420190925, 202010262019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationx264 is a free software library and application for encoding video streams into the H.264/MPEG-4 AVC compression format, and is released under the terms of the GNU GPL.
    x2653.2, 3.32019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationx265 is a free software library and application for encoding video streams into the H.265 AVC compression format, and is released under the terms of the GNU GPL.
    xorg-macros1.19.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentX.org macros utilities.
    xprop1.2.4, 1.2.52019b, 2020bbroadwell, skylake, epyciris, aionVisualisationThe xprop utility is for displaying window and font properties in an X server. One window or font is selected using the command line arguments or possibly in the case of a window, by clicking on the desired window. A list of properties is then given, possibly with formatting information.
    yaff1.6.02019bbroadwell, skylakeirisChemistryYaff stands for 'Yet another force field'. It is a pythonic force-field code.
    zlib1.2.112019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrarieszlib is designed to be a free, general-purpose, legally unencumbered -- that is, not covered by any patents -- lossless data-compression library for use on virtually any computer hardware and operating system.
    zstd1.4.52020bbroadwell, epyc, skylake, gpuaion, irisLibrariesZstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression/speed trade-off, while being backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression, and can create dictionaries from any sample set.
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/maths/example.m b/software/maths/example.m new file mode 100644 index 00000000..7a242bf5 --- /dev/null +++ b/software/maths/example.m @@ -0,0 +1,13 @@ +# example for MATLAB ParFor +parpool('local', str2num(getenv('SLURM_CPUS_PER_TASK'))) % set the default cores +%as number of threads +tic +n = 50; +A = 50; +a = zeros(1,n); +parfor i = 1:n + a(i) = max(abs(eig(rand(A)))); +end +toc +delete(gcp); % you have to delete the parallel region after the work is done +exit; diff --git a/software/maths/julia/index.html b/software/maths/julia/index.html new file mode 100644 index 00000000..16df8145 --- /dev/null +++ b/software/maths/julia/index.html @@ -0,0 +1,3038 @@ + + + + + + + + + + + + + + + + + + + + + + + + Julia - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Julia

    + +

    +Scientific computing has traditionally required the highest performance, yet domain experts have largely moved to slower dynamic languages for daily work. We believe there are many good reasons to prefer dynamic languages for these applications, and we do not expect their use to diminish. Fortunately, modern language design and compiler techniques make it possible to mostly eliminate the performance trade-off and provide a single environment productive enough for prototyping and efficient enough for deploying performance-intensive applications. The Julia programming language fills this role: it is a flexible dynamic language, appropriate for scientific and numerical computing, with performance comparable to traditional statically-typed languages.

    +

    Available versions of Julia in ULHPC

    +

To check the available versions of Julia at ULHPC, type module spider julia. +The following list shows the versions of Julia currently available on the ULHPC clusters. +

    lang/Julia/1.1.1
    +lang/Julia/1.3.0
    +

    +

    Interactive mode

    +

To open Julia in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 # OR si [...]
    +
    +# Load the module Julia and needed environment
    +$ module purge
    +$ module load swenv/default-env/devel # Eventually (only relevant on 2019a software environment) 
    +$ module load lang/Julia/1.3.0
    +
    +$ julia
    +
    + +
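If your Julia code relies on multi-threading (Threads.@threads) rather than on Distributed workers, you can additionally export JULIA_NUM_THREADS before starting Julia so that the number of threads matches your Slurm allocation. A minimal sketch from within the interactive job above:
# match the number of Julia threads to the cores reserved by Slurm
export JULIA_NUM_THREADS=${SLURM_CPUS_PER_TASK}
julia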

    Batch mode

    +

    An example for serial code

    +
    #!/bin/bash -l
    +#SBATCH -J Julia
    +###SBATCH -A <project name>
    +#SBATCH --ntasks-per-node 1
    +#SBATCH -c 1
    +#SBATCH --time=00:15:00
    +#SBATCH -p batch
    +
    +# Load the module Julia and needed environment
    +module purge
    +module load swenv/default-env/devel # Eventually (only relevant on 2019a software environment) 
    +module load lang/Julia/1.3.0
    +
    +julia {example}.jl
    +
    + +

    An example for parallel code

    +
    #!/bin/bash -l
    +#SBATCH -J Julia
    +###SBATCH -A <project name>
    +#SBATCH -N 1
    +#SBATCH --ntasks-per-node 28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +
    +# Load the module Julia and needed environment
    +module purge
    +module load swenv/default-env/devel # Eventually (only relevant on 2019a software environment) 
    +module load lang/Julia/1.3.0
    +
    +srun -n ${SLURM_NTASKS} julia {example}.jl
    +
    + +
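The launchers above are submitted as regular batch jobs; a short usage sketch (the launcher file name launcher_julia.sh is hypothetical):
sbatch launcher_julia.sh    # submit one of the launchers above (hypothetical file name)
squeue -u $USER             # check the state of your job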
    +

    Example

    +
    using Distributed
    +
    +# launch worker processes
    +num_cores = parse(Int, ENV["SLURM_CPUS_PER_TASK"])
    +addprocs(num_cores)
    +
    +println("Number of cores: ", nprocs())
    +println("Number of workers: ", nworkers())
    +
    +# each worker gets its id, process id and hostname
+for i in workers()
+    id, pid, host = fetch(@spawnat i (myid(), getpid(), gethostname()))
+    println(id, " ", pid, " ", host)
+end
+
+# remove the workers
+rmprocs(workers())
    +
    + +
    +

    Additional information

    +

For more information about Julia, +please refer to the Julia tutorial and documentation.

    +
    +

    Tip

    +

    If you find some issues with the instructions above, +please file a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/maths/mathematica/index.html b/software/maths/mathematica/index.html new file mode 100644 index 00000000..74187fd2 --- /dev/null +++ b/software/maths/mathematica/index.html @@ -0,0 +1,3034 @@ + + + + + + + + + + + + + + + + + + + + + + + + MATHEMATICA - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    MATHEMATICA

    + +

+For three decades, MATHEMATICA has defined the state of the art in technical +computing, and provided the principal computation environment for millions of +innovators, educators, students, and others around the world. +Widely admired for both its technical prowess and elegant ease of use, Mathematica provides a single integrated, +continually expanding system that covers the breadth and depth of technical +computing, and is seamlessly available in the cloud through any web browser, as well as natively on all modern desktop systems.

    +

    Available versions of MATHEMATICA in ULHPC

    +

To check the available versions of MATHEMATICA at ULHPC, type module spider mathematica. +The following list shows the versions of MATHEMATICA currently available on the ULHPC clusters. +

    math/Mathematica/11.0.0
    +math/Mathematica/11.3.0
    +math/Mathematica/12.0.0
    +

    +

    Interactive mode

    +

To open MATHEMATICA in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 # OR si [...]
    +
    +# Load the module MATHEMATICA and needed environment
    +$ module purge
    +$ module load swenv/default-env/devel # Eventually (only relevant on 2019a software environment) 
    +$ module load math/Mathematica/12.0.0
    +
    +$ math
    +
    + +

    Batch mode

    +

    An example for serial case

    +
    #!/bin/bash -l
    +#SBATCH -J MATHEMATICA
    +#SBATCH --ntasks-per-node 1
    +#SBATCH -c 1
    +#SBATCH --time=00:15:00
    +#SBATCH -p batch
    +### SBATCH -A <project_name>
    +
    +# Load the module MATHEMATICA and needed environment
+module purge
+module load swenv/default-env/devel # Eventually (only relevant on 2019a software environment) 
+module load math/Mathematica/12.0.0
+
+srun -n ${SLURM_NTASKS} math -run < {mathematica-script-file}.m
    +
    + +

    An example for parallel case

    +
    #!/bin/bash -l
    +#SBATCH -J MATHEMATICA
    +#SBATCH -N 1
    +#SBATCH -c 28
    +#SBATCH --time=00:10:00
    +#SBATCH -p batch
    +### SBATCH -A <project_name>
    +
    +# Load the module MATHEMATICA and needed environment
+module purge
+module load swenv/default-env/devel # Eventually (only relevant on 2019a software environment) 
+module load math/Mathematica/12.0.0
+
+srun -n ${SLURM_NTASKS} math -run < {mathematica-script-file}.m
    +
    + +
    +

Example

    +
(* example of a parallel MATHEMATICA script (mathematica_script_file.m) *)
+(* Limit Mathematica to the requested resources *)
+Unprotect[$ProcessorCount];$ProcessorCount = 28;
+
+(* Print the machine name that each kernel is running on *)
+Print[ParallelEvaluate[$MachineName]];
+
+(* Print the exponents n <= 3000 for which 2^n-1 is prime *)
+Print[Parallelize[Select[Range[3000],PrimeQ[2^#-1]&]]];
    +
    + +
    +

    Additional information

    +

For more information about MATHEMATICA, +please refer to the MATHEMATICA tutorial and documentation.

    +
    +

    Tip

    +

If you find any issues with the instructions above, +please report them to us by opening a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/maths/matlab/index.html b/software/maths/matlab/index.html new file mode 100644 index 00000000..cbbc223e --- /dev/null +++ b/software/maths/matlab/index.html @@ -0,0 +1,3033 @@ + + + + + + + + + + + + + + + + + + + + + + + + MATLAB - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    MATLAB

    + +

    +MATLAB® combines +a desktop environment tuned for iterative analysis and design processes +with a programming language that expresses matrix and array mathematics directly. +It includes the Live Editor for creating scripts that combine code, output, +and formatted text in an executable notebook.

    +

    Available versions of MATLAB in ULHPC

    +

To check the available versions of MATLAB at ULHPC, type module spider matlab. +It will list the available versions: +

    math/MATLAB/<version>
    +

    +

    Interactive mode

    +

To open MATLAB in interactive mode, please follow these steps:

    +

    (eventually) connect to the ULHPC login node with the -X (or -Y) option:

    +
    +
    ssh -X iris-cluster   # OR on Mac OS: ssh -Y iris-cluster
    +
    + +
    +
    +
    ssh -X aion-cluster   # OR on Mac OS: ssh -Y aion-cluster
    +
    + +
    +
    +

    Then you can reserve an interactive job, for instance with 4 cores. Don't forget to use the --x11 option if you intend to use the GUI.

    +
    $ si --x11 -c4
    +
    +# Load the module MATLAB and needed environment
    +(node)$ module purge
    +(node)$ module load math/MATLAB
    +
    +# Non-Graphical version (CLI)
    +(node)$ matlab -nodisplay -nosplash
    +                  < M A T L A B (R) >
    +        Copyright 1984-2021 The MathWorks, Inc.
    +        R2021a (9.10.0.1602886) 64-bit (glnxa64)
    +                   February 17, 2021
    +To get started, type doc.
    +For product information, visit www.mathworks.com.
    +>> version()
    +ans =
    +    '9.10.0.1602886 (R2021a)'
    +# List of installed add-ons
    +>>  matlab.addons.installedAddons
    +ans =
    +  96x4 table
    +            Name           Version     Enabled    Identifier
    +    ___________________    ________    _______    __________
    +
    +    "Mapping Toolbox"      "5.1"        true         "MG"
    +    "Simulink Test"        "3.4"        true         "SZ"
    +    [...]
    +>> quit()
    +
    +# To run the GUI version, over X11
    +(node)$ matlab &
    +
    + +

    Batch mode

    +

For non-interactive or long executions, MATLAB can be run in passive or batch mode, reading all commands from an input file (with .m extension) you provide (e.g. inputfile.m) and saving the results into an output file (for instance outputfile.out). +You have two ways to proceed:

    +
    +
    matlab -nodisplay -nosplash < inputfile.m > outputfile.out
    +
    + +
    +
    +
    # /!\ IMPORTANT: notice the **missing** '.m' extension on -r !!!
    +matlab -nodisplay -nosplash -r inputfile -logfile outputfile.out
    +
    + +
    +
    +
    #!/bin/bash -l
    +#SBATCH -J MATLAB
    +###SBATCH -A <project_name>
    +#SBATCH --ntasks-per-node 1
    +#SBATCH -c 1
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module MATLAB
    +module purge
    +module load math/MATLAB
    +
    +# second form with CLI options '-r <input>' and '-logfile <output>.out'
    +srun -c $SLURM_CPUS_PER_TASK matlab -nodisplay -r my_matlab_script -logfile output.out
    +
+# example if you need to pass input parameters to the computation,
+# e.g. my_matlab_script(x,y,z)
+srun matlab -nodisplay -r 'my_matlab_script(2,2,1)' -logfile output.out
    +
    +# safeguard (!) afterwards
    +rm -rf $HOME/.matlab
    +rm -rf $HOME/java*
    +
    + +

In MATLAB, you can create a parallel pool of workers on the local compute node by using the parpool function. +After you create the pool, parallel pool features such as parfor or parfeval run on the workers, and the returned pool object lets you interact with the pool.

    +
    +

    Example

    +
    # example for MATLAB ParFor (matlab_script_parallel_file.m)
    +parpool('local', str2num(getenv('SLURM_CPUS_PER_TASK'))) % set the default cores
    +%as number of threads
    +tic
    +n = 50;
    +A = 50;
    +a = zeros(1,n);
    +parfor i = 1:n
    +a(i) = max(abs(eig(rand(A))));
    +end
    +toc
    +delete(gcp); % you have to delete the parallel region after the work is done
    +exit;
    +
    + +
    +

    Additional information

    +

For more information about MATLAB, +please refer to the MATLAB tutorial and documentation.

    +
    +

    Tip

    +

If you find any issues with the instructions above, +please report them to us by opening a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/maths/stata/index.html b/software/maths/stata/index.html new file mode 100644 index 00000000..8bce5101 --- /dev/null +++ b/software/maths/stata/index.html @@ -0,0 +1,3192 @@ + + + + + + + + + + + + + + + + + + + + + + + + Stata - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Stata

    + +

    +

    Stata is a commercial statistical package, which provides a complete solution for data analysis, data management, and graphics.

    +

    The University of Luxembourg contributes to a campus-wide license -- see SIU / Service Now Knowledge Base ticket on Stata MP2

    +

    Available versions of Stata on ULHPC platforms

    +

    To check available versions of Stata at ULHPC, type module spider stata.

    +
    math/Stata/<version>
    +
    + +

    Once loaded, the modules brings to you the following binaries:

    + + + + + + + + + + + + + + + + + + + + + + + + + +
Binary | Description
stata | Non-graphical standard Stata/IC. For better performance and support for larger databases, stata-se should be used.
stata-se | Non-graphical Stata/SE designed for large databases. Can be used to run tasks automatically with the batch flag -b and a Stata *.do file
xstata | Graphical standard Stata/IC. For better performance and support for larger databases, xstata-se should be used.
xstata-se | Graphical Stata/SE designed for large databases. Can be used interactively in a similar working environment to Windows and Mac versions.
    +

    Interactive Mode

    +

To open a Stata session in interactive mode, please follow these steps:

    +

    (eventually) connect to the ULHPC login node with the -X (or -Y) option:

    +
    +
    ssh -X iris-cluster   # OR on Mac OS: ssh -Y iris-cluster
    +
    + +
    +
    +
    ssh -X aion-cluster   # OR on Mac OS: ssh -Y aion-cluster
    +
    + +
    +
    +

    Then you can reserve an interactive job, for instance with 2 cores. Don't forget to use the --x11 option if you intend to use the GUI.

    +
    $ si --x11 -c2      # You CANNOT use more than 2 cores
    +
    +# Load the module Stata and needed environment
    +(node)$ module purge
    +(node)$ module load math/Stata
    +
    +# Non-Graphical version (CLI)
    +(node)$ stata
    +  ___  ____  ____  ____  ____ ®
    + /__    /   ____/   /   ____/      17.0
    +___/   /   /___/   /   /___/       BE—Basic Edition
    +
    + Statistics and Data Science       Copyright 1985-2021 StataCorp LLC
    +                                   StataCorp
    +                                   4905 Lakeway Drive
    +                                   College Station, Texas 77845 USA
    +                                   800-STATA-PC        https://www.stata.com
    +                                   979-696-4600        stata@stata.com
    +
    +Stata license: Unlimited-user network, expiring 31 Dec 2022
    +Serial number: <serial>
    +  Licensed to: University of Luxembourg
    +               Campus License - see KB0010885 (Service Now)
    +
    +.
    +# To quit Stata
    +. exit, clear
    +
    +# To run the GUI version, over X11
    +(node)$ stata &
    +
    + +

    Location of your ado files

    +

    Run the sysdir command to see the search path for ado files:

    +
    . sysdir
    +   STATA:  /opt/apps/resif/<cluster>/<version>/<arch>/software/Stata/<stataversion>/
    +    BASE:  /opt/apps/resif/<cluster>/<version>/<arch>/software/Stata/<stataversion>/ado/base/
    +    SITE:  /opt/apps/resif/<cluster>/<version>/<arch>/software/Stata/<stataversion>/software/Stata/ado/
    +    PLUS:  ~/ado/plus/
    +PERSONAL:  ~/ado/personal/
    +
    + +

You should thus store your ado files in $HOME/ado/personal. For more details, see this document.
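A minimal shell sketch for preparing that personal directory (mycommand.ado is a hypothetical file name):
# create the personal ado directory searched by Stata and copy your own programs into it
mkdir -p $HOME/ado/personal
cp mycommand.ado $HOME/ado/personal/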

    +

    Batch mode

    +

    To run Stata in batch mode, you need to create do-files which contain the series of commands you would like to run. +With a do file (filename.do) in hand, you can run it from the shell in the command line with:

    +
    stata -b do filename.do
    +
    + +

With the -b flag, output will be automatically saved to the log file filename.log.

    +
    +
    #!/bin/bash -l
    +#SBATCH -J Stata
    +###SBATCH -A <project_name>
    +#SBATCH --ntasks-per-node 1
    +#SBATCH -c 1
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module Stata
    +module purge
    +module load math/Stata
    +
    +srun stata -b do INPUTFILE.do
    +
    + +
    +
    +
    #!/bin/bash -l
    +#SBATCH -J Stata
    +###SBATCH -A <project_name>
    +#SBATCH --ntasks-per-node 1
    +#SBATCH -c 2
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
    +# Load the module Stata
    +module purge
    +module load math/Stata
    +
    +# Use stata-mp to run across multiple cores
    +srun -c $SLURM_CPUS_PER_TASK stata-mp -b do INPUTFILE.do
    +
    + +
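Once the job has finished, the results of the do-file are available in the log produced by the -b flag, named after the do-file (INPUTFILE.log for the launchers above):
# inspect the end of the Stata batch log
tail INPUTFILE.log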
    +
    +

    Running Stata in Parallel

    +

    Stata/MP

    +

You can use Stata/MP to take advantage of the advanced multiprocessing capabilities of Stata. +Stata/MP provides the most extensive multicore support of any statistics and data management package.

    +

Note however that the current license limits the maximum number of cores to 2. +Example of interactive usage:

    +
    $ si --x11 -c2      # You CANNOT use more than 2 cores
    +
    +# Load the module Stata and needed environment
    +(node)$ module purge
    +(node)$ module load math/Stata
    +
    +# Non-Graphical version (CLI)
    +(node)$ stata-mp
    +
    +  ___  ____  ____  ____  ____ ®
    + /__    /   ____/   /   ____/      17.0
    +___/   /   /___/   /   /___/       MP—Parallel Edition
    +
    + Statistics and Data Science       Copyright 1985-2021 StataCorp LLC
    +                                   StataCorp
    +                                   4905 Lakeway Drive
    +                                   College Station, Texas 77845 USA
    +                                   800-STATA-PC        https://www.stata.com
    +                                   979-696-4600        stata@stata.com
    +
    +Stata license: Unlimited-user 2-core network, expiring 31 Dec 2022
    +Serial number: <serial>
    +  Licensed to: University of Luxembourg
    +               Campus License - see KB0010885 (Service Now)
+. set processors 2     # or derive it from the SLURM_CPUS_PER_TASK environment variable
    +. [...]
    +. exit, clear
    +
    + +

    Note that using the stata-mp executable, Stata will automatically use the requested number of cores from Slurm's --cpus-per-task option. +This implicit parallelism does not require any changes to your code.

    +

    User-packages parallel and gtools

    +

User-developed Stata packages can be installed from a login node using Stata commands such as

    +
      net install <package>
    +
    + + +

    These packages will be installed in your home directory by default.

    +

Among others, the parallel package implements parallel for loops. +Also, the gtools package provides faster alternatives to some Stata commands when working with big data.

    +
    (node)$ stata
    +# installation
    +. net install parallel, from(https://raw.github.com/gvegayon/parallel/stable/) replace
    +checking parallel consistency and verifying not already installed...
    +installing into /home/users/svarrette/ado/plus/...
    +installation complete.
    +
    +# update index of the installed packages
    +. mata mata mlib index
    +.mlib libraries to be searched are now
    +    lmatabase;lmatasvy;lmatabma;lmatapath;lmatatab;lmatanumlib;lmatacollect;lmatafc;lmatapss;lmat
    +> asem;lmatamixlog;lmatamcmc;lmatasp;lmatameta;lmataopt;lmataado;lmatagsem;lmatami;lmatapostest;l
    +> matalasso;lmataerm;lparallel
    +
    +# initial - ADAPT with SLURM_CPU_PER_TASKS
+. parallel initialize 4, f   # Or (better) derive this value from the SLURM_CPUS_PER_TASK environment variable
    +N Child processes: 4
    +Stata dir:  /mnt/irisgpfs/apps/resif/iris/2020b/broadwell/software/Stata/17/stata
    +
    +. sysuse auto
    +(1978 automobile data)
    +
    +. parallel, by(foreign): egen maxp = max(price)
    +Small workload/num groups. Temporarily setting number of child processes to 2
    +--------------------------------------------------------------------------------
    +Parallel Computing with Stata
    +Child processes: 2
    +pll_id         : bcrpvqtoi1
    +Running at     : /mnt/irisgpfs/users/svarrette
    +Randtype       : datetime
    +
    +Waiting for the child processes to finish...
    +child process 0002 has exited without error...
    +child process 0001 has exited without error...
    +--------------------------------------------------------------------------------
    +Enter -parallel printlog #- to checkout logfiles.
    +--------------------------------------------------------------------------------
    +
    +. tab maxp
    +
    +       maxp |      Freq.     Percent        Cum.
    +------------+-----------------------------------
    +      12990 |         22       29.73       29.73
    +      15906 |         52       70.27      100.00
    +------------+-----------------------------------
    +      Total |         74      100.00
    +
    +. exit, clear
    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/optim/index.html b/software/optim/index.html new file mode 100644 index 00000000..fdd6d2f6 --- /dev/null +++ b/software/optim/index.html @@ -0,0 +1,3188 @@ + + + + + + + + + + + + + + + + + + + + + + + + Optimizers - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Optimizers

    +

    Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element (with regard to some criterion) from some set of available alternatives. Optimization problems of sorts arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries

    +

    Mathematical programming with Cplex and Gurobi

    +

Cplex is an optimization software package for mathematical programming. +The Cplex optimizer can solve:

    +
      +
    • Mixed-Integer programming problems (MIP)
    • +
    • Very large linear programming problems (LP)
    • +
    • Non-convex quadratic programming problems (QP)
    • +
    • Convex quadratically constrained problems (QCP)
    • +
    +

Gurobi is a powerful optimization software package and an alternative to Cplex. Gurobi offers some additional features compared to Cplex; for example, it can perform Mixed-Integer Quadratic Programming (MIQP) and Mixed-Integer Quadratically Constrained Programming (MIQCP).

    +

    Loading Cplex or Gurobi

    +

To use these optimization software packages, you need to load the corresponding Lmod module.

    +

    For Cplex

    +
$ module load math/CPLEX
    +
    + +

    or for Gurobi

    +
$ module load math/Gurobi
    +
    + +
    +

    Warning

    +

Modules are not allowed on the access servers. To test CPLEX or Gurobi interactively, remember to ask for an interactive job first. +

    salloc -p interactive     # OR, use the helper script: si
    +

    +
    +

    Using Cplex

    +

In order to test Cplex and Gurobi, we need an optimization instance. Hereafter, we rely on instances from MIPLIB. For example, let us use the instance ex10.mps.gz, described in detail here for interested readers.
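A sketch for fetching this instance into your working directory; the URL follows the usual MIPLIB layout and should be double-checked on the MIPLIB website:
# assumed MIPLIB download location -- verify the exact URL on https://miplib.zib.de
wget https://miplib.zib.de/WebData/instances/ex10.mps.gz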

    +

    Multi-threaded optimization with Cplex

    +

In order to solve mathematical programs, Cplex allows users to define a command-line script that is passed to the executable. On the Iris cluster, the following launcher can be used to perform multi-threaded MIP optimization. A good practice is to request as many threads as there are available cores on the node. If you need more computing power, you should consider the distributed version.

    +
    #!/bin/bash -l
    +#SBATCH -J Multi-threaded_cplex
    +#SBATCH --ntasks=1
    +#SBATCH --cpus-per-task=28
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +#SBATCH --qos=normal
    +
    +# Load cplex 
    +module load math/CPLEX
    +
    +# Some variable
    +MPS_FILE=$1
    +RES_FILE=$2
    +CPLEX_COMMAND_SCRIPT="command_job${SLURM_JOBID}.lst"
    +
    +
    +
    +# Create cplex command script
    +cat << EOF > ${CPLEX_COMMAND_SCRIPT}
    +set threads ${SLURM_CPUS_PER_TASK}
    +read ${MPS_FILE} 
    +mipopt
    +write "${RES_FILE}.sol" 
    +quit
    +EOF
    +chmod +x ${CPLEX_COMMAND_SCRIPT}
    +
+# Cplex will use the requested number of threads
    +cplex -f ${CPLEX_COMMAND_SCRIPT}
    +rm ${CPLEX_COMMAND_SCRIPT}
    +
    + +

    Using the script cplex_mtt.slurm, you can launch a batch job with the sbatch command as follows sbatch cplex_mtt.slurm ex10.mps.gz cplex_mtt.
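When the job completes, the solution written by the write command of the CPLEX command script is stored under the name derived from the second launcher argument (cplex_mtt.sol here); a usage sketch:
squeue -u $USER        # follow the job while it is running
less cplex_mtt.sol     # inspect the solution once the job has completed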

    +

    Distributed optimization with Cplex

    +

When you require more computing power (e.g. more cores), distributed computing is the way to go. The Cplex optimization software embeds a feature that allows you to perform distributed MIP. Using the Message Passing Interface (MPI), Cplex distributes the exploration of the search tree to multiple workers. +The launcher below shows how to reserve resources on multiple nodes through the Slurm scheduler. In this example, 28 tasks (14 per node, each using 2 threads) are distributed over 2 nodes.

    +
    #!/bin/bash -l
+#SBATCH -J Distributed_cplex
    +#SBATCH --nodes=2
    +#SBATCH --ntasks-per-node=14
    +#SBATCH -c 2    # multithreading -- #threads (slurm cpu) per task 
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +#SBATCH --qos=normal
    +module load math/CPLEX
    +
    +# Some variables
    +MPS_FILE=$1
    +RES_FILE=$2
    +CPLEX_COMMAND_SCRIPT="command_job${SLURM_JOBID}.lst"
    +
    +
    +
    +# Create cplex command script
    +cat << EOF > ${CPLEX_COMMAND_SCRIPT}
    +set distmip config mpi
    +set threads ${SLURM_CPUS_PER_TASK}
    +read ${MPS_FILE} 
    +mipopt
    +write "${RES_FILE}.sol" 
    +quit
    +EOF
    +chmod +x ${CPLEX_COMMAND_SCRIPT}
    +
    +# Start Cplex with MPI
    +# On first host, the master is running 
    +mpirun -np 1 cplex -f ${CPLEX_COMMAND_SCRIPT} -mpi : -np $((SLURM_NTASKS - 1)) cplex -mpi
    +rm ${CPLEX_COMMAND_SCRIPT}
    +
    + +

    Using the script cplex_dist.slurm, you can launch a batch job with the sbatch command as follows sbatch cplex_dist.slurm ex10.mps.gz cplex_dist.

    +

    Gurobi

    +

    Multi-threaded optimization with Gurobi

    +

    The script below allows you to start multi-threaded MIP optimization with Gurobi.

    +
    #!/bin/bash -l
    +#SBATCH -J Multi-threaded_gurobi
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 28     # multithreading -- #threads (slurm cpu) per task 
    +#SBATCH --time=0-01:00:00
    +#SBATCH -p batch
    +#SBATCH --qos=normal
    +
    +# Load Gurobi 
    +module load math/Gurobi
    +
    +# Some variable
    +MPS_FILE=$1
    +RES_FILE=$2
    +
+# Gurobi will use the requested number of threads
    +gurobi_cl Threads=${SLURM_CPUS_PER_TASK} ResultFile="${RES_FILE}.sol" ${MPS_FILE}
    +
    + +

    Using the script gurobi_mtt.slurm, you can launch a batch job with the sbatch command as follows sbatch gurobi_mtt.slurm ex10.mps.gz gurobi_mtt.

    +

    Distributed optimization with Gurobi

    +
    #!/bin/bash -l
+#SBATCH -J Distributed_gurobi
    +#SBATCH -N 3       # Number of nodes
    +#SBATCH --ntasks-per-node=1
    +#SBATCH -c 5   # multithreading -- #threads (slurm cpu) per task 
    +#SBATCH --time=00:15:00
    +#SBATCH -p batch
    +#SBATCH --qos normal
    +#SBATCH -o %x-%j.log
    +
    +# Load personal modules
    +mu
    +# Load gurobi
    +module load math/Gurobi
    +
    +export MASTER_PORT=61000
    +export SLAVE_PORT=61000
    +export MPS_FILE=$1
    +export RES_FILE=$2
    +export GUROBI_INNER_LAUNCHER="inner_job${SLURM_JOBID}.sh"
    +
    +if [[ -f "grb_rs.cnf" ]];then
    +    sed -i "s/^THREADLIMIT.*$/THREADLIMIT=${SLURM_CPUS_PER_TASK}/g" grb_rs.cnf
    +else
    +    $GUROBI_REMOTE_BIN_PATH/grb_rs init
    +    echo "THREADLIMIT=${SLURM_CPUS_PER_TASK}" >> grb_rs.cnf
    +fi
    +
    +
    +cat << 'EOF' > ${GUROBI_INNER_LAUNCHER}
    +#!/bin/bash
    +MASTER_NODE=$(scontrol show hostname ${SLURM_NODELIST} | head -n 1)
    +    ## Load configuration and environment
    +    if [[ ${SLURM_PROCID} -eq 0 ]]; then
    +        ## Start Gurobi master worker in background
    +         $GUROBI_REMOTE_BIN_PATH/grb_rs --worker --port ${MASTER_PORT} &
    +         wait
    +    elif [[ ${SLURM_PROCID} -eq 1 ]]; then
    +        sleep 5
    +        grbcluster nodes --server ${MASTER_NODE}:${MASTER_PORT} 
    +        gurobi_cl Threads=${SLURM_CPUS_PER_TASK} ResultFile="${RES_FILE}.sol" Workerpool=${MASTER_NODE}:${MASTER_PORT} DistributedMIPJobs=$((SLURM_NNODES -1)) ${MPS_FILE}
    +    else
    +        sleep 2
    +        ## Start Gurobi slave worker in background
    +        $GUROBI_REMOTE_BIN_PATH/grb_rs --worker --port ${MASTER_PORT} --join ${MASTER_NODE}:${MASTER_PORT} &
    +        wait
    +fi
    +EOF
    +chmod +x ${GUROBI_INNER_LAUNCHER}
    +
    +## Launch Gurobi and wait for it to start
    +srun ${GUROBI_INNER_LAUNCHER} &
    +while [[ ! -e "${RES_FILE}.sol" ]]; do
    +    sleep 5
    +done
    +rm ${GUROBI_INNER_LAUNCHER}
    +
    + +

    Using the script gurobi_dist.slurm, you can launch a batch job with the sbatch command as follows sbatch gurobi_dist.slurm ex10.mps.gz gurobi_dist.

    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/physics/wrf/index.html b/software/physics/wrf/index.html new file mode 100644 index 00000000..63b92a44 --- /dev/null +++ b/software/physics/wrf/index.html @@ -0,0 +1,2904 @@ + + + + + + + + + + + + + + + + + + + + + + + + WRF - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Weather Research and Forecasting

    +

Weather Research and Forecasting Model (WRF) is a state-of-the-art atmospheric modeling system designed for both meteorological research and numerical weather prediction. The official source code, models, usage instructions, and most importantly the user license, are found in the official repository.

    +

    The University of Manchester distribution

    +

    In our systems we provide repackaged containers developed by the Central Research IT Service of the University of Manchester. The repository for wrf-docker provides individual Docker containers for the following packages:

    + +

    Available versions in the UL HPC systems

    +

    In the UL HPC system we support Singularity containers. The University of Manchester containers have been repackaged as Singularity containers for use in our systems. The Singularity containers are:

    +
      +
    • WRF-WPS version 4.3.3: /work/projects/singularity/ulhpc/wrf-wps-4.3.3.sif
    • +
    • WRF-Chem version 4.3.3: /work/projects/singularity/ulhpc/wrf-chem-4.3.3.sif
    • +
    • WRF-4DVar version 4.3.3: /work/projects/singularity/ulhpc/wrf-4dvar-4.3.3.sif
    • +
    +

There should be a one-to-one correspondence between running the Singularity containers on the UL HPC systems and running the Docker containers on a local machine.
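A minimal launcher sketch for running one of these containers through Slurm is given below. The Singularity module name, the executable invoked inside the container (wrf.exe) and the MPI setup are assumptions to adapt to your own workflow; check module spider singularity and the wrf-docker documentation for the exact invocation.
#!/bin/bash -l
#SBATCH -J WRF
#SBATCH -N 1
#SBATCH --ntasks-per-node=16
#SBATCH --time=01:00:00
#SBATCH -p batch

# hypothetical module name -- check 'module spider singularity' for the exact one
module load tools/Singularity

# run the model inside the container from the current run directory;
# this assumes the MPI stack inside the container is compatible with the host setup
srun singularity exec /work/projects/singularity/ulhpc/wrf-wps-4.3.3.sif wrf.exe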

    +
    +

    Tip

    +

    If you find any issues with the information above, please file a support ticket.

    +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/software/swsets/2019b/index.html b/software/swsets/2019b/index.html new file mode 100644 index 00000000..81760673 --- /dev/null +++ b/software/swsets/2019b/index.html @@ -0,0 +1,4959 @@ + + + + + + + + + + + + + + + + + + + + + + + + 2019b - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    2019b

    + +

Alphabetical list of available ULHPC software belonging to the '2019b' software set. +To load software from this set, use: +

    # Eventually: resif-load-swset-[...]
    +module load <category>/<software>[/<version>]
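For instance, to load the CPLEX 12.10 installation listed below, one would use something along the following lines (a sketch; verify the exact module name with module spider cplex):
module load math/CPLEX/12.10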
    +

    SoftwareArchitecturesClustersCategoryDescription
    ABAQUS 2018broadwell, skylakeirisCFD/Finite element modellingFinite Element Analysis software for modeling, visualization and best-in-class implicit and explicit dynamics FEA.
    ACTC 1.1broadwell, skylakeirisLibrariesACTC converts independent triangles into triangle strips or fans.
    ANSYS 19.4broadwell, skylakeirisUtilitiesANSYS simulation software enables organizations to confidently predict how their products will operate in the real world. We believe that every product is a promise of something greater.
    ANSYS 21.1broadwell, skylakeirisUtilitiesANSYS simulation software enables organizations to confidently predict how their products will operate in the real world. We believe that every product is a promise of something greater.
    ASE 3.19.0broadwell, skylakeirisChemistryASE is a python package providing an open source Atomic Simulation Environment in the Python scripting language. From version 3.20.1 we also include the ase-ext package, it contains optional reimplementations in C of functions in ASE. ASE uses it automatically when installed.
    ATK 2.34.1broadwell, skylakeirisVisualisationATK provides the set of accessibility interfaces that are implemented by other toolkits and applications. Using the ATK interfaces, accessibility tools have full access to view and control running applications.
    Advisor 2019_update5broadwell, skylakeirisPerformance measurementsVectorization Optimization and Thread Prototyping - Vectorize & thread code or performance “dies” - Easy workflow + data + tips = faster code faster - Prioritize, Prototype & Predict performance gain
    Anaconda3 2020.02broadwell, skylakeirisProgramming LanguagesBuilt to complement the rich, open source Python community, the Anaconda platform provides an enterprise-ready data analytics platform that empowers companies to adopt a modern open data science analytics architecture.
    ArmForge 20.0.3broadwell, skylakeirisUtilitiesThe industry standard development package for C, C++ and Fortran high performance code on Linux. Forge is designed to handle the complex software projects - including parallel, multiprocess and multithreaded code. Arm Forge combines an industry-leading debugger, Arm DDT, and an out-of-the-box-ready profiler, Arm MAP.
    ArmReports 20.0.3broadwell, skylakeirisUtilitiesArm Performance Reports - a low-overhead tool that produces one-page text and HTML reports summarizing and characterizing both scalar and MPI application performance. Arm Performance Reports runs transparently on optimized production-ready codes by adding a single command to your scripts, and provides the most effective way to characterize and understand the performance of HPC application runs.
    Armadillo 9.900.1broadwell, skylakeirisNumerical librariesArmadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions.
    Arrow 0.16.0broadwell, skylakeirisData processingApache Arrow (incl. PyArrow Python bindings)), a cross-language development platform for in-memory data.
    Aspera-CLI 3.9.1broadwell, skylakeirisUtilitiesIBM Aspera Command-Line Interface (the Aspera CLI) is a collection of Aspera tools for performing high-speed, secure data transfers from the command line. The Aspera CLI is for users and organizations who want to automate their transfer workflows.
    Autoconf 2.69broadwell, skylake, gpuirisDevelopmentAutoconf is an extensible package of M4 macros that produce shell scripts to automatically configure software source code packages. These scripts can adapt the packages to many kinds of UNIX-like systems without manual user intervention. Autoconf creates a configuration script for a package from a template file that lists the operating system features that the package can use, in the form of M4 macro calls.
    Automake 1.16.1broadwell, skylake, gpuirisDevelopmentAutomake: GNU Standards-compliant Makefile generator
    Autotools 20180311broadwell, skylake, gpuirisDevelopmentThis bundle collect the standard GNU build tools: Autoconf, Automake and libtool
BEDTools 2.29.2 | broadwell, skylake | iris | Biology | BEDTools: a powerful toolset for genome arithmetic. The BEDTools utilities allow one to address common genomics tasks such as finding feature overlaps and computing coverage. The utilities are largely based on four widely-used file formats: BED, GFF/GTF, VCF, and SAM/BAM.
BLAST+ 2.9.0 | broadwell, skylake | iris | Biology | Basic Local Alignment Search Tool, or BLAST, is an algorithm for comparing primary biological sequence information, such as the amino-acid sequences of different proteins or the nucleotides of DNA sequences.
BWA 0.7.17 | broadwell, skylake | iris | Biology | Burrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome.
BamTools 2.5.1 | broadwell, skylake | iris | Biology | BamTools provides both a programmer's API and an end-user's toolkit for handling BAM files.
Bazel 0.26.1 | gpu | iris | Development | Bazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software.
Bazel 0.29.1 | gpu | iris | Development | Bazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software.
BioPerl 1.7.2 | broadwell, skylake | iris | Biology | Bioperl is the product of a community effort to produce Perl code which is useful in biology. Examples include Sequence objects, Alignment objects and database searching objects.
Bison 3.3.2 | broadwell, skylake, gpu | iris | Programming Languages | Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.
Boost 1.71.0 | broadwell, skylake | iris | Development | Boost provides free peer-reviewed portable C++ source libraries.
Bowtie2 2.3.5.1 | broadwell, skylake | iris | Biology | Bowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes. Bowtie 2 indexes the genome with an FM Index to keep its memory footprint small: for the human genome, its memory footprint is typically around 3.2 GB. Bowtie 2 supports gapped, local, and paired-end alignment modes.
CGAL 4.14.1 | broadwell, skylake | iris | Numerical libraries | The goal of the CGAL Open Source Project is to provide easy access to efficient and reliable geometric algorithms in the form of a C++ library.
CMake 3.15.3 | broadwell, skylake, gpu | iris | Development | CMake, the cross-platform, open-source build system. CMake is a family of tools designed to build, test and package software.
CPLEX 12.10 | broadwell, skylake | iris | Mathematics | IBM ILOG CPLEX Optimizer's mathematical programming technology enables analytical decision support for improving efficiency, reducing costs, and increasing profitability.
CRYSTAL 17 | broadwell, skylake | iris | Chemistry | The CRYSTAL package performs ab initio calculations of the ground state energy, energy gradient, electronic wave function and properties of periodic systems. Hartree-Fock or Kohn-Sham Hamiltonians (that adopt an Exchange-Correlation potential following the postulates of Density-Functional Theory) can be used.
CUDA 10.1.243 | gpu | iris | System-level software | CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.
Clang 9.0.1 | broadwell, skylake, gpu | iris | Compilers | C, C++, Objective-C compiler, based on LLVM. Does not include C++ standard library -- use libstdc++ from GCC.
CubeGUI 4.4.4 | broadwell, skylake | iris | Performance measurements | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube graphical report explorer.
CubeLib 4.4.4 | broadwell, skylake | iris | Performance measurements | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube general purpose C++ library component and command-line tools.
CubeWriter 4.4.3 | broadwell, skylake | iris | Performance measurements | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube high-performance C writer library component.
DB 18.1.32 | broadwell, skylake | iris | Utilities | Berkeley DB enables the development of custom data management solutions, without the overhead traditionally associated with such custom projects.
DBus 1.13.12 | broadwell, skylake | iris | Development | D-Bus is a message bus system, a simple way for applications to talk to one another. In addition to interprocess communication, D-Bus helps coordinate process lifecycle; it makes it simple and reliable to code a "single instance" application or daemon, and to launch applications and daemons on demand when their services are needed.
DMTCP 2.5.2 | broadwell, skylake | iris | Utilities | DMTCP is a tool to transparently checkpoint the state of multiple simultaneous applications, including multi-threaded and distributed applications. It operates directly on the user binary executable, without any Linux kernel modules or other kernel modifications.
Dakota 6.11.0 | broadwell, skylake | iris | Mathematics | The Dakota project delivers both state-of-the-art research and robust, usable software for optimization and UQ. Broadly, the Dakota software's advanced parametric analyses enable design exploration, model calibration, risk analysis, and quantification of margins and uncertainty with computational models.
Doxygen 1.8.16 | broadwell, skylake, gpu | iris | Development | Doxygen is a documentation system for C++, C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors), Fortran, VHDL, PHP, C#, and to some extent D.
ELPA 2019.11.001 | broadwell | iris | Mathematics | Eigenvalue SoLvers for Petaflop-Applications.
EasyBuild 4.3.0 | broadwell, skylake, gpu | iris | Utilities | EasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way.
EasyBuild 4.3.3 | broadwell, skylake, gpu | iris | Utilities | EasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way.
Eigen 3.3.7 | broadwell, skylake, gpu | iris | Mathematics | Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
Elk 6.3.2 | broadwell, skylake | iris | Physics | An all-electron full-potential linearised augmented-plane wave (FP-LAPW) code with many advanced features. Written originally at Karl-Franzens-Universität Graz as a milestone of the EXCITING EU Research and Training Network, the code is designed to be as simple as possible so that new developments in the field of density functional theory (DFT) can be added quickly and reliably.
FDS 6.7.1 | broadwell, skylake | iris | Physics | Fire Dynamics Simulator (FDS) is a large-eddy simulation (LES) code for low-speed flows, with an emphasis on smoke and heat transport from fires.
FFTW 3.3.8 | broadwell, skylake, gpu | iris | Numerical libraries | FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.
FFmpeg 4.2.1 | broadwell, skylake, gpu | iris | Visualisation | A complete, cross-platform solution to record, convert and stream audio and video.
FLTK 1.3.5 | broadwell, skylake | iris | Visualisation | FLTK is a cross-platform C++ GUI toolkit for UNIX/Linux (X11), Microsoft Windows, and MacOS X. FLTK provides modern GUI functionality without the bloat and supports 3D graphics via OpenGL and its built-in GLUT emulation.
FastQC 0.11.9 | broadwell, skylake | iris | Biology | FastQC is a quality control application for high throughput sequence data. It reads in sequence data in a variety of formats and can either provide an interactive application to review the results of several different QC checks, or create an HTML based report which can be integrated into a pipeline.
FriBidi 1.0.5 | broadwell, skylake, gpu | iris | Programming Languages | The Free Implementation of the Unicode Bidirectional Algorithm.
GCC 8.3.0 | broadwell, skylake, gpu | iris | Compilers | The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
GCCcore 8.3.0 | broadwell, skylake, gpu | iris | Compilers | The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
GDAL 3.0.2 | broadwell, skylake, gpu | iris | Data processing | GDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing.
GDB 9.1 | broadwell, skylake | iris | Debugging | The GNU Project Debugger
GEOS 3.8.0 | broadwell, skylake, gpu | iris | Mathematics | GEOS (Geometry Engine - Open Source) is a C++ port of the Java Topology Suite (JTS)
GLPK 4.65 | broadwell, skylake | iris | Utilities | The GLPK (GNU Linear Programming Kit) package is intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library.
GLib 2.62.0 | broadwell, skylake, gpu | iris | Visualisation | GLib is one of the base libraries of the GTK+ project
GMP 6.1.2 | broadwell, skylake, gpu | iris | Mathematics | GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers.
GObject-Introspection 1.63.1 | broadwell, skylake | iris | Development | GObject introspection is a middleware layer between C libraries (using GObject) and language bindings. The C library can be scanned at compile time and generate a metadata file, in addition to the actual native C library. Then at runtime, language bindings can read this metadata and automatically provide bindings to call into the C library.
GPAW-setups 0.9.20000 | broadwell, skylake | iris | Chemistry | PAW setup for the GPAW Density Functional Theory package. Users can install setups manually using 'gpaw install-data' or use setups from this package. The versions of GPAW and GPAW-setups can be intermixed.
GPAW 20.1.0 | broadwell, skylake | iris | Chemistry | GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). It uses real-space uniform grids and multigrid methods or atom-centered basis-functions.
GROMACS 2019.4 | broadwell, skylake | iris | Biology | GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a CPU only build, containing both MPI and threadMPI builds for both single and double precision. It also contains the gmxapi extension for the single precision MPI build.
GROMACS 2019.6 | broadwell, skylake | iris | Biology | GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a CPU only build, containing both MPI and threadMPI builds for both single and double precision. It also contains the gmxapi extension for the single precision MPI build.
GROMACS 2020 | broadwell, skylake | iris | Biology | GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a CPU only build, containing both MPI and threadMPI builds for both single and double precision. It also contains the gmxapi extension for the single precision MPI build.
GSL 2.6 | broadwell, skylake, gpu | iris | Numerical libraries | The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting.
GTK+ 3.24.13 | broadwell, skylake | iris | Visualisation | GTK+ is the primary library used to construct user interfaces in GNOME. It provides all the user interface controls, or widgets, used in a common graphical application. Its object-oriented API allows you to construct user interfaces without dealing with the low-level details of drawing and device interaction.
Gdk-Pixbuf 2.38.2 | broadwell, skylake | iris | Visualisation | The Gdk Pixbuf is a toolkit for image loading and pixel buffer manipulation. It is used by GTK+ 2 and GTK+ 3 to load and manipulate images. In the past it was distributed as part of GTK+ 2 but it was split off into a separate package in preparation for the change to GTK+ 3.
Ghostscript 9.50 | broadwell, skylake, gpu | iris | Utilities | Ghostscript is a versatile processor for PostScript data with the ability to render PostScript to different targets. It used to be part of the cups printing stack, but is no longer used for that.
Go 1.14.1 | broadwell, skylake | iris | Compilers | Go is an open source programming language that makes it easy to build simple, reliable, and efficient software.
Guile 1.8.8 | broadwell, skylake | iris | Programming Languages | Guile is a programming language, designed to help programmers create flexible applications that can be extended by users or other programmers with plug-ins, modules, or scripts.
Guile 2.2.4 | broadwell, skylake | iris | Programming Languages | Guile is a programming language, designed to help programmers create flexible applications that can be extended by users or other programmers with plug-ins, modules, or scripts.
Gurobi 9.0.0 | broadwell, skylake | iris | Mathematics | The Gurobi Optimizer is a state-of-the-art solver for mathematical programming. The solvers in the Gurobi Optimizer were designed from the ground up to exploit modern architectures and multi-core processors, using the most advanced implementations of the latest algorithms.
HDF5 1.10.5 | broadwell, skylake, gpu | iris | Data processing | HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.
HTSlib 1.10.2 | broadwell, skylake | iris | Biology | A C library for reading/writing high-throughput sequencing data. This package includes the utilities bgzip and tabix
HarfBuzz 2.6.4 | broadwell, skylake | iris | Visualisation | HarfBuzz is an OpenType text shaping engine.
Harminv 1.4.1 | broadwell, skylake | iris | Mathematics | Harminv is a free program (and accompanying library) to solve the problem of harmonic inversion - given a discrete-time, finite-length signal that consists of a sum of finitely-many sinusoids (possibly exponentially decaying) in a given bandwidth, it determines the frequencies, decay constants, amplitudes, and phases of those sinusoids.
Horovod 0.19.1 | broadwell, skylake, gpu | iris | Utilities | Horovod is a distributed training framework for TensorFlow.
ICU 64.2 | broadwell, skylake, gpu | iris | Libraries | ICU is a mature, widely used set of C/C++ and Java libraries providing Unicode and Globalization support for software applications.
ImageMagick 7.0.9-5 | broadwell, skylake, gpu | iris | Visualisation | ImageMagick is a software suite to create, edit, compose, or convert bitmap images
Inspector 2019_update5 | broadwell, skylake | iris | Utilities | Intel Inspector XE is an easy to use memory error checker and thread checker for serial and parallel applications
JasPer 2.0.14 | broadwell, skylake, gpu | iris | Visualisation | The JasPer Project is an open-source initiative to provide a free software-based reference implementation of the codec specified in the JPEG-2000 Part-1 standard.
Java 1.8.0_241 | broadwell, skylake, gpu | iris | Programming Languages | Java Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers.
Java 11.0.2 | broadwell, skylake, gpu | iris | Programming Languages | Java Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers.
Java 13.0.2 | broadwell, skylake, gpu | iris | Programming Languages | Java Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers.
Jellyfish 2.3.0 | broadwell, skylake | iris | Biology | Jellyfish is a tool for fast, memory-efficient counting of k-mers in DNA.
JsonCpp 1.9.3 | broadwell, skylake, gpu | iris | Libraries | JsonCpp is a C++ library that allows manipulating JSON values, including serialization and deserialization to and from strings. It can also preserve existing comments in unserialization/serialization steps, making it a convenient format to store user input files.
Julia 1.4.1 | broadwell, skylake | iris | Programming Languages | Julia is a high-level, high-performance dynamic programming language for numerical computing
Keras 2.3.1 | gpu | iris | Mathematics | Keras is a deep learning API written in Python, running on top of the machine learning platform TensorFlow.
LAME 3.100 | broadwell, skylake, gpu | iris | Data processing | LAME is a high quality MPEG Audio Layer III (MP3) encoder licensed under the LGPL.
LLVM 9.0.0 | broadwell, skylake, gpu | iris | Compilers | The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified code representation known as the LLVM intermediate representation ("LLVM IR"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator.
LLVM 9.0.1 | broadwell, skylake, gpu | iris | Compilers | The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified code representation known as the LLVM intermediate representation ("LLVM IR"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator.
LMDB 0.9.24 | broadwell, skylake, gpu | iris | Libraries | LMDB is a fast, memory-efficient database. With memory-mapped files, it has the read performance of a pure in-memory database while retaining the persistence of standard disk-based databases.
LibTIFF 4.0.10 | broadwell, skylake, gpu | iris | Libraries | tiff: Library and tools for reading and writing TIFF data files
LittleCMS 2.9 | broadwell, skylake, gpu | iris | Visualisation | Little CMS intends to be an OPEN SOURCE small-footprint color management engine, with special focus on accuracy and performance.
Lua 5.1.5 | broadwell, skylake | iris | Programming Languages | Lua is a powerful, fast, lightweight, embeddable scripting language. Lua combines simple procedural syntax with powerful data description constructs based on associative arrays and extensible semantics. Lua is dynamically typed, runs by interpreting bytecode for a register-based virtual machine, and has automatic memory management with incremental garbage collection, making it ideal for configuration, scripting, and rapid prototyping.
M4 1.4.18 | broadwell, skylake, gpu | iris | Development | GNU M4 is an implementation of the traditional Unix macro processor. It is mostly SVR4 compatible although it has some extensions (for example, handling more than 9 positional parameters to macros). GNU M4 also has built-in functions for including files, running shell commands, doing arithmetic, etc.
MATLAB 2019b | broadwell, skylake | iris | Mathematics | MATLAB is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran.
MATLAB 2020a | broadwell, skylake | iris | Mathematics | MATLAB is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran.
METIS 5.1.0 | broadwell, skylake | iris | Mathematics | METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.
MPFR 4.0.2 | broadwell, skylake, gpu | iris | Mathematics | The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.
Mako 1.1.0 | broadwell, skylake, gpu | iris | Development | A super-fast templating language that borrows the best ideas from the existing templating languages
Mathematica 12.0.0 | broadwell, skylake | iris | Mathematics | Mathematica is a computational software program used in many scientific, engineering, mathematical and computing fields.
Maven 3.6.3 | broadwell, skylake | iris | Development | Binary maven install, Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information.
Meep 1.4.3 | broadwell, skylake | iris | Physics | Meep (or MEEP) is a free finite-difference time-domain (FDTD) simulation software package developed at MIT to model electromagnetic systems.
Mesa 19.1.7 | broadwell, skylake, gpu | iris | Visualisation | Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.
Mesa 19.2.1 | broadwell, skylake, gpu | iris | Visualisation | Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.
Meson 0.51.2 | broadwell, skylake, gpu | iris | Utilities | Meson is a cross-platform build system designed to be both as fast and as user friendly as possible.
Mesquite 2.3.0 | broadwell, skylake | iris | Mathematics | Mesh-Quality Improvement Library
NAMD 2.13 | broadwell, skylake | iris | Chemistry | NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.
NASM 2.14.02 | broadwell, skylake, gpu | iris | Programming Languages | NASM: General-purpose x86 assembler
NCCL 2.4.8 | gpu | iris | Libraries | The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs.
NLopt 2.6.1 | broadwell, skylake, gpu | iris | Numerical libraries | NLopt is a free/open-source library for nonlinear optimization, providing a common interface for a number of different free optimization routines available online as well as original implementations of various other algorithms.
NSPR 4.21 | broadwell, skylake | iris | Libraries | Netscape Portable Runtime (NSPR) provides a platform-neutral API for system level and libc-like functions.
NSS 3.45 | broadwell, skylake | iris | Libraries | Network Security Services (NSS) is a set of libraries designed to support cross-platform development of security-enabled client and server applications.
Ninja 1.9.0 | broadwell, skylake, gpu | iris | Utilities | Ninja is a small build system with a focus on speed.
OPARI2 2.0.5 | broadwell, skylake | iris | Performance measurements | OPARI2, the successor of Forschungszentrum Juelich's OPARI, is a source-to-source instrumentation tool for OpenMP and hybrid codes. It surrounds OpenMP directives and runtime library calls with calls to the POMP2 measurement interface.
OTF2 2.2 | broadwell, skylake | iris | Performance measurements | The Open Trace Format 2 is a highly scalable, memory efficient event trace data format plus support library. It is the new standard trace format for Scalasca, Vampir, and TAU and is open for other tools.
OpenBLAS 0.3.7 | broadwell, skylake, gpu | iris | Numerical libraries | OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
OpenCV 4.2.0 | broadwell, skylake | iris | Visualisation | OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Includes extra modules for OpenCV from the contrib repository.
OpenFOAM-Extend 4.1-20200408 | broadwell, skylake | iris | CFD/Finite element modelling | OpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
OpenFOAM v1912 | broadwell, skylake | iris | CFD/Finite element modelling | OpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
OpenMPI 3.1.4 | broadwell, skylake, gpu | iris | MPI | The Open MPI Project is an open source MPI-3 implementation.
PAPI 6.0.0 | broadwell, skylake | iris | Performance measurements | PAPI provides the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. In addition Component PAPI provides access to a collection of components that expose performance measurement opportunities across the hardware and software stack.
PCRE2 10.33 | broadwell, skylake | iris | Development | The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
PCRE 8.43 | broadwell, skylake, gpu | iris | Development | The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
PDT 3.25 | broadwell, skylake | iris | Performance measurements | Program Database Toolkit (PDT) is a framework for analyzing source code written in several programming languages and for making rich program knowledge accessible to developers of static and dynamic analysis tools. PDT implements a standard program representation, the program database (PDB), that can be accessed in a uniform way through a class library supporting common PDB operations.
PGI 19.10 | broadwell, skylake | iris | Compilers | C, C++ and Fortran compilers from The Portland Group - PGI
PLUMED 2.5.3 | broadwell, skylake | iris | Chemistry | PLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state of the art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both fortran and C/C++ codes.
PROJ 6.2.1 | broadwell, skylake, gpu | iris | Libraries | Program proj is a standard Unix filter function which converts geographic longitude and latitude coordinates into cartesian coordinates
Pango 1.44.7 | broadwell, skylake | iris | Visualisation | Pango is a library for laying out and rendering of text, with an emphasis on internationalization. Pango can be used anywhere that text layout is needed, though most of the work on Pango so far has been done in the context of the GTK+ widget toolkit. Pango forms the core of text and font handling for GTK+-2.x.
ParMETIS 4.0.3 | broadwell, skylake | iris | Mathematics | ParMETIS is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs, meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes.
ParMGridGen 1.0 | broadwell, skylake | iris | Mathematics | ParMGridGen is an MPI-based parallel library that is based on the serial package MGridGen, that implements (serial) algorithms for obtaining a sequence of successive coarse grids that are well-suited for geometric multigrid methods.
ParaView 5.6.2 | broadwell, skylake | iris | Visualisation | ParaView is a scientific parallel visualizer.
Perl 5.30.0 | broadwell, skylake, gpu | iris | Programming Languages | Larry Wall's Practical Extraction and Report Language. This is a minimal build without any modules. Should only be used for build dependencies.
Pillow 6.2.1 | broadwell, skylake, gpu | iris | Visualisation | Pillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.
PyTorch 1.4.0 | broadwell, skylake | iris | Development | Tensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.
PyTorch 1.7.1 | broadwell, skylake | iris | Development | Tensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.
PyYAML 5.1.2 | broadwell, skylake, gpu | iris | Libraries | PyYAML is a YAML parser and emitter for the Python programming language.
Python 2.7.16 | broadwell, skylake, gpu | iris | Programming Languages | Python is a programming language that lets you work more quickly and integrate your systems more effectively.
Python 3.7.4 | broadwell, skylake, gpu | iris | Programming Languages | Python is a programming language that lets you work more quickly and integrate your systems more effectively.
Qt5 5.13.1 | broadwell, skylake | iris | Development | Qt is a comprehensive cross-platform C++ application framework.
QuantumESPRESSO 6.7 | broadwell | iris | Chemistry | Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).
R 3.6.2 | broadwell, skylake, gpu | iris | Programming Languages | R is a free software environment for statistical computing and graphics.
ReFrame 2.21 | broadwell, skylake | iris | Development | ReFrame is a framework for writing regression tests for HPC systems.
Ruby 2.7.1 | broadwell, skylake | iris | Programming Languages | Ruby is a dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write.
Rust 1.37.0 | broadwell, skylake | iris | Programming Languages | Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.
SAMtools 1.10 | broadwell, skylake | iris | Biology | SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
SCOTCH 6.0.9 | broadwell, skylake | iris | Mathematics | Software package and libraries for sequential and parallel graph partitioning, static mapping, and sparse matrix block ordering, and sequential mesh and hypergraph partitioning.
SIONlib 1.7.6 | broadwell, skylake | iris | Libraries | SIONlib is a scalable I/O library for parallel access to task-local files. The library not only supports writing and reading binary data to or from several thousands of processors into a single or a small number of physical files, but also provides global open and close functions to access SIONlib files in parallel. This package provides a stripped-down installation of SIONlib for use with performance tools (e.g., Score-P), with renamed symbols to avoid conflicts when an application using SIONlib itself is linked against a tool requiring a different SIONlib version.
SQLite 3.29.0 | broadwell, skylake, gpu | iris | Development | SQLite: SQL Database Engine in a C Library
SWIG 4.0.1 | broadwell, skylake, gpu | iris | Development | SWIG is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages.
Salmon 1.1.0 | broadwell, skylake | iris | Biology | Salmon is a wicked-fast program to produce highly-accurate, transcript-level quantification estimates from RNA-seq data.
Salome 8.5.0 | broadwell, skylake | iris | CFD/Finite element modelling | The SALOME platform is an open source software framework for pre- and post-processing and integration of numerical solvers from various scientific fields. CEA and EDF use SALOME to perform a large number of simulations, typically related to power plant equipment and alternative energy. To address these challenges, SALOME includes a CAD/CAE modelling tool, mesh generators, an advanced 3D visualization tool, etc.
ScaLAPACK 2.0.2 | broadwell, skylake, gpu | iris | Numerical libraries | The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers.
Scalasca 2.5 | broadwell, skylake | iris | Performance measurements | Scalasca is a software tool that supports the performance optimization of parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks -- in particular those concerning communication and synchronization -- and offers guidance in exploring their causes.
SciPy-bundle 2019.10 | broadwell, skylake, gpu | iris | Programming Languages | Bundle of Python packages for scientific software
Score-P 6.0 | broadwell, skylake | iris | Performance measurements | The Score-P measurement infrastructure is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications.
Singularity 3.6.0 | broadwell, skylake | iris | Utilities | SingularityCE is an open source container platform designed to be simple, fast, and secure. Singularity is optimized for EPC and HPC workloads, allowing untrusted users to run untrusted containers in a trusted way.
Spack 0.12.1 | broadwell, skylake | iris | Development | Spack is a package manager for supercomputers, Linux, and macOS. It makes installing scientific software easy. With Spack, you can build a package with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine.
Spark 2.4.3 | broadwell, skylake | iris | Development | Spark is Hadoop MapReduce done in memory
Sumo 1.3.1 | broadwell, skylake | iris | Utilities | Sumo is an open source, highly portable, microscopic and continuous traffic simulation package designed to handle large road networks.
Szip 2.1.1 | broadwell, skylake, gpu | iris | Utilities | Szip compression software, providing lossless compression of scientific data
Tcl 8.6.9 | broadwell, skylake, gpu | iris | Programming Languages | Tcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more.
TensorFlow 1.15.5 | gpu | iris | Libraries | An open-source software library for Machine Intelligence
TensorFlow 2.1.0 | gpu | iris | Libraries | An open-source software library for Machine Intelligence
Theano 1.0.4 | gpu | iris | Mathematics | Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently.
Tk 8.6.9 | broadwell, skylake, gpu | iris | Visualisation | Tk is an open source, cross-platform widget toolkit that provides a library of basic elements for building a graphical user interface (GUI) in many different programming languages.
Tkinter 3.7.4 | broadwell, skylake | iris | Programming Languages | Tkinter module, built with the Python buildsystem
TopHat 2.1.2 | broadwell, skylake | iris | Biology | TopHat is a fast splice junction mapper for RNA-Seq reads.
Trinity 2.10.0 | broadwell, skylake | iris | Biology | Trinity represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-Seq data. Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-Seq reads.
UDUNITS 2.2.26 | broadwell, skylake, gpu | iris | Physics | UDUNITS supports conversion of unit specifications between formatted and binary forms, arithmetic manipulation of units, and conversion of values between compatible scales of measurement.
ULHPC-bio 2019b | broadwell, skylake | iris | System-level software | Generic Module bundle for Bioinformatics, biology and biomedical software in use on the UL HPC Facility, especially at LCSB
ULHPC-cs 2019b | broadwell, skylake | iris | System-level software | Generic Module bundle for Computational science software in use on the UL HPC Facility, including: - Computer Aided Engineering, incl. CFD - Chemistry, Computational Chemistry and Quantum Chemistry - Data management & processing tools - Earth Sciences - Quantum Computing - Physics and physical systems simulations
ULHPC-dl 2019b | broadwell, skylake | iris | System-level software | Generic Module bundle for the CPU version of AI / Deep Learning / Machine Learning software in use on the UL HPC Facility
ULHPC-gpu 2019b | gpu | iris | System-level software | Generic Module bundle for GPU accelerated User Software in use on the UL HPC Facility
ULHPC-math 2019b | broadwell, skylake | iris | System-level software | Generic Module bundle for High-level mathematical software and Linear Algebra libraries in use on the UL HPC Facility
ULHPC-toolchains 2019b | broadwell, skylake | iris | System-level software | Generic Module bundle that contains all the dependencies required to enable toolchains and building tools/programming language in use on the UL HPC Facility
ULHPC-tools 2019b | broadwell, skylake | iris | System-level software | Misc tools, incl. - perf: Performance tools - tools: General purpose tools
VASP 5.4.4 | broadwell, skylake | iris | Physics | The Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.
VTK 8.2.0 | broadwell, skylake | iris | Visualisation | The Visualization Toolkit (VTK) is an open-source, freely available software system for 3D computer graphics, image processing and visualization. VTK consists of a C++ class library and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK supports a wide variety of visualization algorithms including: scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as: implicit modeling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation.
VTune 2019_update8 | broadwell, skylake | iris | Utilities | Intel VTune Amplifier XE is the premier performance profiler for C, C++, C#, Fortran, Assembly and Java.
Valgrind 3.15.0 | broadwell, skylake | iris | Debugging | Valgrind: Debugging and profiling tools
VirtualGL 2.6.2 | broadwell, skylake | iris | Visualisation | VirtualGL is an open source toolkit that gives any Linux or Unix remote display software the ability to run OpenGL applications with full hardware acceleration.
Voro++ 0.4.6 | broadwell, skylake | iris | Mathematics | Voro++ is a software library for carrying out three-dimensional computations of the Voronoi tessellation. A distinguishing feature of the Voro++ library is that it carries out cell-based calculations, computing the Voronoi cell for each particle individually. It is particularly well-suited for applications that rely on cell-based statistics, where features of Voronoi cells (eg. volume, centroid, number of faces) can be used to analyze a system of particles.
X11 20190717 | broadwell, skylake, gpu | iris | Visualisation | The X Window System (X11) is a windowing system for bitmap displays
XML-LibXML 2.0201 | broadwell, skylake | iris | Data processing | Perl binding for libxml2
XZ 5.2.4 | broadwell, skylake, gpu | iris | Utilities | xz: XZ utilities
Xerces-C++ 3.2.2 | broadwell, skylake | iris | Libraries | Xerces-C++ is a validating XML parser written in a portable subset of C++. Xerces-C++ makes it easy to give your application the ability to read and write XML data. A shared library is provided for parsing, generating, manipulating, and validating XML documents using the DOM, SAX, and SAX2 APIs.
Yasm 1.3.0 | broadwell, skylake, gpu | iris | Programming Languages | Yasm: Complete rewrite of the NASM assembler with BSD license
Zip 3.0 | broadwell, skylake, gpu | iris | Utilities | Zip is a compression and file packaging/archive utility. Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own UnZip, our primary objectives have been portability and other-than-MSDOS functionality
ant 1.10.6 | broadwell, skylake | iris | Development | Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications.
ant 1.10.7 | broadwell, skylake | iris | Development | Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications.
archspec 0.1.0 | broadwell, skylake | iris | Utilities | A library for detecting, labeling, and reasoning about microarchitectures
arpack-ng 3.7.0 | broadwell, skylake | iris | Numerical libraries | ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems.
at-spi2-atk 2.34.1 | broadwell, skylake | iris | Visualisation | AT-SPI 2 toolkit bridge
at-spi2-core 2.34.0 | broadwell, skylake | iris | Visualisation | Assistive Technology Service Provider Interface.
binutils 2.32 | broadwell, skylake, gpu | iris | Utilities | binutils: GNU binary utilities
bzip2 1.0.8 | broadwell, skylake, gpu | iris | Utilities | bzip2 is a freely available, patent free, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), whilst being around twice as fast at compression and six times faster at decompression.
cURL 7.66.0 | broadwell, skylake, gpu | iris | Utilities | libcurl is a free and easy-to-use client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. libcurl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, Kerberos), file transfer resume, http proxy tunneling and more.
cairo 1.16.0 | broadwell, skylake, gpu | iris | Visualisation | Cairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System (via both Xlib and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output. Experimental backends include OpenGL, BeOS, OS/2, and DirectFB
cuDNN 7.6.4.38 | gpu | iris | Numerical libraries | The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks.
double-conversion 3.1.4 | broadwell, skylake, gpu | iris | Libraries | Efficient binary-decimal and decimal-binary conversion routines for IEEE doubles.
expat 2.2.7 | broadwell, skylake, gpu | iris | Utilities | Expat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags)
flatbuffers 1.12.0 | broadwell, skylake, gpu | iris | Development | FlatBuffers: Memory Efficient Serialization Library
flex 2.6.4 | broadwell, skylake, gpu | iris | Programming Languages | Flex (Fast Lexical Analyzer) is a tool for generating scanners. A scanner, sometimes called a tokenizer, is a program which recognizes lexical patterns in text.
fontconfig 2.13.1 | broadwell, skylake, gpu | iris | Visualisation | Fontconfig is a library designed to provide system-wide font configuration, customization and application access.
foss 2019b | broadwell, skylake | iris | Toolchains (software stacks) | GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
fosscuda 2019b | gpu | iris | Toolchains (software stacks) | GCC based compiler toolchain with CUDA support, and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
freetype 2.10.1 | broadwell, skylake, gpu | iris | Visualisation | FreeType 2 is a software font engine that is designed to be small, efficient, highly customizable, and portable while capable of producing high-quality output (glyph images). It can be used in graphics libraries, display servers, font conversion tools, text image generation tools, and many other products as well.
gc 7.6.12 | broadwell, skylake | iris | Libraries | The Boehm-Demers-Weiser conservative garbage collector can be used as a garbage collecting replacement for C malloc or C++ new.
gcccuda 2019b | gpu | iris | Toolchains (software stacks) | GNU Compiler Collection (GCC) based compiler toolchain, along with CUDA toolkit.
gettext 0.19.8.1 | broadwell, skylake, gpu | iris | Utilities | GNU 'gettext' is an important step for the GNU Translation Project, as it is an asset on which we may build many other steps. This package offers to programmers, translators, and even users, a well integrated set of tools and documentation
gettext 0.20.1 | broadwell, skylake, gpu | iris | Utilities | GNU 'gettext' is an important step for the GNU Translation Project, as it is an asset on which we may build many other steps. This package offers to programmers, translators, and even users, a well integrated set of tools and documentation
gflags 2.2.2 | broadwell, skylake | iris | Development | The gflags package contains a C++ library that implements commandline flags processing. It includes built-in support for standard types such as string and the ability to define flags in the source file in which they are used.
giflib 5.2.1 | broadwell, skylake, gpu | iris | Libraries | giflib is a library for reading and writing gif images. It is API and ABI compatible with libungif which was in wide use while the LZW compression algorithm was patented.
git 2.23.0 | broadwell, skylake, gpu | iris | Utilities | Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
glog 0.4.0 | broadwell, skylake | iris | Development | A C++ implementation of the Google logging module.
gmsh 4.4.0 | broadwell, skylake | iris | CFD/Finite element modelling | Gmsh is a three-dimensional finite element mesh generator with a built-in CAD engine and post-processor.
gnuplot 5.2.8 | broadwell, skylake | iris | Visualisation | Portable interactive, function plotting utility
gocryptfs 1.7.1 | broadwell, skylake | iris | Utilities | Encrypted overlay filesystem written in Go. gocryptfs uses file-based encryption that is implemented as a mountable FUSE filesystem. Each file in gocryptfs is stored as one corresponding encrypted file on the hard disk.
gompi 2019b | broadwell, skylake | iris | Toolchains (software stacks) | GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.
gompic 2019b | gpu | iris | Toolchains (software stacks) | GNU Compiler Collection (GCC) based compiler toolchain along with CUDA toolkit, including OpenMPI for MPI support with CUDA features enabled.
googletest 1.10.0 | broadwell, skylake | iris | Development | Google's framework for writing C++ tests on a variety of platforms
gperf 3.1 | broadwell, skylake, gpu | iris | Development | GNU gperf is a perfect hash function generator. For a given list of strings, it produces a hash function and hash table, in form of C or C++ code, for looking up a value depending on the input string. The hash function is perfect, which means that the hash table has no collisions, and the hash table lookup needs a single string comparison only.
gzip 1.10 | broadwell, skylake | iris | Utilities | gzip (GNU zip) is a popular data compression program as a replacement for compress
h5py 2.10.0 | broadwell, skylake, gpu | iris | Data processing | HDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific software library designed for the fast, flexible storage of enormous amounts of data.
help2man 1.47.4 | broadwell, skylake, gpu | iris | Utilities | help2man produces simple manual pages from the '--help' and '--version' output of other commands.
help2man 1.47.8 | broadwell, skylake, gpu | iris | Utilities | help2man produces simple manual pages from the '--help' and '--version' output of other commands.
hwloc 1.11.12 | broadwell, skylake, gpu | iris | System-level software | The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently.
hypothesis 4.44.2 | broadwell, skylake, gpu | iris | Utilities | Hypothesis is an advanced testing library for Python. It lets you write tests which are parametrized by a source of examples, and then generates simple and comprehensible examples that make your tests fail. This lets you find more bugs in your code with less work.
iccifort 2019.5.281 | broadwell, skylake, gpu | iris | Compilers | Intel C, C++ & Fortran compilers
iccifortcuda 2019b | gpu | iris | Toolchains (software stacks) | Intel C, C++ & Fortran compilers with CUDA toolkit
iimpi 2019b | broadwell, skylake | iris | Toolchains (software stacks) | Intel C/C++ and Fortran compilers, alongside Intel MPI.
iimpic 2019b | gpu | iris | Toolchains (software stacks) | Intel C/C++ and Fortran compilers, alongside Intel MPI and CUDA.
imkl 2019.5.281 | broadwell, skylake, gpu | iris | Numerical libraries | Intel Math Kernel Library is a library of highly optimized, extensively threaded math routines for science, engineering, and financial applications that require maximum performance. Core math functions include BLAS, LAPACK, ScaLAPACK, Sparse Solvers, Fast Fourier Transforms, Vector Math, and more.
impi 2018.5.288 | broadwell, skylake, gpu | iris | MPI | Intel MPI Library, compatible with MPICH ABI
intel 2019b | broadwell, skylake | iris | Toolchains (software stacks) | Compiler toolchain including Intel compilers, Intel MPI and Intel Math Kernel Library (MKL).
intelcuda 2019b | gpu | iris | Toolchains (software stacks) | Intel Cluster Toolkit Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MPI & Intel MKL, with CUDA toolkit
intltool 0.51.0 | broadwell, skylake, gpu | iris | Development | intltool is a set of tools to centralize translation of many different file formats using GNU gettext-compatible PO files.
itac 2019.4.036 | broadwell, skylake | iris | Utilities | The Intel Trace Collector is a low-overhead tracing library that performs event-based tracing in applications. The Intel Trace Analyzer provides a convenient way to monitor application activities gathered by the Intel Trace Collector through graphical displays.
jemalloc 5.2.1 | broadwell, skylake | iris | Libraries | jemalloc is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
kallisto 0.46.1 | broadwell, skylake | iris | Biology | kallisto is a program for quantifying abundances of transcripts from RNA-Seq data, or more generally of target sequences using high-throughput sequencing reads.
kim-api 2.1.3 | broadwell, skylake | iris | Chemistry | Open Knowledgebase of Interatomic Models. KIM is an API and OpenKIM is a collection of interatomic models (potentials) for atomistic simulations. This is a library that can be used by simulation programs to get access to the models in the OpenKIM database. This EasyBuild only installs the API, the models can be installed with the package openkim-models, or the user can install them manually by running kim-api-collections-management install user MODELNAME or kim-api-collections-management install user OpenKIM to install them all.
libGLU 9.0.1 | broadwell, skylake, gpu | iris | Visualisation | The OpenGL Utility Library (GLU) is a computer graphics library for OpenGL.
libcerf 1.13 | broadwell, skylake | iris | Mathematics | libcerf is a self-contained numeric library that provides an efficient and accurate implementation of complex error functions, along with Dawson, Faddeeva, and Voigt functions.
libctl 4.0.0 | broadwell, skylake | iris | Chemistry | libctl is a free Guile-based library implementing flexible control files for scientific simulations.
libdrm 2.4.99 | broadwell, skylake, gpu | iris | Libraries | Direct Rendering Manager runtime library.
libepoxy 1.5.4 | broadwell, skylake | iris | Libraries | Epoxy is a library for handling OpenGL function pointer management for you
libevent 2.1.11 | broadwell, skylake | iris | Libraries | The libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also support callbacks due to signals or regular timeouts.
libffi 3.2.1 | broadwell, skylake, gpu | iris | Libraries | The libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time.
libgd 2.2.5 | broadwell, skylake | iris | Libraries | GD is an open source code library for the dynamic creation of images by programmers.
libgeotiff 1.5.1 | broadwell, skylake, gpu | iris | Libraries | Library for reading and writing coordinate system information from/to GeoTIFF files
libglvnd 1.2.0 | broadwell, skylake | iris | Libraries | libglvnd is a vendor-neutral dispatch layer for arbitrating OpenGL API calls between multiple vendors.
libgpuarray 0.7.6 | gpu | iris | Libraries | Library to manipulate tensors on the GPU.
libiconv 1.16 | broadwell, skylake, gpu | iris | Libraries | Libiconv converts from one character encoding to another through Unicode conversion
libjpeg-turbo 2.0.3 | broadwell, skylake, gpu | iris | Libraries | libjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD to accelerate baseline JPEG compression and decompression. libjpeg is a library that implements JPEG image encoding, decoding and transcoding.
libmatheval 1.1.11 | broadwell, skylake | iris | Libraries | GNU libmatheval is a library (callable from C and Fortran) to parse and evaluate symbolic expressions input as text.
libpciaccess 0.14 | broadwell, skylake, gpu | iris | System-level software | Generic PCI access library.
libpng 1.6.37 | broadwell, skylake, gpu | iris | Libraries | libpng is the official PNG reference library
libreadline 8.0 | broadwell, skylake, gpu | iris | Libraries | The GNU Readline library provides a set of functions for use by applications that allow users to edit command lines as they are typed in. Both Emacs and vi editing modes are available. The Readline library includes additional functions to maintain a list of previously-entered command lines, to recall and perhaps reedit those lines, and perform csh-like history expansion on previous commands.
libsndfile 1.0.28 | broadwell, skylake, gpu | iris | Libraries | Libsndfile is a C library for reading and writing files containing sampled sound (such as MS Windows WAV and the Apple/SGI AIFF format) through one standard library interface.
libtool 2.4.6 | broadwell, skylake, gpu | iris | Libraries | GNU libtool is a generic library support script. Libtool hides the complexity of using shared libraries behind a consistent, portable interface.
libunistring 0.9.10 | broadwell, skylake | iris | Libraries | This library provides functions for manipulating Unicode strings and for manipulating C strings according to the Unicode standard.
libunwind 1.3.1 | broadwell, skylake, gpu | iris | Libraries | The primary goal of libunwind is to define a portable and efficient C programming interface (API) to determine the call-chain of a program. The API additionally provides the means to manipulate the preserved (callee-saved) state of each call-frame and to resume execution at any point in the call-chain (non-local goto). The API supports both local (same-process) and remote (across-process) operation. As such, the API is useful in a number of applications
libxc 4.3.4 | broadwell, skylake | iris | Chemistry | Libxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.
libxml2 2.9.9 | broadwell, skylake, gpu | iris | Libraries | Libxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform).
libxslt 1.1.34 | broadwell, skylake | iris | Libraries | Libxslt is the XSLT C library developed for the GNOME project (but usable outside of the Gnome platform).
libyaml 0.2.2 | broadwell, skylake, gpu | iris | Libraries | LibYAML is a YAML parser and emitter written in C.
lxml 4.4.2 | broadwell, skylake | iris | Libraries | The lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt.
magma 2.5.1 | gpu | iris | Mathematics | The MAGMA project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current Multicore+GPU systems.
magma 2.5.4 | gpu | iris | Mathematics | The MAGMA project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current Multicore+GPU systems.
matplotlib 3.1.1 | broadwell, skylake | iris | Visualisation | matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell, web application servers, and six graphical user interface toolkits.
    molmod 1.4.5broadwell, skylakeirisMathematicsMolMod is a Python library with many compoments that are useful to write molecular modeling programs.
    ncurses 6.0broadwell, skylake, gpuirisDevelopmentThe Ncurses (new curses) library is a free software emulation of curses in System V Release 4.0, and more. It uses Terminfo format, supports pads and color and multiple highlights and forms characters and function-key mapping, and has all the other SYSV-curses enhancements over BSD Curses.
    ncurses 6.1broadwell, skylake, gpuirisDevelopmentThe Ncurses (new curses) library is a free software emulation of curses in System V Release 4.0, and more. It uses Terminfo format, supports pads and color and multiple highlights and forms characters and function-key mapping, and has all the other SYSV-curses enhancements over BSD Curses.
    netCDF-Fortran 4.5.2broadwell, skylakeirisData processingNetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
    netCDF 4.7.1broadwell, skylake, gpuirisData processingNetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
    nettle 3.5.1broadwell, skylake, gpuirisLibrariesNettle is a cryptographic library that is designed to fit easily in more or less any context: In crypto toolkits for object-oriented languages (C++, Python, Pike, ...), in applications like LSH or GNUPG, or even in kernel space.
    nsync 1.24.0broadwell, skylake, gpuirisDevelopmentnsync is a C library that exports various synchronization primitives, such as mutexes
    numactl 2.0.12broadwell, skylake, gpuirisUtilitiesThe numactl program allows you to run your application program on specific cpu's and memory nodes. It does this by supplying a NUMA memory policy to the operating system before running your program. The libnuma library provides convenient ways for you to add NUMA memory policies into your own program.
    phonopy 2.2.0broadwell, skylakeirisLibrariesPhonopy is an open source package of phonon calculations based on the supercell approach.
    pixman 0.38.4broadwell, skylake, gpuirisVisualisationPixman is a low-level software library for pixel manipulation, providing features such as image compositing and trapezoid rasterization. Important users of pixman are the cairo graphics library and the X server.
    pkg-config 0.29.2broadwell, skylake, gpuirisDevelopmentpkg-config is a helper tool used when compiling applications and libraries. It helps you insert the correct compiler options on the command line so an application can use gcc -o test test.c pkg-config --libs --cflags glib-2.0 for instance, rather than hard-coding values on where to find glib (or other libraries).
    pkgconfig 1.5.1broadwell, skylake, gpuirisDevelopmentpkgconfig is a Python module to interface with the pkg-config command line tool
    pocl 1.4gpuirisLibrariesPocl is a portable open source (MIT-licensed) implementation of the OpenCL standard
    protobuf-python 3.10.0broadwell, skylake, gpuirisDevelopmentPython Protocol Buffers runtime library.
    protobuf 2.5.0broadwell, skylakeirisDevelopmentGoogle Protocol Buffers
    protobuf 3.10.0broadwell, skylakeirisDevelopmentGoogle Protocol Buffers
    pybind11 2.4.3broadwell, skylake, gpuirisLibrariespybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code.
re2c 1.2.1broadwell, skylakeirisUtilitiesre2c is a free and open-source lexer generator for C and C++. Its main goal is generating fast lexers: at least as fast as their reasonably optimized hand-coded counterparts. Instead of using the traditional table-driven approach, re2c encodes the generated finite state automata directly in the form of conditional jumps and comparisons.
    scipy 1.4.1broadwell, skylake, gpuirisMathematicsSciPy is a collection of mathematical algorithms and convenience functions built on the Numpy extension for Python.
    setuptools 41.0.1broadwell, skylakeirisDevelopmentEasily download, build, install, upgrade, and uninstall Python packages
    snappy 1.1.7broadwell, skylake, gpuirisLibrariesSnappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression.
    sparsehash 2.0.3broadwell, skylakeirisDevelopmentAn extremely memory-efficient hash_map implementation. 2 bits/entry overhead! The SparseHash library contains several hash-map implementations, including implementations that optimize for space or speed.
    tbb 2019_U9broadwell, skylakeirisLibrariesIntel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability.
    tbb 2020.2broadwell, skylakeirisLibrariesIntel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability.
    texinfo 6.7broadwell, skylakeirisDevelopmentTexinfo is the official documentation format of the GNU project.
    typing-extensions 3.7.4.3gpuirisDevelopmentTyping Extensions – Backported and Experimental Type Hints for Python
    util-linux 2.34broadwell, skylake, gpuirisUtilitiesSet of Linux utilities
    x264 20190925broadwell, skylake, gpuirisVisualisationx264 is a free software library and application for encoding video streams into the H.264/MPEG-4 AVC compression format, and is released under the terms of the GNU GPL.
x265 3.2broadwell, skylake, gpuirisVisualisationx265 is a free software library and application for encoding video streams into the H.265/HEVC compression format, and is released under the terms of the GNU GPL.
    xorg-macros 1.19.2broadwell, skylake, gpuirisDevelopmentX.org macros utilities.
    xprop 1.2.4broadwell, skylakeirisVisualisationThe xprop utility is for displaying window and font properties in an X server. One window or font is selected using the command line arguments or possibly in the case of a window, by clicking on the desired window. A list of properties is then given, possibly with formatting information.
    yaff 1.6.0broadwell, skylakeirisChemistryYaff stands for 'Yet another force field'. It is a pythonic force-field code.
    zlib 1.2.11broadwell, skylake, gpuirisLibrarieszlib is designed to be a free, general-purpose, legally unencumbered -- that is, not covered by any patents -- lossless data-compression library for use on virtually any computer hardware and operating system.
\ No newline at end of file
diff --git a/software/swsets/2020b/index.html b/software/swsets/2020b/index.html new file mode 100644 index 00000000..bba070f4 --- /dev/null +++ b/software/swsets/2020b/index.html @@ -0,0 +1,4994 @@

2020b - ULHPC Technical Documentation

2020b


Alphabetical list of available ULHPC software belonging to the '2020b' software set.
To load software from this set, use:

    # If needed: resif-load-swset-[...]
    module load <category>/<software>[/<version>]
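For instance, a session on a login node could look like the sketch below. The full module path used here (lang/Python/3.8.6-GCCcore-10.2.0) is an assumption made for illustration only; the exact category, version suffix and toolchain should be confirmed with module avail or module spider before loading.

    # list the Python modules visible in the current software set (name is case-sensitive)
    module avail Python
    # load one of them; the full path below is a hypothetical example
    module load lang/Python/3.8.6-GCCcore-10.2.0
    # check that the environment now uses the module
    python --version
    module list

Running module purge beforehand gives a clean environment and avoids mixing modules coming from different software sets.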

Software | Architectures | Clusters | Category | Description
    ABAQUS 2021broadwell, epyc, skylakeaion, irisCFD/Finite element modellingFinite Element Analysis software for modeling, visualization and best-in-class implicit and explicit dynamics FEA.
    ABINIT 9.4.1epycaionChemistryABINIT is a package whose main program allows one to find the total energy, charge density and electronic structure of systems made of electrons and nuclei (molecules and periodic solids) within Density Functional Theory (DFT), using pseudopotentials and a planewave or wavelet basis.
    ABySS 2.2.5broadwell, epyc, skylakeaion, irisBiologyAssembly By Short Sequences - a de novo, parallel, paired-end sequence assembler
    ACTC 1.1broadwell, skylakeirisLibrariesACTC converts independent triangles into triangle strips or fans.
    ANSYS 21.1broadwell, skylakeirisUtilitiesANSYS simulation software enables organizations to confidently predict how their products will operate in the real world. We believe that every product is a promise of something greater.
    AOCC 3.1.0epycaionCompilersAMD Optimized C/C++ & Fortran compilers (AOCC) based on LLVM 12.0
    ASE 3.20.1broadwell, epyc, skylake, gpuaion, irisChemistryASE is a python package providing an open source Atomic Simulation Environment in the Python scripting language. From version 3.20.1 we also include the ase-ext package, it contains optional reimplementations in C of functions in ASE. ASE uses it automatically when installed.
    ASE 3.21.1broadwell, epyc, skylake, gpuaion, irisChemistryASE is a python package providing an open source Atomic Simulation Environment in the Python scripting language. From version 3.20.1 we also include the ase-ext package, it contains optional reimplementations in C of functions in ASE. ASE uses it automatically when installed.
    ATK 2.36.0broadwell, epyc, skylakeaion, irisVisualisationATK provides the set of accessibility interfaces that are implemented by other toolkits and applications. Using the ATK interfaces, accessibility tools have full access to view and control running applications.
    Anaconda3 2020.11broadwell, epyc, skylakeaion, irisProgramming LanguagesBuilt to complement the rich, open source Python community, the Anaconda platform provides an enterprise-ready data analytics platform that empowers companies to adopt a modern open data science analytics architecture.
    ArmForge 20.0.3broadwell, skylakeirisUtilitiesThe industry standard development package for C, C++ and Fortran high performance code on Linux. Forge is designed to handle the complex software projects - including parallel, multiprocess and multithreaded code. Arm Forge combines an industry-leading debugger, Arm DDT, and an out-of-the-box-ready profiler, Arm MAP.
    ArmReports 20.0.3broadwell, skylakeirisUtilitiesArm Performance Reports - a low-overhead tool that produces one-page text and HTML reports summarizing and characterizing both scalar and MPI application performance. Arm Performance Reports runs transparently on optimized production-ready codes by adding a single command to your scripts, and provides the most effective way to characterize and understand the performance of HPC application runs.
    Armadillo 10.5.3broadwell, epyc, skylakeaion, irisNumerical librariesArmadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions.
    Aspera-CLI 3.9.6broadwell, epyc, skylakeaion, irisUtilitiesIBM Aspera Command-Line Interface (the Aspera CLI) is a collection of Aspera tools for performing high-speed, secure data transfers from the command line. The Aspera CLI is for users and organizations who want to automate their transfer workflows.
    Autoconf 2.69broadwell, skylake, gpuirisDevelopmentAutoconf is an extensible package of M4 macros that produce shell scripts to automatically configure software source code packages. These scripts can adapt the packages to many kinds of UNIX-like systems without manual user intervention. Autoconf creates a configuration script for a package from a template file that lists the operating system features that the package can use, in the form of M4 macro calls.
    Automake 1.16.2broadwell, epyc, skylake, gpuaion, irisDevelopmentAutomake: GNU Standards-compliant Makefile generator
    Autotools 20200321broadwell, epyc, skylake, gpuaion, irisDevelopmentThis bundle collect the standard GNU build tools: Autoconf, Automake and libtool
    BEDTools 2.30.0broadwell, epyc, skylakeaion, irisBiologyBEDTools: a powerful toolset for genome arithmetic. The BEDTools utilities allow one to address common genomics tasks such as finding feature overlaps and computing coverage. The utilities are largely based on four widely-used file formats: BED, GFF/GTF, VCF, and SAM/BAM.
    BLAST+ 2.11.0broadwell, epyc, skylakeaion, irisBiologyBasic Local Alignment Search Tool, or BLAST, is an algorithm for comparing primary biological sequence information, such as the amino-acid sequences of different proteins or the nucleotides of DNA sequences.
    BWA 0.7.17broadwell, skylakeirisBiologyBurrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome.
    BamTools 2.5.1broadwell, skylakeirisBiologyBamTools provides both a programmer's API and an end-user's toolkit for handling BAM files.
    Bazel 3.7.2broadwell, epyc, skylake, gpuaion, irisDevelopmentBazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software.
    BioPerl 1.7.8broadwell, epyc, skylakeaion, irisBiologyBioperl is the product of a community effort to produce Perl code which is useful in biology. Examples include Sequence objects, Alignment objects and database searching objects.
    Bison 3.5.3broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesBison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.
    Bison 3.7.1broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesBison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.
    Boost.Python 1.74.0broadwell, epyc, skylakeaion, irisLibrariesBoost.Python is a C++ library which enables seamless interoperability between C++ and the Python programming language.
    Boost 1.74.0broadwell, epyc, skylakeaion, irisDevelopmentBoost provides free peer-reviewed portable C++ source libraries.
    Bowtie2 2.4.2broadwell, epyc, skylakeaion, irisBiologyBowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes. Bowtie 2 indexes the genome with an FM Index to keep its memory footprint small: for the human genome, its memory footprint is typically around 3.2 GB. Bowtie 2 supports gapped, local, and paired-end alignment modes.
    CGAL 5.2broadwell, epyc, skylakeaion, irisNumerical librariesThe goal of the CGAL Open Source Project is to provide easy access to efficient and reliable geometric algorithms in the form of a C++ library.
    CMake 3.18.4broadwell, epyc, skylake, gpuaion, irisDevelopmentCMake, the cross-platform, open-source build system. CMake is a family of tools designed to build, test and package software.
    CMake 3.20.1broadwell, epyc, skylake, gpuaion, irisDevelopmentCMake, the cross-platform, open-source build system. CMake is a family of tools designed to build, test and package software.
    CUDA 11.1.1gpuirisSystem-level softwareCUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.
    CUDAcore 11.1.1gpuirisSystem-level softwareCUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.
    Check 0.15.2gpuirisLibrariesCheck is a unit testing framework for C. It features a simple interface for defining unit tests, putting little in the way of the developer. Tests are run in a separate address space, so both assertion failures and code errors that cause segmentation faults or other signals can be caught. Test results are reportable in the following: Subunit, TAP, XML, and a generic logging format.
    Clang 11.0.1broadwell, epyc, skylake, gpuaion, irisCompilersC, C++, Objective-C compiler, based on LLVM. Does not include C++ standard library -- use libstdc++ from GCC.
    DB 18.1.40broadwell, epyc, skylake, gpuaion, irisUtilitiesBerkeley DB enables the development of custom data management solutions, without the overhead traditionally associated with such custom projects.
    DB_File 1.855broadwell, epyc, skylakeaion, irisData processingPerl5 access to Berkeley DB version 1.x.
    DBus 1.13.18broadwell, epyc, skylakeaion, irisDevelopmentD-Bus is a message bus system, a simple way for applications to talk to one another. In addition to interprocess communication, D-Bus helps coordinate process lifecycle; it makes it simple and reliable to code a "single instance" application or daemon, and to launch applications and daemons on demand when their services are needed.
Dakota 6.15.0broadwell, skylakeirisMathematicsThe Dakota project delivers both state-of-the-art research and robust, usable software for optimization and UQ. Broadly, the Dakota software's advanced parametric analyses enable design exploration, model calibration, risk analysis, and quantification of margins and uncertainty with computational models.
    Doxygen 1.8.20broadwell, epyc, skylake, gpuaion, irisDevelopmentDoxygen is a documentation system for C++, C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors), Fortran, VHDL, PHP, C#, and to some extent D.
    ELPA 2020.11.001broadwell, epyc, skylakeaion, irisMathematicsEigenvalue SoLvers for Petaflop-Applications .
    EasyBuild 4.4.1broadwell, epyc, skylakeaion, irisUtilitiesEasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way.
    EasyBuild 4.4.2broadwell, epyc, skylakeaion, irisUtilitiesEasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way.
    EasyBuild 4.5.4broadwell, epyc, skylakeaion, irisUtilitiesEasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way.
    Eigen 3.3.8broadwell, epyc, skylake, gpuaion, irisMathematicsEigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
    Eigen 3.4.0broadwell, epyc, skylake, gpuaion, irisMathematicsEigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
    Elk 7.0.12broadwell, epyc, skylakeaion, irisPhysicsAn all-electron full-potential linearised augmented-plane wave (FP-LAPW) code with many advanced features. Written originally at Karl-Franzens-Universität Graz as a milestone of the EXCITING EU Research and Training Network, the code is designed to be as simple as possible so that new developments in the field of density functional theory (DFT) can be added quickly and reliably.
    FDS 6.7.6broadwell, epyc, skylakeaion, irisPhysicsFire Dynamics Simulator (FDS) is a large-eddy simulation (LES) code for low-speed flows, with an emphasis on smoke and heat transport from fires.
    FFTW 3.3.8broadwell, skylake, gpuirisNumerical librariesFFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.
    FFmpeg 4.3.1broadwell, epyc, skylake, gpuaion, irisVisualisationA complete, cross-platform solution to record, convert and stream audio and video.
    FLAC 1.3.3broadwell, epyc, skylake, gpuaion, irisLibrariesFLAC stands for Free Lossless Audio Codec, an audio format similar to MP3, but lossless, meaning that audio is compressed in FLAC without any loss in quality.
    FLTK 1.3.5broadwell, skylakeirisVisualisationFLTK is a cross-platform C++ GUI toolkit for UNIX/Linux (X11), Microsoft Windows, and MacOS X. FLTK provides modern GUI functionality without the bloat and supports 3D graphics via OpenGL and its built-in GLUT emulation.
    FastQC 0.11.9broadwell, skylakeirisBiologyFastQC is a quality control application for high throughput sequence data. It reads in sequence data in a variety of formats and can either provide an interactive application to review the results of several different QC checks, or create an HTML based report which can be integrated into a pipeline.
    Flask 1.1.2broadwell, epyc, skylake, gpuaion, irisLibrariesFlask is a lightweight WSGI web application framework. It is designed to make getting started quick and easy, with the ability to scale up to complex applications. This module includes the Flask extensions: Flask-Cors
    Flink 1.11.2broadwell, epyc, skylakeaion, irisDevelopmentApache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments, perform computations at in-memory speed and at any scale.
    FreeImage 3.18.0broadwell, epyc, skylakeaion, irisVisualisationFreeImage is an Open Source library project for developers who would like to support popular graphics image formats like PNG, BMP, JPEG, TIFF and others as needed by today's multimedia applications. FreeImage is easy to use, fast, multithreading safe.
    FriBidi 1.0.10broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesThe Free Implementation of the Unicode Bidirectional Algorithm.
    GCC 10.2.0broadwell, epyc, skylake, gpuaion, irisCompilersThe GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
    GCCcore 10.2.0broadwell, epyc, skylake, gpuaion, irisCompilersThe GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
    GDAL 3.2.1broadwell, epyc, skylake, gpuaion, irisData processingGDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing.
    GDB 10.1broadwell, epyc, skylakeaion, irisDebuggingThe GNU Project Debugger
    GDRCopy 2.1gpuirisLibrariesA low-latency GPU memory copy library based on NVIDIA GPUDirect RDMA technology.
    GEOS 3.9.1broadwell, epyc, skylake, gpuaion, irisMathematicsGEOS (Geometry Engine - Open Source) is a C++ port of the Java Topology Suite (JTS)
    GLPK 4.65broadwell, skylakeirisUtilitiesThe GLPK (GNU Linear Programming Kit) package is intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library.
    GLib 2.66.1broadwell, epyc, skylake, gpuaion, irisVisualisationGLib is one of the base libraries of the GTK+ project
    GMP 6.2.0broadwell, epyc, skylake, gpuaion, irisMathematicsGMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers.
    GObject-Introspection 1.66.1broadwell, epyc, skylakeaion, irisDevelopmentGObject introspection is a middleware layer between C libraries (using GObject) and language bindings. The C library can be scanned at compile time and generate a metadata file, in addition to the actual native C library. Then at runtime, language bindings can read this metadata and automatically provide bindings to call into the C library.
    GROMACS 2021broadwell, epyc, skylakeaion, irisBiologyGROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a CPU only build, containing both MPI and threadMPI builds for both single and double precision. It also contains the gmxapi extension for the single precision MPI build.
    GROMACS 2021.2broadwell, epyc, skylakeaion, irisBiologyGROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a CPU only build, containing both MPI and threadMPI builds for both single and double precision. It also contains the gmxapi extension for the single precision MPI build.
    GSL 2.6broadwell, skylake, gpuirisNumerical librariesThe GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting.
    GTK+ 3.24.23broadwell, epyc, skylakeaion, irisVisualisationGTK+ is the primary library used to construct user interfaces in GNOME. It provides all the user interface controls, or widgets, used in a common graphical application. Its object-oriented API allows you to construct user interfaces without dealing with the low-level details of drawing and device interaction.
    Gdk-Pixbuf 2.40.0broadwell, epyc, skylakeaion, irisVisualisationThe Gdk Pixbuf is a toolkit for image loading and pixel buffer manipulation. It is used by GTK+ 2 and GTK+ 3 to load and manipulate images. In the past it was distributed as part of GTK+ 2 but it was split off into a separate package in preparation for the change to GTK+ 3.
    Ghostscript 9.53.3broadwell, epyc, skylake, gpuaion, irisUtilitiesGhostscript is a versatile processor for PostScript data with the ability to render PostScript to different targets. It used to be part of the cups printing stack, but is no longer used for that.
    Go 1.14.1broadwell, skylakeirisCompilersGo is an open source programming language that makes it easy to build simple, reliable, and efficient software.
    Go 1.16.6broadwell, epyc, skylakeaion, irisCompilersGo is an open source programming language that makes it easy to build simple, reliable, and efficient software.
    Gurobi 9.1.2broadwell, epyc, skylakeaion, irisMathematicsThe Gurobi Optimizer is a state-of-the-art solver for mathematical programming. The solvers in the Gurobi Optimizer were designed from the ground up to exploit modern architectures and multi-core processors, using the most advanced implementations of the latest algorithms.
    HDF5 1.10.7broadwell, epyc, skylake, gpuaion, irisData processingHDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.
    HDF 4.2.15broadwell, epyc, skylake, gpuaion, irisData processingHDF (also known as HDF4) is a library and multi-object file format for storing and managing data between machines.
    HTSlib 1.12broadwell, epyc, skylakeaion, irisBiologyA C library for reading/writing high-throughput sequencing data. This package includes the utilities bgzip and tabix
    Hadoop 2.10.0broadwell, epyc, skylakeaion, irisUtilitiesHadoop MapReduce by Cloudera
    HarfBuzz 2.6.7broadwell, epyc, skylakeaion, irisVisualisationHarfBuzz is an OpenType text shaping engine.
    Horovod 0.22.0gpuirisUtilitiesHorovod is a distributed training framework for TensorFlow.
    Hypre 2.20.0broadwell, epyc, skylakeaion, irisNumerical librariesHypre is a library for solving large, sparse linear systems of equations on massively parallel computers. The problems of interest arise in the simulation codes being developed at LLNL and elsewhere to study physical phenomena in the defense, environmental, energy, and biological sciences.
    ICU 67.1broadwell, epyc, skylake, gpuaion, irisLibrariesICU is a mature, widely used set of C/C++ and Java libraries providing Unicode and Globalization support for software applications.
    ISL 0.23broadwell, epyc, skylakeaion, irisMathematicsisl is a library for manipulating sets and relations of integer points bounded by linear constraints.
    ImageMagick 7.0.10-35broadwell, epyc, skylake, gpuaion, irisVisualisationImageMagick is a software suite to create, edit, compose, or convert bitmap images
    JasPer 2.0.24broadwell, epyc, skylake, gpuaion, irisVisualisationThe JasPer Project is an open-source initiative to provide a free software-based reference implementation of the codec specified in the JPEG-2000 Part-1 standard.
    Java 1.8.0_241broadwell, skylake, gpuirisProgramming LanguagesJava Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers.
    Java 11.0.2broadwell, epyc, skylakeaion, irisProgramming LanguagesJava Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers.
    Java 13.0.2broadwell, epyc, skylakeaion, irisProgramming LanguagesJava Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers.
    Java 16.0.1broadwell, epyc, skylakeaion, irisProgramming LanguagesJava Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers.
    JsonCpp 1.9.4broadwell, epyc, skylake, gpuaion, irisLibrariesJsonCpp is a C++ library that allows manipulating JSON values, including serialization and deserialization to and from strings. It can also preserve existing comment in unserialization/serialization steps, making it a convenient format to store user input files.
    Julia 1.6.2broadwell, epyc, skylakeaion, irisProgramming LanguagesJulia is a high-level, high-performance dynamic programming language for numerical computing
    Keras 2.4.3broadwell, epyc, skylake, gpuaion, irisMathematicsKeras is a deep learning API written in Python, running on top of the machine learning platform TensorFlow.
    LAME 3.100broadwell, skylake, gpuirisData processingLAME is a high quality MPEG Audio Layer III (MP3) encoder licensed under the LGPL.
    LLVM 10.0.1broadwell, epyc, skylake, gpuaion, irisCompilersThe LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified code representation known as the LLVM intermediate representation ("LLVM IR"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator.
    LLVM 11.0.0broadwell, epyc, skylake, gpuaion, irisCompilersThe LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified code representation known as the LLVM intermediate representation ("LLVM IR"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator.
    LMDB 0.9.24broadwell, skylake, gpuirisLibrariesLMDB is a fast, memory-efficient database. With memory-mapped files, it has the read performance of a pure in-memory database while retaining the persistence of standard disk-based databases.
    LibTIFF 4.1.0broadwell, epyc, skylake, gpuaion, irisLibrariestiff: Library and tools for reading and writing TIFF data files
    LittleCMS 2.11broadwell, epyc, skylake, gpuaion, irisVisualisationLittle CMS intends to be an OPEN SOURCE small-footprint color management engine, with special focus on accuracy and performance.
    Lua 5.4.2broadwell, epyc, skylakeaion, irisProgramming LanguagesLua is a powerful, fast, lightweight, embeddable scripting language. Lua combines simple procedural syntax with powerful data description constructs based on associative arrays and extensible semantics. Lua is dynamically typed, runs by interpreting bytecode for a register-based virtual machine, and has automatic memory management with incremental garbage collection, making it ideal for configuration, scripting, and rapid prototyping.
    M4 1.4.18broadwell, skylake, gpuirisDevelopmentGNU M4 is an implementation of the traditional Unix macro processor. It is mostly SVR4 compatible although it has some extensions (for example, handling more than 9 positional parameters to macros). GNU M4 also has built-in functions for including files, running shell commands, doing arithmetic, etc.
    MATLAB 2021abroadwell, epyc, skylakeaion, irisMathematicsMATLAB is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran.
    METIS 5.1.0broadwell, skylakeirisMathematicsMETIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.
    MPC 1.2.1broadwell, epyc, skylakeaion, irisMathematicsGnu Mpc is a C library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result. It extends the principles of the IEEE-754 standard for fixed precision real floating point numbers to complex numbers, providing well-defined semantics for every operation. At the same time, speed of operation at high precision is a major design goal.
    MPFR 4.1.0broadwell, epyc, skylake, gpuaion, irisMathematicsThe MPFR library is a C library for multiple-precision floating-point computations with correct rounding.
    MUMPS 5.3.5broadwell, epyc, skylakeaion, irisMathematicsA parallel sparse direct solver
    Mako 1.1.3broadwell, epyc, skylake, gpuaion, irisDevelopmentA super-fast templating language that borrows the best ideas from the existing templating languages
    Mathematica 12.1.0broadwell, epyc, skylakeaion, irisMathematicsMathematica is a computational software program used in many scientific, engineering, mathematical and computing fields.
    Maven 3.6.3broadwell, skylakeirisDevelopmentBinary maven install, Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information.
    Mesa 20.2.1broadwell, epyc, skylake, gpuaion, irisVisualisationMesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.
    Meson 0.55.3broadwell, epyc, skylake, gpuaion, irisUtilitiesMeson is a cross-platform build system designed to be both as fast and as user friendly as possible.
    NASM 2.15.05broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesNASM: General-purpose x86 assembler
    NCCL 2.8.3gpuirisLibrariesThe NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs.
    NLopt 2.6.2broadwell, epyc, skylake, gpuaion, irisNumerical librariesNLopt is a free/open-source library for nonlinear optimization, providing a common interface for a number of different free optimization routines available online as well as original implementations of various other algorithms.
    NSPR 4.29broadwell, epyc, skylakeaion, irisLibrariesNetscape Portable Runtime (NSPR) provides a platform-neutral API for system level and libc-like functions.
    NSS 3.57broadwell, epyc, skylakeaion, irisLibrariesNetwork Security Services (NSS) is a set of libraries designed to support cross-platform development of security-enabled client and server applications.
    Ninja 1.10.1broadwell, epyc, skylake, gpuaion, irisUtilitiesNinja is a small build system with a focus on speed.
    OpenBLAS 0.3.12broadwell, epyc, skylake, gpuaion, irisNumerical librariesOpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
    OpenCV 4.5.1broadwell, epyc, skylakeaion, irisVisualisationOpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Includes extra modules for OpenCV from the contrib repository.
    OpenEXR 2.5.5broadwell, epyc, skylakeaion, irisVisualisationOpenEXR is a high dynamic-range (HDR) image file format developed by Industrial Light & Magic for use in computer imaging applications
    OpenFOAM 8epycaionCFD/Finite element modellingOpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
    OpenMPI 4.0.5broadwell, epyc, skylake, gpuaion, irisMPIThe Open MPI Project is an open source MPI-3 implementation.
PAPI 6.0.0broadwell, skylakeirisPerformance measurementsPAPI provides the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. In addition, Component PAPI provides access to a collection of components that expose performance measurement opportunities across the hardware and software stack.
    PCRE2 10.35broadwell, epyc, skylake, gpuaion, irisDevelopmentThe PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
    PCRE 8.44broadwell, epyc, skylake, gpuaion, irisDevelopmentThe PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
    PETSc 3.14.4broadwell, epyc, skylakeaion, irisNumerical librariesPETSc, pronounced PET-see (the S is silent), is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.
    PLUMED 2.7.0broadwell, epyc, skylakeaion, irisChemistryPLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state of the art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both fortran and C/C++ codes.
    POV-Ray 3.7.0.8broadwell, epyc, skylakeaion, irisVisualisationThe Persistence of Vision Raytracer, or POV-Ray, is a ray tracing program which generates images from a text-based scene description, and is available for a variety of computer platforms. POV-Ray is a high-quality, Free Software tool for creating stunning three-dimensional graphics. The source code is available for those wanting to do their own ports.
    PROJ 7.2.1broadwell, epyc, skylake, gpuaion, irisLibrariesProgram proj is a standard Unix filter function which converts geographic longitude and latitude coordinates into cartesian coordinates
    Pango 1.47.0broadwell, epyc, skylakeaion, irisVisualisationPango is a library for laying out and rendering of text, with an emphasis on internationalization. Pango can be used anywhere that text layout is needed, though most of the work on Pango so far has been done in the context of the GTK+ widget toolkit. Pango forms the core of text and font handling for GTK+-2.x.
    ParaView 5.8.1broadwell, epyc, skylakeaion, irisVisualisationParaView is a scientific parallel visualizer.
    Perl 5.32.0broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesLarry Wall's Practical Extraction and Report Language This is a minimal build without any modules. Should only be used for build dependencies.
    Pillow 8.0.1broadwell, epyc, skylake, gpuaion, irisVisualisationPillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.
    PyOpenGL 3.1.5broadwell, epyc, skylakeaion, irisVisualisationPyOpenGL is the most common cross platform Python binding to OpenGL and related APIs.
    PyQt5 5.15.1broadwell, epyc, skylakeaion, irisVisualisationPyQt5 is a set of Python bindings for v5 of the Qt application framework from The Qt Company. This bundle includes PyQtWebEngine, a set of Python bindings for The Qt Company’s Qt WebEngine framework.
    PyQtGraph 0.11.1broadwell, epyc, skylakeaion, irisVisualisationPyQtGraph is a pure-python graphics and GUI library built on PyQt5/PySide2 and numpy.
    PyTorch-Geometric 1.6.3broadwell, epyc, skylake, gpuaion, irisLibrariesPyTorch Geometric (PyG) is a geometric deep learning extension library for PyTorch.
    PyTorch 1.7.1gpuirisDevelopmentTensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.
    PyTorch 1.8.1broadwell, epyc, skylake, gpuaion, irisDevelopmentTensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.
    PyTorch 1.9.0broadwell, epyc, skylake, gpuaion, irisDevelopmentTensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.
    PyYAML 5.3.1broadwell, epyc, skylake, gpuaion, irisLibrariesPyYAML is a YAML parser and emitter for the Python programming language.
    Python 2.7.18broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesPython is a programming language that lets you work more quickly and integrate your systems more effectively.
    Python 3.8.6broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesPython is a programming language that lets you work more quickly and integrate your systems more effectively.
    Qt5 5.14.2broadwell, epyc, skylakeaion, irisDevelopmentQt is a comprehensive cross-platform C++ application framework.
    QuantumESPRESSO 6.7broadwellirisChemistryQuantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).
    RDFlib 5.0.0broadwell, epyc, skylake, gpuaion, irisLibrariesRDFLib is a Python library for working with RDF, a simple yet powerful language for representing information.
    R 4.0.5broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesR is a free software environment for statistical computing and graphics.
    ReFrame 3.6.3broadwell, epyc, skylakeaion, irisDevelopmentReFrame is a framework for writing regression tests for HPC systems.
    Ruby 2.7.2broadwell, epyc, skylakeaion, irisProgramming LanguagesRuby is a dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write.
    SAMtools 1.12broadwell, epyc, skylakeaion, irisBiologySAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
    SCOTCH 6.1.0broadwell, epyc, skylakeaion, irisMathematicsSoftware package and libraries for sequential and parallel graph partitioning, static mapping, and sparse matrix block ordering, and sequential mesh and hypergraph partitioning.
    SDL2 2.0.14broadwell, epyc, skylakeaion, irisLibrariesSDL: Simple DirectMedia Layer, a cross-platform multimedia library
    SLEPc 3.14.2broadwell, epyc, skylakeaion, irisNumerical librariesSLEPc (Scalable Library for Eigenvalue Problem Computations) is a software library for the solution of large scale sparse eigenvalue problems on parallel computers. It is an extension of PETSc and can be used for either standard or generalized eigenproblems, with real or complex arithmetic. It can also be used for computing a partial SVD of a large, sparse, rectangular matrix, and to solve quadratic eigenvalue problems.
    SQLite 3.33.0broadwell, epyc, skylake, gpuaion, irisDevelopmentSQLite: SQL Database Engine in a C Library
    SWIG 4.0.2broadwell, epyc, skylakeaion, irisDevelopmentSWIG is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages.
    Salome 9.8.0broadwell, epyc, skylakeaion, irisCFD/Finite element modellingThe SALOME platform is an open source software framework for pre- and post-processing and integration of numerical solvers from various scientific fields. CEA and EDF use SALOME to perform a large number of simulations, typically related to power plant equipment and alternative energy. To address these challenges, SALOME includes a CAD/CAE modelling tool, mesh generators, an advanced 3D visualization tool, etc.
    ScaLAPACK 2.1.0broadwell, epyc, skylake, gpuaion, irisNumerical librariesThe ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers.
    SciPy-bundle 2020.11broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesBundle of Python packages for scientific software
    Singularity 3.8.1broadwell, epyc, skylakeaion, irisUtilitiesSingularityCE is an open source container platform designed to be simple, fast, and secure. Singularity is optimized for EPC and HPC workloads, allowing untrusted users to run untrusted containers in a trusted way.
    Spack 0.12.1broadwell, skylakeirisDevelopmentSpack is a package manager for supercomputers, Linux, and macOS. It makes installing scientific software easy. With Spack, you can build a package with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine.
    Stata 17broadwell, epyc, skylakeaion, irisMathematicsStata is a complete, integrated statistical software package that provides everything you need for data analysis, data management, and graphics.
SuiteSparse 5.8.1broadwell, epyc, skylakeaion, irisNumerical librariesSuiteSparse is a collection of libraries to manipulate sparse matrices.
    Szip 2.1.1broadwell, skylake, gpuirisUtilitiesSzip compression software, providing lossless compression of scientific data
    Tcl 8.6.10broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesTcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more.
    TensorFlow 2.4.1broadwell, epyc, skylake, gpuaion, irisLibrariesAn open-source software library for Machine Intelligence
    TensorFlow 2.5.0broadwell, epyc, skylake, gpuaion, irisLibrariesAn open-source software library for Machine Intelligence
    Theano 1.1.2broadwell, epyc, skylake, gpuaion, irisMathematicsTheano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently.
    Tk 8.6.10broadwell, epyc, skylake, gpuaion, irisVisualisationTk is an open source, cross-platform widget toolchain that provides a library of basic elements for building a graphical user interface (GUI) in many different programming languages.
    Tkinter 3.8.6broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesTkinter module, built with the Python buildsystem
    TopHat 2.1.2broadwell, skylakeirisBiologyTopHat is a fast splice junction mapper for RNA-Seq reads.
    UCX 1.9.0broadwell, epyc, skylake, gpuaion, irisLibrariesUnified Communication X An open-source production grade communication framework for data centric and high-performance applications
    UDUNITS 2.2.26broadwell, skylake, gpuirisPhysicsUDUNITS supports conversion of unit specifications between formatted and binary forms, arithmetic manipulation of units, and conversion of values between compatible scales of measurement.
    ULHPC-bd 2020bbroadwell, epyc, skylakeaion, irisSystem-level softwareGeneric Module bundle for BigData Analytics software in use on the UL HPC Facility
    ULHPC-bio 2020bbroadwell, epyc, skylakeaion, irisSystem-level softwareGeneric Module bundle for Bioinformatics, biology and biomedical software in use on the UL HPC Facility, especially at LCSB
    ULHPC-cs 2020bepycaionSystem-level softwareGeneric Module bundle for Computational science software in use on the UL HPC Facility, including: - Computer Aided Engineering, incl. CFD - Chemistry, Computational Chemistry and Quantum Chemistry - Data management & processing tools - Earth Sciences - Quantum Computing - Physics and physical systems simulations
    ULHPC-dl 2020bbroadwell, epyc, skylakeaion, irisSystem-level softwareGeneric Module bundle for (CPU-version) of AI / Deep Learning / Machine Learning software in use on the UL HPC Facility
    ULHPC-gpu 2020bgpuirisSystem-level softwareGeneric Module bundle for GPU accelerated User Software in use on the UL HPC Facility
ULHPC-math 2020bbroadwell, epyc, skylakeaion, irisSystem-level softwareGeneric Module bundle for High-level mathematical software and Linear Algebra libraries in use on the UL HPC Facility
    ULHPC-toolchains 2020bbroadwell, epyc, skylakeaion, irisSystem-level softwareGeneric Module bundle that contains all the dependencies required to enable toolchains and building tools/programming language in use on the UL HPC Facility
    ULHPC-tools 2020bbroadwell, epyc, skylakeaion, irisSystem-level softwareMisc tools, incl. - perf: Performance tools - tools: General purpose tools
    UnZip 6.0broadwell, epyc, skylake, gpuaion, irisUtilitiesUnZip is an extraction utility for archives compressed in .zip format (also called "zipfiles"). Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own Zip program, our primary objectives have been portability and non-MSDOS functionality.
    VASP 5.4.4broadwell, skylakeirisPhysicsThe Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.
    VASP 6.2.1broadwell, epyc, skylakeaion, irisPhysicsThe Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.
    VMD 1.9.4a51broadwell, epyc, skylakeaion, irisVisualisationVMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.
    VTK 9.0.1broadwell, epyc, skylakeaion, irisVisualisationThe Visualization Toolkit (VTK) is an open-source, freely available software system for 3D computer graphics, image processing and visualization. VTK consists of a C++ class library and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK supports a wide variety of visualization algorithms including: scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as: implicit modeling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation.
    VTune 2020_update3broadwell, epyc, skylakeaion, irisUtilitiesIntel VTune Amplifier XE is the premier performance profiler for C, C++, C#, Fortran, Assembly and Java.
    Valgrind 3.16.1broadwell, epyc, skylakeaion, irisDebuggingValgrind: Debugging and profiling tools
    Wannier90 3.1.0broadwell, epyc, skylakeaion, irisChemistryA tool for obtaining maximally-localised Wannier functions
    X11 20201008broadwell, epyc, skylake, gpuaion, irisVisualisationThe X Window System (X11) is a windowing system for bitmap displays
    XML-LibXML 2.0206broadwell, epyc, skylakeaion, irisData processingPerl binding for libxml2
    XZ 5.2.5broadwell, epyc, skylake, gpuaion, irisUtilitiesxz: XZ utilities
    Xvfb 1.20.9broadwell, epyc, skylake, gpuaion, irisVisualisationXvfb is an X server that can run on machines with no display hardware and no physical input devices. It emulates a dumb framebuffer using virtual memory.
    YACS 0.1.8broadwell, epyc, skylakeaion, irisLibrariesYACS was created as a lightweight library to define and manage system configurations, such as those commonly found in software designed for scientific experimentation. These "configurations" typically cover concepts like hyperparameters used in training a machine learning model or configurable model hyperparameters, such as the depth of a convolutional neural network.
    Yasm 1.3.0broadwell, skylake, gpuirisProgramming LanguagesYasm: Complete rewrite of the NASM assembler with BSD license
    Z3 4.8.10broadwell, epyc, skylake, gpuaion, irisUtilitiesZ3 is a theorem prover from Microsoft Research.
    Zip 3.0broadwell, skylake, gpuirisUtilitiesZip is a compression and file packaging/archive utility. Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own UnZip, our primary objectives have been portability and other-than-MSDOS functionality
    ant 1.10.9broadwell, epyc, skylakeaion, irisDevelopmentApache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications.
    arpack-ng 3.8.0broadwell, epyc, skylakeaion, irisNumerical librariesARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems.
    at-spi2-atk 2.38.0broadwell, epyc, skylakeaion, irisVisualisationAT-SPI 2 toolkit bridge
    at-spi2-core 2.38.0broadwell, epyc, skylakeaion, irisVisualisationAssistive Technology Service Provider Interface.
    binutils 2.35broadwell, epyc, skylake, gpuaion, irisUtilitiesbinutils: GNU binary utilities
    bokeh 2.2.3broadwell, epyc, skylake, gpuaion, irisUtilitiesStatistical and novel interactive HTML plots for Python
    bzip2 1.0.8broadwell, skylake, gpuirisUtilitiesbzip2 is a freely available, patent free, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), whilst being around twice as fast at compression and six times faster at decompression.
    cURL 7.72.0broadwell, epyc, skylake, gpuaion, irisUtilitieslibcurl is a free and easy-to-use client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. libcurl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, Kerberos), file transfer resume, http proxy tunneling and more.
    cairo 1.16.0broadwell, skylake, gpuirisVisualisationCairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System (via both Xlib and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output. Experimental backends include OpenGL, BeOS, OS/2, and DirectFB
    cuDNN 8.0.4.30gpuirisNumerical librariesThe NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks.
    cuDNN 8.0.5.39gpuirisNumerical librariesThe NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks.
    dask 2021.2.0broadwell, epyc, skylake, gpuaion, irisData processingDask natively scales Python. Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love.
    double-conversion 3.1.5broadwell, epyc, skylake, gpuaion, irisLibrariesEfficient binary-decimal and decimal-binary conversion routines for IEEE doubles.
    elfutils 0.183gpuirisLibrariesThe elfutils project provides libraries and tools for ELF files and DWARF data.
    expat 2.2.9broadwell, epyc, skylake, gpuaion, irisUtilitiesExpat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags)
    flatbuffers-python 1.12broadwell, epyc, skylake, gpuaion, irisDevelopmentPython Flatbuffers runtime library.
    flatbuffers 1.12.0broadwell, skylake, gpuirisDevelopmentFlatBuffers: Memory Efficient Serialization Library
    flex 2.6.4broadwell, skylake, gpuirisProgramming LanguagesFlex (Fast Lexical Analyzer) is a tool for generating scanners. A scanner, sometimes called a tokenizer, is a program which recognizes lexical patterns in text.
    fontconfig 2.13.92broadwell, epyc, skylake, gpuaion, irisVisualisationFontconfig is a library designed to provide system-wide font configuration, customization and application access.
    foss 2020bbroadwell, epyc, skylakeaion, irisToolchains (software stacks)GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
    fosscuda 2020bgpuirisToolchains (software stacks)GCC based compiler toolchain with CUDA support, and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
    freetype 2.10.3broadwell, epyc, skylake, gpuaion, irisVisualisationFreeType 2 is a software font engine that is designed to be small, efficient, highly customizable, and portable while capable of producing high-quality output (glyph images). It can be used in graphics libraries, display servers, font conversion tools, text image generation tools, and many other products as well.
    gcccuda 2020bgpuirisToolchains (software stacks)GNU Compiler Collection (GCC) based compiler toolchain, along with CUDA toolkit.
    gettext 0.21broadwell, epyc, skylake, gpuaion, irisUtilitiesGNU 'gettext' is an important step for the GNU Translation Project, as it is an asset on which we may build many other steps. This package offers to programmers, translators, and even users, a well integrated set of tools and documentation
    giflib 5.2.1broadwell, skylake, gpuirisLibrariesgiflib is a library for reading and writing gif images. It is API and ABI compatible with libungif which was in wide use while the LZW compression algorithm was patented.
    git 2.28.0broadwell, epyc, skylake, gpuaion, irisUtilitiesGit is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
    gmsh 4.8.4broadwell, epyc, skylakeaion, irisMathematicsGmsh is a 3D finite element grid generator with a build-in CAD engine and post-processor.
    gnuplot 5.4.1broadwell, epyc, skylakeaion, irisVisualisationPortable interactive, function plotting utility
    gocryptfs 2.0.1broadwell, epyc, skylakeaion, irisUtilitiesEncrypted overlay filesystem written in Go. gocryptfs uses file-based encryption that is implemented as a mountable FUSE filesystem. Each file in gocryptfs is stored as one corresponding encrypted file on the hard disk.
    gompi 2020bbroadwell, epyc, skylakeaion, irisToolchains (software stacks)GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.
    gompic 2020bgpuirisToolchains (software stacks)GNU Compiler Collection (GCC) based compiler toolchain along with CUDA toolkit, including OpenMPI for MPI support with CUDA features enabled.
    gperf 3.1broadwell, skylake, gpuirisDevelopmentGNU gperf is a perfect hash function generator. For a given list of strings, it produces a hash function and hash table, in form of C or C++ code, for looking up a value depending on the input string. The hash function is perfect, which means that the hash table has no collisions, and the hash table lookup needs a single string comparison only.
    groff 1.22.4broadwell, epyc, skylake, gpuaion, irisUtilitiesGroff (GNU troff) is a typesetting system that reads plain text mixed with formatting commands and produces formatted output.
    gzip 1.10broadwell, skylakeirisUtilitiesgzip (GNU zip) is a popular data compression program as a replacement for compress
    h5py 3.1.0broadwell, epyc, skylake, gpuaion, irisData processingHDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific software library designed for the fast, flexible storage of enormous amounts of data.
    help2man 1.47.16broadwell, epyc, skylake, gpuaion, irisUtilitieshelp2man produces simple manual pages from the '--help' and '--version' output of other commands.
    help2man 1.47.4broadwell, epyc, skylake, gpuaion, irisUtilitieshelp2man produces simple manual pages from the '--help' and '--version' output of other commands.
    hwloc 2.2.0broadwell, epyc, skylake, gpuaion, irisSystem-level softwareThe Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently.
    hypothesis 5.41.2broadwell, epyc, skylake, gpuaion, irisUtilitiesHypothesis is an advanced testing library for Python. It lets you write tests which are parametrized by a source of examples, and then generates simple and comprehensible examples that make your tests fail. This lets you find more bugs in your code with less work.
    hypothesis 5.41.5broadwell, epyc, skylake, gpuaion, irisUtilitiesHypothesis is an advanced testing library for Python. It lets you write tests which are parametrized by a source of examples, and then generates simple and comprehensible examples that make your tests fail. This lets you find more bugs in your code with less work.
    iccifort 2020.4.304broadwell, epyc, skylake, gpuaion, irisCompilersIntel C, C++ & Fortran compilers
    iccifortcuda 2020bgpuirisToolchains (software stacks)Intel C, C++ & Fortran compilers with CUDA toolkit
    iimpi 2020bbroadwell, epyc, skylake, gpuaion, irisToolchains (software stacks)Intel C/C++ and Fortran compilers, alongside Intel MPI.
    iimpic 2020bgpuirisToolchains (software stacks)Intel C/C++ and Fortran compilers, alongside Intel MPI and CUDA.
    imkl 2020.4.304broadwell, epyc, skylake, gpuaion, irisNumerical librariesIntel Math Kernel Library is a library of highly optimized, extensively threaded math routines for science, engineering, and financial applications that require maximum performance. Core math functions include BLAS, LAPACK, ScaLAPACK, Sparse Solvers, Fast Fourier Transforms, Vector Math, and more.
    impi 2019.9.304broadwell, epyc, skylake, gpuaion, irisMPIIntel MPI Library, compatible with MPICH ABI
    intel 2020bbroadwell, epyc, skylake, gpuaion, irisToolchains (software stacks)Compiler toolchain including Intel compilers, Intel MPI and Intel Math Kernel Library (MKL).
    intelcuda 2020bgpuirisToolchains (software stacks)Intel Cluster Toolkit Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MPI & Intel MKL, with CUDA toolkit
    intltool 0.51.0broadwell, skylake, gpuirisDevelopmentintltool is a set of tools to centralize translation of many different file formats using GNU gettext-compatible PO files.
    libGLU 9.0.1broadwell, skylake, gpuirisVisualisationThe OpenGL Utility Library (GLU) is a computer graphics library for OpenGL.
    libarchive 3.4.3broadwell, epyc, skylake, gpuaion, irisUtilitiesMulti-format archive and compression library
    libcerf 1.14broadwell, epyc, skylakeaion, irisMathematicslibcerf is a self-contained numeric library that provides an efficient and accurate implementation of complex error functions, along with Dawson, Faddeeva, and Voigt functions.
    libdrm 2.4.102broadwell, epyc, skylake, gpuaion, irisLibrariesDirect Rendering Manager runtime library.
    libepoxy 1.5.4broadwell, skylakeirisLibrariesEpoxy is a library for handling OpenGL function pointer management for you
    libevent 2.1.12broadwell, epyc, skylakeaion, irisLibrariesThe libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also support callbacks due to signals or regular timeouts.
    libffi 3.3broadwell, epyc, skylake, gpuaion, irisLibrariesThe libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time.
    libgd 2.3.0broadwell, epyc, skylakeaion, irisLibrariesGD is an open source code library for the dynamic creation of images by programmers.
    libgeotiff 1.6.0broadwell, epyc, skylake, gpuaion, irisLibrariesLibrary for reading and writing coordinate system information from/to GeoTIFF files
    libglvnd 1.3.2broadwell, epyc, skylake, gpuaion, irisLibrarieslibglvnd is a vendor-neutral dispatch layer for arbitrating OpenGL API calls between multiple vendors.
    libgpuarray 0.7.6gpuirisLibrariesLibrary to manipulate tensors on the GPU.
    libiconv 1.16broadwell, skylake, gpuirisLibrariesLibiconv converts from one character encoding to another through Unicode conversion
    libjpeg-turbo 2.0.5broadwell, epyc, skylake, gpuaion, irisLibrarieslibjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD to accelerate baseline JPEG compression and decompression. libjpeg is a library that implements JPEG image encoding, decoding and transcoding.
    libogg 1.3.4broadwell, epyc, skylake, gpuaion, irisLibrariesOgg is a multimedia container format, and the native file and stream format for the Xiph.org multimedia codecs.
    libpciaccess 0.16broadwell, epyc, skylake, gpuaion, irisSystem-level softwareGeneric PCI access library.
    libpng 1.6.37broadwell, skylake, gpuirisLibrarieslibpng is the official PNG reference library
    libreadline 8.0broadwell, skylake, gpuirisLibrariesThe GNU Readline library provides a set of functions for use by applications that allow users to edit command lines as they are typed in. Both Emacs and vi editing modes are available. The Readline library includes additional functions to maintain a list of previously-entered command lines, to recall and perhaps reedit those lines, and perform csh-like history expansion on previous commands.
    libsndfile 1.0.28broadwell, skylake, gpuirisLibrariesLibsndfile is a C library for reading and writing files containing sampled sound (such as MS Windows WAV and the Apple/SGI AIFF format) through one standard library interface.
    libtirpc 1.3.1broadwell, epyc, skylake, gpuaion, irisLibrariesLibtirpc is a port of Suns Transport-Independent RPC library to Linux.
    libtool 2.4.6broadwell, skylake, gpuirisLibrariesGNU libtool is a generic library support script. Libtool hides the complexity of using shared libraries behind a consistent, portable interface.
    libunwind 1.4.0broadwell, epyc, skylake, gpuaion, irisLibrariesThe primary goal of libunwind is to define a portable and efficient C programming interface (API) to determine the call-chain of a program. The API additionally provides the means to manipulate the preserved (callee-saved) state of each call-frame and to resume execution at any point in the call-chain (non-local goto). The API supports both local (same-process) and remote (across-process) operation. As such, the API is useful in a number of applications
    libvorbis 1.3.7broadwell, epyc, skylake, gpuaion, irisLibrariesOgg Vorbis is a fully open, non-proprietary, patent-and-royalty-free, general-purpose compressed audio format
    libwebp 1.1.0broadwell, epyc, skylakeaion, irisLibrariesWebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster.
    libxc 4.3.4broadwell, skylakeirisChemistryLibxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.
    libxc 5.1.2broadwell, epyc, skylakeaion, irisChemistryLibxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.
    libxml2 2.9.10broadwell, epyc, skylake, gpuaion, irisLibrariesLibxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform).
    libyaml 0.2.5broadwell, epyc, skylake, gpuaion, irisLibrariesLibYAML is a YAML parser and emitter written in C.
    lz4 1.9.2broadwell, epyc, skylake, gpuaion, irisLibrariesLZ4 is lossless compression algorithm, providing compression speed at 400 MB/s per core. It features an extremely fast decoder, with speed in multiple GB/s per core.
    magma 2.5.4gpuirisMathematicsThe MAGMA project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current Multicore+GPU systems.
    makeinfo 6.7broadwell, epyc, skylake, gpuaion, irisDevelopmentmakeinfo is part of the Texinfo project, the official documentation format of the GNU project.
    matplotlib 3.3.3broadwell, epyc, skylake, gpuaion, irisVisualisationmatplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell, web application servers, and six graphical user interface toolkits.
    ncurses 6.2broadwell, epyc, skylake, gpuaion, irisDevelopmentThe Ncurses (new curses) library is a free software emulation of curses in System V Release 4.0, and more. It uses Terminfo format, supports pads and color and multiple highlights and forms characters and function-key mapping, and has all the other SYSV-curses enhancements over BSD Curses.
    netCDF-Fortran 4.5.3epycaionData processingNetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
    netCDF 4.7.4broadwell, epyc, skylake, gpuaion, irisData processingNetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
    nettle 3.6broadwell, epyc, skylake, gpuaion, irisLibrariesNettle is a cryptographic library that is designed to fit easily in more or less any context: In crypto toolkits for object-oriented languages (C++, Python, Pike, ...), in applications like LSH or GNUPG, or even in kernel space.
    networkx 2.5broadwell, epyc, skylake, gpuaion, irisUtilitiesNetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
    nodejs 12.19.0broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesNode.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
    nsync 1.24.0broadwell, skylake, gpuirisDevelopmentnsync is a C library that exports various synchronization primitives, such as mutexes
    numactl 2.0.13broadwell, epyc, skylake, gpuaion, irisUtilitiesThe numactl program allows you to run your application program on specific cpu's and memory nodes. It does this by supplying a NUMA memory policy to the operating system before running your program. The libnuma library provides convenient ways for you to add NUMA memory policies into your own program.
    numba 0.52.0broadwell, epyc, skylake, gpuaion, irisProgramming LanguagesNumba is an Open Source NumPy-aware optimizing compiler for Python sponsored by Continuum Analytics, Inc. It uses the remarkable LLVM compiler infrastructure to compile Python syntax to machine code.
    pixman 0.40.0broadwell, epyc, skylake, gpuaion, irisVisualisationPixman is a low-level software library for pixel manipulation, providing features such as image compositing and trapezoid rasterization. Important users of pixman are the cairo graphics library and the X server.
    pkg-config 0.29.2broadwell, skylake, gpuirisDevelopmentpkg-config is a helper tool used when compiling applications and libraries. It helps you insert the correct compiler options on the command line so an application can use gcc -o test test.c pkg-config --libs --cflags glib-2.0 for instance, rather than hard-coding values on where to find glib (or other libraries).
    pkgconfig 1.5.1broadwell, skylake, gpuirisDevelopmentpkgconfig is a Python module to interface with the pkg-config command line tool
    pocl 1.6gpuirisLibrariesPocl is a portable open source (MIT-licensed) implementation of the OpenCL standard
    protobuf-python 3.14.0broadwell, epyc, skylake, gpuaion, irisDevelopmentPython Protocol Buffers runtime library.
    protobuf 2.5.0broadwell, skylakeirisDevelopmentGoogle Protocol Buffers
    protobuf 3.14.0broadwell, epyc, skylakeaion, irisDevelopmentGoogle Protocol Buffers
    pybind11 2.6.0broadwell, epyc, skylake, gpuaion, irisLibrariespybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code.
    re2c 2.0.3broadwell, epyc, skylakeaion, irisUtilitiesre2c is a free and open-source lexer generator for C and C++. Its main goal is generating fast lexers: at least as fast as their reasonably optimized hand-coded counterparts. Instead of using traditional table-driven approach, re2c encodes the generated finite state automata directly in the form of conditional jumps and comparisons.
    scikit-build 0.11.1broadwell, epyc, skylake, gpuaion, irisLibrariesScikit-Build, or skbuild, is an improved build system generator for CPython C/C++/Fortran/Cython extensions.
    scikit-image 0.18.1broadwell, epyc, skylake, gpuaion, irisVisualisationscikit-image is a collection of algorithms for image processing.
    scikit-learn 0.23.2broadwell, epyc, skylake, gpuaion, irisData processingScikit-learn integrates machine learning algorithms in the tightly-knit scientific Python world, building upon numpy, scipy, and matplotlib. As a machine-learning module, it provides versatile tools for data mining and analysis in any field of science and engineering. It strives to be simple and efficient, accessible to everybody, and reusable in various contexts.
    snappy 1.1.8broadwell, epyc, skylake, gpuaion, irisLibrariesSnappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression.
    sparsehash 2.0.4broadwell, epyc, skylakeaion, irisDevelopmentAn extremely memory-efficient hash_map implementation. 2 bits/entry overhead! The SparseHash library contains several hash-map implementations, including implementations that optimize for space or speed.
    spglib-python 1.16.0broadwell, epyc, skylake, gpuaion, irisChemistrySpglib for Python. Spglib is a library for finding and handling crystal symmetries written in C.
    tbb 2020.3broadwell, epyc, skylakeaion, irisLibrariesIntel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability.
    tqdm 4.56.2broadwell, epyc, skylake, gpuaion, irisLibrariesA fast, extensible progress bar for Python and CLI
    typing-extensions 3.7.4.3gpuirisDevelopmentTyping Extensions – Backported and Experimental Type Hints for Python
    util-linux 2.36broadwell, epyc, skylake, gpuaion, irisUtilitiesSet of Linux utilities
    x264 20201026broadwell, epyc, skylake, gpuaion, irisVisualisationx264 is a free software library and application for encoding video streams into the H.264/MPEG-4 AVC compression format, and is released under the terms of the GNU GPL.
    x265 3.3broadwell, epyc, skylake, gpuaion, irisVisualisationx265 is a free software library and application for encoding video streams into the H.265 AVC compression format, and is released under the terms of the GNU GPL.
    xorg-macros 1.19.2broadwell, skylake, gpuirisDevelopmentX.org macros utilities.
    xprop 1.2.5broadwell, epyc, skylakeaion, irisVisualisationThe xprop utility is for displaying window and font properties in an X server. One window or font is selected using the command line arguments or possibly in the case of a window, by clicking on the desired window. A list of properties is then given, possibly with formatting information.
    zlib 1.2.11broadwell, skylake, gpuirisLibrarieszlib is designed to be a free, general-purpose, legally unencumbered -- that is, not covered by any patents -- lossless data-compression library for use on virtually any computer hardware and operating system.
    zstd 1.4.5broadwell, epyc, skylake, gpuaion, irisLibrariesZstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression/speed trade-off, while being backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression, and can create dictionaries from any sample set.
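For orientation, the sketch below shows how a module from this list would typically be located and loaded with Lmod on the clusters. The fully-qualified module names (category prefix, version and toolchain suffix) are illustrative assumptions, not exact names; always check the output of module spider for what is actually available in your software set.

```bash
# Illustrative Lmod session -- module names/versions here are assumptions, verify with 'module spider'.
module avail                        # list modules visible in the currently enabled software set
module spider GROMACS               # search all software sets for GROMACS and show how to load it
module load toolchain/foss/2020b    # hypothetical fully-qualified toolchain name; adapt from 'module spider' output
module load bio/GROMACS             # hypothetical category-prefixed name; omitting the version loads the default
module list                         # confirm which modules (and their dependencies) are now loaded
```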
Last update: November 13, 2024
\ No newline at end of file diff --git a/software/swsets/all_softwares/index.html b/software/swsets/all_softwares/index.html new file mode 100644 index 00000000..11a6a7ca --- /dev/null +++ b/software/swsets/all_softwares/index.html @@ -0,0 +1,5919 @@ Full List (alphabetical order) - ULHPC Technical Documentation

    Full List (alphabetical order)

Software | Versions | Swsets | Architectures | Clusters | Category | Description
ABAQUS | 2018, 2021 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | CFD/Finite element modelling | Finite Element Analysis software for modeling, visualization and best-in-class implicit and explicit dynamics FEA.
ABINIT | 9.4.1 | 2020b | epyc | aion | Chemistry | ABINIT is a package whose main program allows one to find the total energy, charge density and electronic structure of systems made of electrons and nuclei (molecules and periodic solids) within Density Functional Theory (DFT), using pseudopotentials and a planewave or wavelet basis.
ABySS | 2.2.5 | 2020b | broadwell, epyc, skylake | aion, iris | Biology | Assembly By Short Sequences - a de novo, parallel, paired-end sequence assembler
ACTC | 1.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Libraries | ACTC converts independent triangles into triangle strips or fans.
ANSYS | 19.4, 21.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | ANSYS simulation software enables organizations to confidently predict how their products will operate in the real world. We believe that every product is a promise of something greater.
AOCC | 3.1.0 | 2020b | epyc | aion | Compilers | AMD Optimized C/C++ & Fortran compilers (AOCC) based on LLVM 12.0
ASE | 3.19.0, 3.20.1, 3.21.1 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Chemistry | ASE is a python package providing an open source Atomic Simulation Environment in the Python scripting language. From version 3.20.1 we also include the ase-ext package; it contains optional reimplementations in C of functions in ASE. ASE uses it automatically when installed.
ATK | 2.34.1, 2.36.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | ATK provides the set of accessibility interfaces that are implemented by other toolkits and applications. Using the ATK interfaces, accessibility tools have full access to view and control running applications.
Advisor | 2019_update5 | 2019b | broadwell, skylake | iris | Performance measurements | Vectorization Optimization and Thread Prototyping - Vectorize & thread code or performance "dies" - Easy workflow + data + tips = faster code faster - Prioritize, Prototype & Predict performance gain
Anaconda3 | 2020.02, 2020.11 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Programming Languages | Built to complement the rich, open source Python community, the Anaconda platform provides an enterprise-ready data analytics platform that empowers companies to adopt a modern open data science analytics architecture.
ArmForge | 20.0.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | The industry standard development package for C, C++ and Fortran high performance code on Linux. Forge is designed to handle complex software projects - including parallel, multiprocess and multithreaded code. Arm Forge combines an industry-leading debugger, Arm DDT, and an out-of-the-box-ready profiler, Arm MAP.
ArmReports | 20.0.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | Arm Performance Reports - a low-overhead tool that produces one-page text and HTML reports summarizing and characterizing both scalar and MPI application performance. Arm Performance Reports runs transparently on optimized production-ready codes by adding a single command to your scripts, and provides the most effective way to characterize and understand the performance of HPC application runs.
Armadillo | 10.5.3, 9.900.1 | 2020b, 2019b | broadwell, epyc, skylake | aion, iris | Numerical libraries | Armadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions.
Arrow | 0.16.0 | 2019b | broadwell, skylake | iris | Data processing | Apache Arrow (incl. PyArrow Python bindings), a cross-language development platform for in-memory data.
Aspera-CLI | 3.9.1, 3.9.6 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | IBM Aspera Command-Line Interface (the Aspera CLI) is a collection of Aspera tools for performing high-speed, secure data transfers from the command line. The Aspera CLI is for users and organizations who want to automate their transfer workflows.
Autoconf | 2.69 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | Autoconf is an extensible package of M4 macros that produce shell scripts to automatically configure software source code packages. These scripts can adapt the packages to many kinds of UNIX-like systems without manual user intervention. Autoconf creates a configuration script for a package from a template file that lists the operating system features that the package can use, in the form of M4 macro calls.
Automake | 1.16.1, 1.16.2 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | Automake: GNU Standards-compliant Makefile generator
Autotools | 20180311, 20200321 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | This bundle collects the standard GNU build tools: Autoconf, Automake and libtool
BEDTools | 2.29.2, 2.30.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | BEDTools: a powerful toolset for genome arithmetic. The BEDTools utilities allow one to address common genomics tasks such as finding feature overlaps and computing coverage. The utilities are largely based on four widely-used file formats: BED, GFF/GTF, VCF, and SAM/BAM.
BLAST+ | 2.11.0, 2.9.0 | 2020b, 2019b | broadwell, epyc, skylake | aion, iris | Biology | Basic Local Alignment Search Tool, or BLAST, is an algorithm for comparing primary biological sequence information, such as the amino-acid sequences of different proteins or the nucleotides of DNA sequences.
BWA | 0.7.17 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | Burrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome.
BamTools | 2.5.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | BamTools provides both a programmer's API and an end-user's toolkit for handling BAM files.
Bazel | 0.26.1, 0.29.1, 3.7.2 | 2019b, 2020b | gpu, broadwell, skylake, epyc | iris, aion | Development | Bazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software.
BioPerl | 1.7.2, 1.7.8 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | Bioperl is the product of a community effort to produce Perl code which is useful in biology. Examples include Sequence objects, Alignment objects and database searching objects.
Bison | 3.3.2, 3.5.3, 3.7.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Programming Languages | Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.
Boost.Python | 1.74.0 | 2020b | broadwell, epyc, skylake | aion, iris | Libraries | Boost.Python is a C++ library which enables seamless interoperability between C++ and the Python programming language.
Boost | 1.71.0, 1.74.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Development | Boost provides free peer-reviewed portable C++ source libraries.
Bowtie2 | 2.3.5.1, 2.4.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | Bowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes. Bowtie 2 indexes the genome with an FM Index to keep its memory footprint small: for the human genome, its memory footprint is typically around 3.2 GB. Bowtie 2 supports gapped, local, and paired-end alignment modes.
CGAL | 4.14.1, 5.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Numerical libraries | The goal of the CGAL Open Source Project is to provide easy access to efficient and reliable geometric algorithms in the form of a C++ library.
CMake | 3.15.3, 3.18.4, 3.20.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | CMake, the cross-platform, open-source build system. CMake is a family of tools designed to build, test and package software.
CPLEX | 12.10 | 2019b | broadwell, skylake | iris | Mathematics | IBM ILOG CPLEX Optimizer's mathematical programming technology enables analytical decision support for improving efficiency, reducing costs, and increasing profitability.
CRYSTAL | 17 | 2019b | broadwell, skylake | iris | Chemistry | The CRYSTAL package performs ab initio calculations of the ground state energy, energy gradient, electronic wave function and properties of periodic systems. Hartree-Fock or Kohn-Sham Hamiltonians (that adopt an Exchange-Correlation potential following the postulates of Density-Functional Theory) can be used.
CUDA | 10.1.243, 11.1.1 | 2019b, 2020b | gpu | iris | System-level software | CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.
CUDAcore | 11.1.1 | 2020b | gpu | iris | System-level software | CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.
Check | 0.15.2 | 2020b | gpu | iris | Libraries | Check is a unit testing framework for C. It features a simple interface for defining unit tests, putting little in the way of the developer. Tests are run in a separate address space, so both assertion failures and code errors that cause segmentation faults or other signals can be caught. Test results are reportable in the following: Subunit, TAP, XML, and a generic logging format.
Clang | 11.0.1, 9.0.1 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Compilers | C, C++, Objective-C compiler, based on LLVM. Does not include C++ standard library -- use libstdc++ from GCC.
CubeGUI | 4.4.4 | 2019b | broadwell, skylake | iris | Performance measurements | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube graphical report explorer.
CubeLib | 4.4.4 | 2019b | broadwell, skylake | iris | Performance measurements | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube general purpose C++ library component and command-line tools.
CubeWriter | 4.4.3 | 2019b | broadwell, skylake | iris | Performance measurements | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube high-performance C writer library component.
DB | 18.1.32, 18.1.40 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Utilities | Berkeley DB enables the development of custom data management solutions, without the overhead traditionally associated with such custom projects.
DB_File | 1.855 | 2020b | broadwell, epyc, skylake | aion, iris | Data processing | Perl5 access to Berkeley DB version 1.x.
DBus | 1.13.12, 1.13.18 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Development | D-Bus is a message bus system, a simple way for applications to talk to one another. In addition to interprocess communication, D-Bus helps coordinate process lifecycle; it makes it simple and reliable to code a "single instance" application or daemon, and to launch applications and daemons on demand when their services are needed.
DMTCP | 2.5.2 | 2019b | broadwell, skylake | iris | Utilities | DMTCP is a tool to transparently checkpoint the state of multiple simultaneous applications, including multi-threaded and distributed applications. It operates directly on the user binary executable, without any Linux kernel modules or other kernel modifications.
Dakota | 6.11.0, 6.15.0 | 2019b, 2020b | broadwell, skylake | iris | Mathematics | The Dakota project delivers both state-of-the-art research and robust, usable software for optimization and UQ. Broadly, the Dakota software's advanced parametric analyses enable design exploration, model calibration, risk analysis, and quantification of margins and uncertainty with computational models.
Doxygen | 1.8.16, 1.8.20 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | Doxygen is a documentation system for C++, C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors), Fortran, VHDL, PHP, C#, and to some extent D.
ELPA | 2019.11.001, 2020.11.001 | 2019b, 2020b | broadwell, epyc, skylake | iris, aion | Mathematics | Eigenvalue SoLvers for Petaflop-Applications.
EasyBuild | 4.3.0, 4.3.3, 4.4.1, 4.4.2, 4.5.4 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | EasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way.
Eigen | 3.3.7, 3.3.8, 3.4.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Mathematics | Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
Elk | 6.3.2, 7.0.12 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Physics | An all-electron full-potential linearised augmented-plane wave (FP-LAPW) code with many advanced features. Written originally at Karl-Franzens-Universität Graz as a milestone of the EXCITING EU Research and Training Network, the code is designed to be as simple as possible so that new developments in the field of density functional theory (DFT) can be added quickly and reliably.
FDS | 6.7.1, 6.7.6 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Physics | Fire Dynamics Simulator (FDS) is a large-eddy simulation (LES) code for low-speed flows, with an emphasis on smoke and heat transport from fires.
FFTW | 3.3.8 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Numerical libraries | FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.
FFmpeg | 4.2.1, 4.3.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | A complete, cross-platform solution to record, convert and stream audio and video.
FLAC | 1.3.3 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | FLAC stands for Free Lossless Audio Codec, an audio format similar to MP3, but lossless, meaning that audio is compressed in FLAC without any loss in quality.
FLTK | 1.3.5 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | FLTK is a cross-platform C++ GUI toolkit for UNIX/Linux (X11), Microsoft Windows, and MacOS X. FLTK provides modern GUI functionality without the bloat and supports 3D graphics via OpenGL and its built-in GLUT emulation.
FastQC | 0.11.9 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | FastQC is a quality control application for high throughput sequence data. It reads in sequence data in a variety of formats and can either provide an interactive application to review the results of several different QC checks, or create an HTML based report which can be integrated into a pipeline.
Flask | 1.1.2 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | Flask is a lightweight WSGI web application framework. It is designed to make getting started quick and easy, with the ability to scale up to complex applications. This module includes the Flask extensions: Flask-Cors
Flink | 1.11.2 | 2020b | broadwell, epyc, skylake | aion, iris | Development | Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments, perform computations at in-memory speed and at any scale.
FreeImage | 3.18.0 | 2020b | broadwell, epyc, skylake | aion, iris | Visualisation | FreeImage is an Open Source library project for developers who would like to support popular graphics image formats like PNG, BMP, JPEG, TIFF and others as needed by today's multimedia applications. FreeImage is easy to use, fast, multithreading safe.
FriBidi | 1.0.10, 1.0.5 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Programming Languages | The Free Implementation of the Unicode Bidirectional Algorithm.
GCC | 10.2.0, 8.3.0 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Compilers | The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
GCCcore | 10.2.0, 8.3.0 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Compilers | The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
GDAL | 3.0.2, 3.2.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Data processing | GDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing.
GDB | 10.1, 9.1 | 2020b, 2019b | broadwell, epyc, skylake | aion, iris | Debugging | The GNU Project Debugger
GDRCopy | 2.1 | 2020b | gpu | iris | Libraries | A low-latency GPU memory copy library based on NVIDIA GPUDirect RDMA technology.
GEOS | 3.8.0, 3.9.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Mathematics | GEOS (Geometry Engine - Open Source) is a C++ port of the Java Topology Suite (JTS)
GLPK | 4.65 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Utilities | The GLPK (GNU Linear Programming Kit) package is intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library.
GLib | 2.62.0, 2.66.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | GLib is one of the base libraries of the GTK+ project
GMP | 6.1.2, 6.2.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Mathematics | GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers.
GObject-Introspection | 1.63.1, 1.66.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Development | GObject introspection is a middleware layer between C libraries (using GObject) and language bindings. The C library can be scanned at compile time and generate a metadata file, in addition to the actual native C library. Then at runtime, language bindings can read this metadata and automatically provide bindings to call into the C library.
GPAW-setups | 0.9.20000 | 2019b | broadwell, skylake | iris | Chemistry | PAW setup for the GPAW Density Functional Theory package. Users can install setups manually using 'gpaw install-data' or use setups from this package. The versions of GPAW and GPAW-setups can be intermixed.
GPAW | 20.1.0 | 2019b | broadwell, skylake | iris | Chemistry | GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). It uses real-space uniform grids and multigrid methods or atom-centered basis-functions.
GROMACS | 2019.4, 2019.6, 2020, 2021, 2021.2 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Biology | GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a CPU only build, containing both MPI and threadMPI builds for both single and double precision. It also contains the gmxapi extension for the single precision MPI build.
GSL | 2.6 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Numerical libraries | The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting.
GTK+ | 3.24.13, 3.24.23 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | GTK+ is the primary library used to construct user interfaces in GNOME. It provides all the user interface controls, or widgets, used in a common graphical application. Its object-oriented API allows you to construct user interfaces without dealing with the low-level details of drawing and device interaction.
Gdk-Pixbuf | 2.38.2, 2.40.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | The Gdk Pixbuf is a toolkit for image loading and pixel buffer manipulation. It is used by GTK+ 2 and GTK+ 3 to load and manipulate images. In the past it was distributed as part of GTK+ 2 but it was split off into a separate package in preparation for the change to GTK+ 3.
Ghostscript | 9.50, 9.53.3 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | Ghostscript is a versatile processor for PostScript data with the ability to render PostScript to different targets. It used to be part of the cups printing stack, but is no longer used for that.
Go | 1.14.1, 1.16.6 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Compilers | Go is an open source programming language that makes it easy to build simple, reliable, and efficient software.
Guile | 1.8.8, 2.2.4 | 2019b | broadwell, skylake | iris | Programming Languages | Guile is a programming language, designed to help programmers create flexible applications that can be extended by users or other programmers with plug-ins, modules, or scripts.
Gurobi | 9.0.0, 9.1.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Mathematics | The Gurobi Optimizer is a state-of-the-art solver for mathematical programming. The solvers in the Gurobi Optimizer were designed from the ground up to exploit modern architectures and multi-core processors, using the most advanced implementations of the latest algorithms.
HDF5 | 1.10.5, 1.10.7 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Data processing | HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.
HDF | 4.2.15 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Data processing | HDF (also known as HDF4) is a library and multi-object file format for storing and managing data between machines.
HTSlib | 1.10.2, 1.12 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | A C library for reading/writing high-throughput sequencing data. This package includes the utilities bgzip and tabix
Hadoop | 2.10.0 | 2020b | broadwell, epyc, skylake | aion, iris | Utilities | Hadoop MapReduce by Cloudera
HarfBuzz | 2.6.4, 2.6.7 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | HarfBuzz is an OpenType text shaping engine.
Harminv | 1.4.1 | 2019b | broadwell, skylake | iris | Mathematics | Harminv is a free program (and accompanying library) to solve the problem of harmonic inversion - given a discrete-time, finite-length signal that consists of a sum of finitely-many sinusoids (possibly exponentially decaying) in a given bandwidth, it determines the frequencies, decay constants, amplitudes, and phases of those sinusoids.
Horovod | 0.19.1, 0.22.0 | 2019b, 2020b | broadwell, skylake, gpu | iris | Utilities | Horovod is a distributed training framework for TensorFlow.
Hypre | 2.20.0 | 2020b | broadwell, epyc, skylake | aion, iris | Numerical libraries | Hypre is a library for solving large, sparse linear systems of equations on massively parallel computers. The problems of interest arise in the simulation codes being developed at LLNL and elsewhere to study physical phenomena in the defense, environmental, energy, and biological sciences.
ICU | 64.2, 67.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | ICU is a mature, widely used set of C/C++ and Java libraries providing Unicode and Globalization support for software applications.
ISL | 0.23 | 2020b | broadwell, epyc, skylake | aion, iris | Mathematics | isl is a library for manipulating sets and relations of integer points bounded by linear constraints.
ImageMagick | 7.0.10-35, 7.0.9-5 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Visualisation | ImageMagick is a software suite to create, edit, compose, or convert bitmap images
Inspector | 2019_update5 | 2019b | broadwell, skylake | iris | Utilities | Intel Inspector XE is an easy to use memory error checker and thread checker for serial and parallel applications
JasPer | 2.0.14, 2.0.24 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | The JasPer Project is an open-source initiative to provide a free software-based reference implementation of the codec specified in the JPEG-2000 Part-1 standard.
Java | 1.8.0_241, 11.0.2, 13.0.2, 16.0.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Programming Languages | Java Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers.
Jellyfish | 2.3.0 | 2019b | broadwell, skylake | iris | Biology | Jellyfish is a tool for fast, memory-efficient counting of k-mers in DNA.
JsonCpp | 1.9.3, 1.9.4 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | JsonCpp is a C++ library that allows manipulating JSON values, including serialization and deserialization to and from strings. It can also preserve existing comments in unserialization/serialization steps, making it a convenient format to store user input files.
Julia | 1.4.1, 1.6.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Programming Languages | Julia is a high-level, high-performance dynamic programming language for numerical computing
Keras | 2.3.1, 2.4.3 | 2019b, 2020b | gpu, broadwell, epyc, skylake | iris, aion | Mathematics | Keras is a deep learning API written in Python, running on top of the machine learning platform TensorFlow.
LAME | 3.100 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Data processing | LAME is a high quality MPEG Audio Layer III (MP3) encoder licensed under the LGPL.
LLVM | 10.0.1, 11.0.0, 9.0.0, 9.0.1 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Compilers | The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!). These libraries are built around a well specified code representation known as the LLVM intermediate representation ("LLVM IR"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator.
    LMDB0.9.242019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariesLMDB is a fast, memory-efficient database. With memory-mapped files, it has the read performance of a pure in-memory database while retaining the persistence of standard disk-based databases.
    LibTIFF4.0.10, 4.1.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrariestiff: Library and tools for reading and writing TIFF data files
    LittleCMS2.11, 2.92020b, 2019bbroadwell, epyc, skylake, gpuaion, irisVisualisationLittle CMS intends to be an OPEN SOURCE small-footprint color management engine, with special focus on accuracy and performance.
    Lua5.1.5, 5.4.22019b, 2020bbroadwell, skylake, epyciris, aionProgramming LanguagesLua is a powerful, fast, lightweight, embeddable scripting language. Lua combines simple procedural syntax with powerful data description constructs based on associative arrays and extensible semantics. Lua is dynamically typed, runs by interpreting bytecode for a register-based virtual machine, and has automatic memory management with incremental garbage collection, making it ideal for configuration, scripting, and rapid prototyping.
    M41.4.182019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentGNU M4 is an implementation of the traditional Unix macro processor. It is mostly SVR4 compatible although it has some extensions (for example, handling more than 9 positional parameters to macros). GNU M4 also has built-in functions for including files, running shell commands, doing arithmetic, etc.
    MATLAB2019b, 2020a, 2021a2019b, 2020bbroadwell, skylake, epyciris, aionMathematicsMATLAB is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran.
    METIS5.1.02019b, 2020bbroadwell, skylake, epyciris, aionMathematicsMETIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.
    MPC1.2.12020bbroadwell, epyc, skylakeaion, irisMathematicsGnu Mpc is a C library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result. It extends the principles of the IEEE-754 standard for fixed precision real floating point numbers to complex numbers, providing well-defined semantics for every operation. At the same time, speed of operation at high precision is a major design goal.
    MPFR4.0.2, 4.1.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionMathematicsThe MPFR library is a C library for multiple-precision floating-point computations with correct rounding.
    MUMPS5.3.52020bbroadwell, epyc, skylakeaion, irisMathematicsA parallel sparse direct solver
    Mako1.1.0, 1.1.32019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentA super-fast templating language that borrows the best ideas from the existing templating languages
    Mathematica12.0.0, 12.1.02019b, 2020bbroadwell, skylake, epyciris, aionMathematicsMathematica is a computational software program used in many scientific, engineering, mathematical and computing fields.
    Maven3.6.32019b, 2020bbroadwell, skylake, epyciris, aionDevelopmentBinary maven install, Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information.
    Meep1.4.32019bbroadwell, skylakeirisPhysicsMeep (or MEEP) is a free finite-difference time-domain (FDTD) simulation software package developed at MIT to model electromagnetic systems.
    Mesa19.1.7, 19.2.1, 20.2.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationMesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.
    Meson0.51.2, 0.55.32019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesMeson is a cross-platform build system designed to be both as fast and as user friendly as possible.
    Mesquite2.3.02019bbroadwell, skylakeirisMathematicsMesh-Quality Improvement Library
    NAMD2.132019bbroadwell, skylakeirisChemistryNAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.
    NASM2.14.02, 2.15.052019b, 2020bbroadwell, skylake, gpu, epyciris, aionProgramming LanguagesNASM: General-purpose x86 assembler
    NCCL2.4.8, 2.8.32019b, 2020bgpuirisLibrariesThe NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs.
    NLopt2.6.1, 2.6.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionNumerical librariesNLopt is a free/open-source library for nonlinear optimization, providing a common interface for a number of different free optimization routines available online as well as original implementations of various other algorithms.
    NSPR4.21, 4.292019b, 2020bbroadwell, skylake, epyciris, aionLibrariesNetscape Portable Runtime (NSPR) provides a platform-neutral API for system level and libc-like functions.
    NSS3.45, 3.572019b, 2020bbroadwell, skylake, epyciris, aionLibrariesNetwork Security Services (NSS) is a set of libraries designed to support cross-platform development of security-enabled client and server applications.
    Ninja1.10.1, 1.9.02020b, 2019bbroadwell, epyc, skylake, gpuaion, irisUtilitiesNinja is a small build system with a focus on speed.
    OPARI22.0.52019bbroadwell, skylakeirisPerformance measurementsOPARI2, the successor of Forschungszentrum Juelich's OPARI, is a source-to-source instrumentation tool for OpenMP and hybrid codes. It surrounds OpenMP directives and runtime library calls with calls to the POMP2 measurement interface.
    OTF22.22019bbroadwell, skylakeirisPerformance measurementsThe Open Trace Format 2 is a highly scalable, memory efficient event trace data format plus support library. It is the new standard trace format for Scalasca, Vampir, and TAU and is open for other tools.
    OpenBLAS0.3.12, 0.3.72020b, 2019bbroadwell, epyc, skylake, gpuaion, irisNumerical librariesOpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
    OpenCV4.2.0, 4.5.12019b, 2020bbroadwell, skylake, epyciris, aionVisualisationOpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Includes extra modules for OpenCV from the contrib repository.
    OpenEXR2.5.52020bbroadwell, epyc, skylakeaion, irisVisualisationOpenEXR is a high dynamic-range (HDR) image file format developed by Industrial Light & Magic for use in computer imaging applications
    OpenFOAM-Extend4.1-202004082019bbroadwell, skylakeirisCFD/Finite element modellingOpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
    OpenFOAM8, v19122020b, 2019bepyc, broadwell, skylakeaion, irisCFD/Finite element modellingOpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
    OpenMPI3.1.4, 4.0.52019b, 2020bbroadwell, skylake, gpu, epyciris, aionMPIThe Open MPI Project is an open source MPI-3 implementation.
| PAPI | 6.0.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Performance measurements | PAPI provides the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. In addition Component PAPI provides access to a collection of components that expose performance measurement opportunities across the hardware and software stack. |
| PCRE2 | 10.33, 10.35 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Development | The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5. |
| PCRE | 8.43, 8.44 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5. |
| PDT | 3.25 | 2019b | broadwell, skylake | iris | Performance measurements | Program Database Toolkit (PDT) is a framework for analyzing source code written in several programming languages and for making rich program knowledge accessible to developers of static and dynamic analysis tools. PDT implements a standard program representation, the program database (PDB), that can be accessed in a uniform way through a class library supporting common PDB operations. |
| PETSc | 3.14.4 | 2020b | broadwell, epyc, skylake | aion, iris | Numerical libraries | PETSc, pronounced PET-see (the S is silent), is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. |
| PGI | 19.10 | 2019b | broadwell, skylake | iris | Compilers | C, C++ and Fortran compilers from The Portland Group - PGI |
| PLUMED | 2.5.3, 2.7.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Chemistry | PLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state of the art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both fortran and C/C++ codes. |
| POV-Ray | 3.7.0.8 | 2020b | broadwell, epyc, skylake | aion, iris | Visualisation | The Persistence of Vision Raytracer, or POV-Ray, is a ray tracing program which generates images from a text-based scene description, and is available for a variety of computer platforms. POV-Ray is a high-quality, Free Software tool for creating stunning three-dimensional graphics. The source code is available for those wanting to do their own ports. |
| PROJ | 6.2.1, 7.2.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | Program proj is a standard Unix filter function which converts geographic longitude and latitude coordinates into cartesian coordinates |
| Pango | 1.44.7, 1.47.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | Pango is a library for laying out and rendering of text, with an emphasis on internationalization. Pango can be used anywhere that text layout is needed, though most of the work on Pango so far has been done in the context of the GTK+ widget toolkit. Pango forms the core of text and font handling for GTK+-2.x. |
| ParMETIS | 4.0.3 | 2019b | broadwell, skylake | iris | Mathematics | ParMETIS is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs, meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes. |
| ParMGridGen | 1.0 | 2019b | broadwell, skylake | iris | Mathematics | ParMGridGen is an MPI-based parallel library that is based on the serial package MGridGen, that implements (serial) algorithms for obtaining a sequence of successive coarse grids that are well-suited for geometric multigrid methods. |
| ParaView | 5.6.2, 5.8.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | ParaView is a scientific parallel visualizer. |
| Perl | 5.30.0, 5.32.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Programming Languages | Larry Wall's Practical Extraction and Report Language. This is a minimal build without any modules. Should only be used for build dependencies. |
| Pillow | 6.2.1, 8.0.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | Pillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors. |
| PyOpenGL | 3.1.5 | 2020b | broadwell, epyc, skylake | aion, iris | Visualisation | PyOpenGL is the most common cross platform Python binding to OpenGL and related APIs. |
| PyQt5 | 5.15.1 | 2020b | broadwell, epyc, skylake | aion, iris | Visualisation | PyQt5 is a set of Python bindings for v5 of the Qt application framework from The Qt Company. This bundle includes PyQtWebEngine, a set of Python bindings for The Qt Company’s Qt WebEngine framework. |
| PyQtGraph | 0.11.1 | 2020b | broadwell, epyc, skylake | aion, iris | Visualisation | PyQtGraph is a pure-python graphics and GUI library built on PyQt5/PySide2 and numpy. |
| PyTorch-Geometric | 1.6.3 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | PyTorch Geometric (PyG) is a geometric deep learning extension library for PyTorch. |
| PyTorch | 1.4.0, 1.7.1, 1.8.1, 1.9.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | Tensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first. |
| PyYAML | 5.1.2, 5.3.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | PyYAML is a YAML parser and emitter for the Python programming language. |
| Python | 2.7.16, 2.7.18, 3.7.4, 3.8.6 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Programming Languages | Python is a programming language that lets you work more quickly and integrate your systems more effectively. |
| Qt5 | 5.13.1, 5.14.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Development | Qt is a comprehensive cross-platform C++ application framework. |
| QuantumESPRESSO | 6.7 | 2019b, 2020b | broadwell, epyc, skylake | iris, aion | Chemistry | Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft). |
| RDFlib | 5.0.0 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | RDFLib is a Python library for working with RDF, a simple yet powerful language for representing information. |
| R | 3.6.2, 4.0.5 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Programming Languages | R is a free software environment for statistical computing and graphics. |
| ReFrame | 2.21, 3.6.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Development | ReFrame is a framework for writing regression tests for HPC systems. |
| Ruby | 2.7.1, 2.7.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Programming Languages | Ruby is a dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write. |
| Rust | 1.37.0 | 2019b | broadwell, skylake | iris | Programming Languages | Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety. |
| SAMtools | 1.10, 1.12 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format. |
| SCOTCH | 6.0.9, 6.1.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Mathematics | Software package and libraries for sequential and parallel graph partitioning, static mapping, and sparse matrix block ordering, and sequential mesh and hypergraph partitioning. |
| SDL2 | 2.0.14 | 2020b | broadwell, epyc, skylake | aion, iris | Libraries | SDL: Simple DirectMedia Layer, a cross-platform multimedia library |
| SIONlib | 1.7.6 | 2019b | broadwell, skylake | iris | Libraries | SIONlib is a scalable I/O library for parallel access to task-local files. The library not only supports writing and reading binary data to or from several thousands of processors into a single or a small number of physical files, but also provides global open and close functions to access SIONlib files in parallel. This package provides a stripped-down installation of SIONlib for use with performance tools (e.g., Score-P), with renamed symbols to avoid conflicts when an application using SIONlib itself is linked against a tool requiring a different SIONlib version. |
| SLEPc | 3.14.2 | 2020b | broadwell, epyc, skylake | aion, iris | Numerical libraries | SLEPc (Scalable Library for Eigenvalue Problem Computations) is a software library for the solution of large scale sparse eigenvalue problems on parallel computers. It is an extension of PETSc and can be used for either standard or generalized eigenproblems, with real or complex arithmetic. It can also be used for computing a partial SVD of a large, sparse, rectangular matrix, and to solve quadratic eigenvalue problems. |
| SQLite | 3.29.0, 3.33.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | SQLite: SQL Database Engine in a C Library |
| SWIG | 4.0.1, 4.0.2 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | SWIG is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages. |
| Salmon | 1.1.0 | 2019b | broadwell, skylake | iris | Biology | Salmon is a wicked-fast program to produce highly-accurate, transcript-level quantification estimates from RNA-seq data. |
| Salome | 8.5.0, 9.8.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | CFD/Finite element modelling | The SALOME platform is an open source software framework for pre- and post-processing and integration of numerical solvers from various scientific fields. CEA and EDF use SALOME to perform a large number of simulations, typically related to power plant equipment and alternative energy. To address these challenges, SALOME includes a CAD/CAE modelling tool, mesh generators, an advanced 3D visualization tool, etc. |
| ScaLAPACK | 2.0.2, 2.1.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Numerical libraries | The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers. |
| Scalasca | 2.5 | 2019b | broadwell, skylake | iris | Performance measurements | Scalasca is a software tool that supports the performance optimization of parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks -- in particular those concerning communication and synchronization -- and offers guidance in exploring their causes. |
| SciPy-bundle | 2019.10, 2020.11 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Programming Languages | Bundle of Python packages for scientific software |
| Score-P | 6.0 | 2019b | broadwell, skylake | iris | Performance measurements | The Score-P measurement infrastructure is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications. |
| Singularity | 3.6.0, 3.8.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | SingularityCE is an open source container platform designed to be simple, fast, and secure. Singularity is optimized for EPC and HPC workloads, allowing untrusted users to run untrusted containers in a trusted way. |
| Spack | 0.12.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Development | Spack is a package manager for supercomputers, Linux, and macOS. It makes installing scientific software easy. With Spack, you can build a package with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine. |
| Spark | 2.4.3 | 2019b | broadwell, skylake | iris | Development | Spark is Hadoop MapReduce done in memory |
| Stata | 17 | 2020b | broadwell, epyc, skylake | aion, iris | Mathematics | Stata is a complete, integrated statistical software package that provides everything you need for data analysis, data management, and graphics. |
| SuiteSparse | 5.8.1 | 2020b | broadwell, epyc, skylake | aion, iris | Numerical libraries | SuiteSparse is a collection of libraries to manipulate sparse matrices. |
| Sumo | 1.3.1 | 2019b | broadwell, skylake | iris | Utilities | Sumo is an open source, highly portable, microscopic and continuous traffic simulation package designed to handle large road networks. |
| Szip | 2.1.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | Szip compression software, providing lossless compression of scientific data |
| Tcl | 8.6.10, 8.6.9 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Programming Languages | Tcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more. |
| TensorFlow | 1.15.5, 2.1.0, 2.4.1, 2.5.0 | 2019b, 2020b | gpu, broadwell, skylake, epyc | iris, aion | Libraries | An open-source software library for Machine Intelligence |
| Theano | 1.0.4, 1.1.2 | 2019b, 2020b | gpu, broadwell, epyc, skylake | iris, aion | Mathematics | Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. |
| Tk | 8.6.10, 8.6.9 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Visualisation | Tk is an open source, cross-platform widget toolchain that provides a library of basic elements for building a graphical user interface (GUI) in many different programming languages. |
| Tkinter | 3.7.4, 3.8.6 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Programming Languages | Tkinter module, built with the Python buildsystem |
| TopHat | 2.1.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Biology | TopHat is a fast splice junction mapper for RNA-Seq reads. |
| Trinity | 2.10.0 | 2019b | broadwell, skylake | iris | Biology | Trinity represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-Seq data. Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-Seq reads. |
| UCX | 1.9.0 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | Unified Communication X An open-source production grade communication framework for data centric and high-performance applications |
| UDUNITS | 2.2.26 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Physics | UDUNITS supports conversion of unit specifications between formatted and binary forms, arithmetic manipulation of units, and conversion of values between compatible scales of measurement. |
| ULHPC-bd | 2020b | 2020b | broadwell, epyc, skylake | aion, iris | System-level software | Generic Module bundle for BigData Analytics software in use on the UL HPC Facility |
| ULHPC-bio | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | System-level software | Generic Module bundle for Bioinformatics, biology and biomedical software in use on the UL HPC Facility, especially at LCSB |
| ULHPC-cs | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | System-level software | Generic Module bundle for Computational science software in use on the UL HPC Facility, including: - Computer Aided Engineering, incl. CFD - Chemistry, Computational Chemistry and Quantum Chemistry - Data management & processing tools - Earth Sciences - Quantum Computing - Physics and physical systems simulations |
| ULHPC-dl | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | System-level software | Generic Module bundle for (CPU-version) of AI / Deep Learning / Machine Learning software in use on the UL HPC Facility |
| ULHPC-gpu | 2019b, 2020b | 2019b, 2020b | gpu | iris | System-level software | Generic Module bundle for GPU accelerated User Software in use on the UL HPC Facility |
| ULHPC-math | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | System-level software | Generic Module bundle for High-level mathematical software and Linear Algebra libraries in use on the UL HPC Facility |
| ULHPC-toolchains | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | System-level software | Generic Module bundle that contains all the dependencies required to enable toolchains and building tools/programming language in use on the UL HPC Facility |
| ULHPC-tools | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | System-level software | Misc tools, incl. - perf: Performance tools - tools: General purpose tools |
| UnZip | 6.0 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Utilities | UnZip is an extraction utility for archives compressed in .zip format (also called "zipfiles"). Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own Zip program, our primary objectives have been portability and non-MSDOS functionality. |
| VASP | 5.4.4, 6.2.1 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Physics | The Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles. |
| VMD | 1.9.4a51 | 2020b | broadwell, epyc, skylake | aion, iris | Visualisation | VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting. |
| VTK | 8.2.0, 9.0.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | The Visualization Toolkit (VTK) is an open-source, freely available software system for 3D computer graphics, image processing and visualization. VTK consists of a C++ class library and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK supports a wide variety of visualization algorithms including: scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as: implicit modeling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation. |
| VTune | 2019_update8, 2020_update3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | Intel VTune Amplifier XE is the premier performance profiler for C, C++, C#, Fortran, Assembly and Java. |
| Valgrind | 3.15.0, 3.16.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Debugging | Valgrind: Debugging and profiling tools |
| VirtualGL | 2.6.2 | 2019b | broadwell, skylake | iris | Visualisation | VirtualGL is an open source toolkit that gives any Linux or Unix remote display software the ability to run OpenGL applications with full hardware acceleration. |
| Voro++ | 0.4.6 | 2019b | broadwell, skylake | iris | Mathematics | Voro++ is a software library for carrying out three-dimensional computations of the Voronoi tessellation. A distinguishing feature of the Voro++ library is that it carries out cell-based calculations, computing the Voronoi cell for each particle individually. It is particularly well-suited for applications that rely on cell-based statistics, where features of Voronoi cells (eg. volume, centroid, number of faces) can be used to analyze a system of particles. |
| Wannier90 | 3.1.0 | 2020b | broadwell, epyc, skylake | aion, iris | Chemistry | A tool for obtaining maximally-localised Wannier functions |
| X11 | 20190717, 20201008 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | The X Window System (X11) is a windowing system for bitmap displays |
| XML-LibXML | 2.0201, 2.0206 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Data processing | Perl binding for libxml2 |
| XZ | 5.2.4, 5.2.5 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | xz: XZ utilities |
| Xerces-C++ | 3.2.2 | 2019b | broadwell, skylake | iris | Libraries | Xerces-C++ is a validating XML parser written in a portable subset of C++. Xerces-C++ makes it easy to give your application the ability to read and write XML data. A shared library is provided for parsing, generating, manipulating, and validating XML documents using the DOM, SAX, and SAX2 APIs. |
| Xvfb | 1.20.9 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Visualisation | Xvfb is an X server that can run on machines with no display hardware and no physical input devices. It emulates a dumb framebuffer using virtual memory. |
| YACS | 0.1.8 | 2020b | broadwell, epyc, skylake | aion, iris | Libraries | YACS was created as a lightweight library to define and manage system configurations, such as those commonly found in software designed for scientific experimentation. These "configurations" typically cover concepts like hyperparameters used in training a machine learning model or configurable model hyperparameters, such as the depth of a convolutional neural network. |
| Yasm | 1.3.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Programming Languages | Yasm: Complete rewrite of the NASM assembler with BSD license |
| Z3 | 4.8.10 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Utilities | Z3 is a theorem prover from Microsoft Research. |
| Zip | 3.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | Zip is a compression and file packaging/archive utility. Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own UnZip, our primary objectives have been portability and other-than-MSDOS functionality |
| ant | 1.10.6, 1.10.7, 1.10.9 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Development | Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications. |
| archspec | 0.1.0 | 2019b | broadwell, skylake | iris | Utilities | A library for detecting, labeling, and reasoning about microarchitectures |
| arpack-ng | 3.7.0, 3.8.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Numerical libraries | ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. |
| at-spi2-atk | 2.34.1, 2.38.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | AT-SPI 2 toolkit bridge |
| at-spi2-core | 2.34.0, 2.38.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | Assistive Technology Service Provider Interface. |
| binutils | 2.32, 2.35 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | binutils: GNU binary utilities |
| bokeh | 2.2.3 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Utilities | Statistical and novel interactive HTML plots for Python |
| bzip2 | 1.0.8 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | bzip2 is a freely available, patent free, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), whilst being around twice as fast at compression and six times faster at decompression. |
| cURL | 7.66.0, 7.72.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | libcurl is a free and easy-to-use client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. libcurl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, Kerberos), file transfer resume, http proxy tunneling and more. |
| cairo | 1.16.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | Cairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System (via both Xlib and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output. Experimental backends include OpenGL, BeOS, OS/2, and DirectFB |
| cuDNN | 7.6.4.38, 8.0.4.30, 8.0.5.39 | 2019b, 2020b | gpu | iris | Numerical libraries | The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. |
| dask | 2021.2.0 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Data processing | Dask natively scales Python. Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love. |
| double-conversion | 3.1.4, 3.1.5 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | Efficient binary-decimal and decimal-binary conversion routines for IEEE doubles. |
| elfutils | 0.183 | 2020b | gpu | iris | Libraries | The elfutils project provides libraries and tools for ELF files and DWARF data. |
| expat | 2.2.7, 2.2.9 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | Expat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags) |
| flatbuffers-python | 1.12 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Development | Python Flatbuffers runtime library. |
| flatbuffers | 1.12.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | FlatBuffers: Memory Efficient Serialization Library |
| flex | 2.6.4 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Programming Languages | Flex (Fast Lexical Analyzer) is a tool for generating scanners. A scanner, sometimes called a tokenizer, is a program which recognizes lexical patterns in text. |
| fontconfig | 2.13.1, 2.13.92 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | Fontconfig is a library designed to provide system-wide font configuration, customization and application access. |
| foss | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Toolchains (software stacks) | GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. |
| fosscuda | 2019b, 2020b | 2019b, 2020b | gpu | iris | Toolchains (software stacks) | GCC based compiler toolchain with CUDA support, and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. |
| freetype | 2.10.1, 2.10.3 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | FreeType 2 is a software font engine that is designed to be small, efficient, highly customizable, and portable while capable of producing high-quality output (glyph images). It can be used in graphics libraries, display servers, font conversion tools, text image generation tools, and many other products as well. |
| gc | 7.6.12 | 2019b | broadwell, skylake | iris | Libraries | The Boehm-Demers-Weiser conservative garbage collector can be used as a garbage collecting replacement for C malloc or C++ new. |
| gcccuda | 2019b, 2020b | 2019b, 2020b | gpu | iris | Toolchains (software stacks) | GNU Compiler Collection (GCC) based compiler toolchain, along with CUDA toolkit. |
| gettext | 0.19.8.1, 0.20.1, 0.21 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | GNU 'gettext' is an important step for the GNU Translation Project, as it is an asset on which we may build many other steps. This package offers to programmers, translators, and even users, a well integrated set of tools and documentation |
| gflags | 2.2.2 | 2019b | broadwell, skylake | iris | Development | The gflags package contains a C++ library that implements commandline flags processing. It includes built-in support for standard types such as string and the ability to define flags in the source file in which they are used. |
| giflib | 5.2.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | giflib is a library for reading and writing gif images. It is API and ABI compatible with libungif which was in wide use while the LZW compression algorithm was patented. |
| git | 2.23.0, 2.28.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. |
| glog | 0.4.0 | 2019b | broadwell, skylake | iris | Development | A C++ implementation of the Google logging module. |
| gmsh | 4.4.0 | 2019b | broadwell, skylake | iris | CFD/Finite element modelling | Gmsh is a 3D finite element grid generator with a built-in CAD engine and post-processor. |
| gmsh | 4.8.4 | 2020b | broadwell, epyc, skylake | aion, iris | Mathematics | Gmsh is a 3D finite element grid generator with a built-in CAD engine and post-processor. |
| gnuplot | 5.2.8, 5.4.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Visualisation | Portable interactive, function plotting utility |
| gocryptfs | 1.7.1, 2.0.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | Encrypted overlay filesystem written in Go. gocryptfs uses file-based encryption that is implemented as a mountable FUSE filesystem. Each file in gocryptfs is stored as one corresponding encrypted file on the hard disk. |
| gompi | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Toolchains (software stacks) | GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support. |
| gompic | 2019b, 2020b | 2019b, 2020b | gpu | iris | Toolchains (software stacks) | GNU Compiler Collection (GCC) based compiler toolchain along with CUDA toolkit, including OpenMPI for MPI support with CUDA features enabled. |
| googletest | 1.10.0 | 2019b | broadwell, skylake | iris | Development | Google's framework for writing C++ tests on a variety of platforms |
| gperf | 3.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | GNU gperf is a perfect hash function generator. For a given list of strings, it produces a hash function and hash table, in form of C or C++ code, for looking up a value depending on the input string. The hash function is perfect, which means that the hash table has no collisions, and the hash table lookup needs a single string comparison only. |
| groff | 1.22.4 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Utilities | Groff (GNU troff) is a typesetting system that reads plain text mixed with formatting commands and produces formatted output. |
| gzip | 1.10 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Utilities | gzip (GNU zip) is a popular data compression program as a replacement for compress |
| h5py | 2.10.0, 3.1.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Data processing | HDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific software library designed for the fast, flexible storage of enormous amounts of data. |
| help2man | 1.47.16, 1.47.4, 1.47.8 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Utilities | help2man produces simple manual pages from the '--help' and '--version' output of other commands. |
| hwloc | 1.11.12, 2.2.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | System-level software | The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently. |
| hypothesis | 4.44.2, 5.41.2, 5.41.5 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | Hypothesis is an advanced testing library for Python. It lets you write tests which are parametrized by a source of examples, and then generates simple and comprehensible examples that make your tests fail. This lets you find more bugs in your code with less work. |
| iccifort | 2019.5.281, 2020.4.304 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Compilers | Intel C, C++ & Fortran compilers |
| iccifortcuda | 2019b, 2020b | 2019b, 2020b | gpu | iris | Toolchains (software stacks) | Intel C, C++ & Fortran compilers with CUDA toolkit |
| iimpi | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Toolchains (software stacks) | Intel C/C++ and Fortran compilers, alongside Intel MPI. |
| iimpic | 2019b, 2020b | 2019b, 2020b | gpu | iris | Toolchains (software stacks) | Intel C/C++ and Fortran compilers, alongside Intel MPI and CUDA. |
| imkl | 2019.5.281, 2020.4.304 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Numerical libraries | Intel Math Kernel Library is a library of highly optimized, extensively threaded math routines for science, engineering, and financial applications that require maximum performance. Core math functions include BLAS, LAPACK, ScaLAPACK, Sparse Solvers, Fast Fourier Transforms, Vector Math, and more. |
| impi | 2018.5.288, 2019.9.304 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | MPI | Intel MPI Library, compatible with MPICH ABI |
| intel | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Toolchains (software stacks) | Compiler toolchain including Intel compilers, Intel MPI and Intel Math Kernel Library (MKL). |
| intelcuda | 2019b, 2020b | 2019b, 2020b | gpu | iris | Toolchains (software stacks) | Intel Cluster Toolkit Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MPI & Intel MKL, with CUDA toolkit |
| intltool | 0.51.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | intltool is a set of tools to centralize translation of many different file formats using GNU gettext-compatible PO files. |
| itac | 2019.4.036 | 2019b | broadwell, skylake | iris | Utilities | The Intel Trace Collector is a low-overhead tracing library that performs event-based tracing in applications. The Intel Trace Analyzer provides a convenient way to monitor application activities gathered by the Intel Trace Collector through graphical displays. |
| jemalloc | 5.2.1 | 2019b | broadwell, skylake | iris | Libraries | jemalloc is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support. |
| kallisto | 0.46.1 | 2019b | broadwell, skylake | iris | Biology | kallisto is a program for quantifying abundances of transcripts from RNA-Seq data, or more generally of target sequences using high-throughput sequencing reads. |
| kim-api | 2.1.3 | 2019b | broadwell, skylake | iris | Chemistry | Open Knowledgebase of Interatomic Models. KIM is an API and OpenKIM is a collection of interatomic models (potentials) for atomistic simulations. This is a library that can be used by simulation programs to get access to the models in the OpenKIM database. This EasyBuild only installs the API, the models can be installed with the package openkim-models, or the user can install them manually by running kim-api-collections-management install user MODELNAME or kim-api-collections-management install user OpenKIM to install them all. |
| libGLU | 9.0.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | The OpenGL Utility Library (GLU) is a computer graphics library for OpenGL. |
| libarchive | 3.4.3 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Utilities | Multi-format archive and compression library |
| libcerf | 1.13, 1.14 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Mathematics | libcerf is a self-contained numeric library that provides an efficient and accurate implementation of complex error functions, along with Dawson, Faddeeva, and Voigt functions. |
| libctl | 4.0.0 | 2019b | broadwell, skylake | iris | Chemistry | libctl is a free Guile-based library implementing flexible control files for scientific simulations. |
| libdrm | 2.4.102, 2.4.99 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | Direct Rendering Manager runtime library. |
| libepoxy | 1.5.4 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Libraries | Epoxy is a library for handling OpenGL function pointer management for you |
| libevent | 2.1.11, 2.1.12 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Libraries | The libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also support callbacks due to signals or regular timeouts. |
| libffi | 3.2.1, 3.3 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | The libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time. |
| libgd | 2.2.5, 2.3.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Libraries | GD is an open source code library for the dynamic creation of images by programmers. |
| libgeotiff | 1.5.1, 1.6.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | Library for reading and writing coordinate system information from/to GeoTIFF files |
| libglvnd | 1.2.0, 1.3.2 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Libraries | libglvnd is a vendor-neutral dispatch layer for arbitrating OpenGL API calls between multiple vendors. |
| libgpuarray | 0.7.6 | 2019b, 2020b | gpu | iris | Libraries | Library to manipulate tensors on the GPU. |
| libiconv | 1.16 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | Libiconv converts from one character encoding to another through Unicode conversion |
| libjpeg-turbo | 2.0.3, 2.0.5 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | libjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD to accelerate baseline JPEG compression and decompression. libjpeg is a library that implements JPEG image encoding, decoding and transcoding. |
| libmatheval | 1.1.11 | 2019b | broadwell, skylake | iris | Libraries | GNU libmatheval is a library (callable from C and Fortran) to parse and evaluate symbolic expressions input as text. |
| libogg | 1.3.4 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | Ogg is a multimedia container format, and the native file and stream format for the Xiph.org multimedia codecs. |
| libpciaccess | 0.14, 0.16 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | System-level software | Generic PCI access library. |
| libpng | 1.6.37 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | libpng is the official PNG reference library |
| libreadline | 8.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | The GNU Readline library provides a set of functions for use by applications that allow users to edit command lines as they are typed in. Both Emacs and vi editing modes are available. The Readline library includes additional functions to maintain a list of previously-entered command lines, to recall and perhaps reedit those lines, and perform csh-like history expansion on previous commands. |
| libsndfile | 1.0.28 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | Libsndfile is a C library for reading and writing files containing sampled sound (such as MS Windows WAV and the Apple/SGI AIFF format) through one standard library interface. |
| libtirpc | 1.3.1 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | Libtirpc is a port of Suns Transport-Independent RPC library to Linux. |
| libtool | 2.4.6 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | GNU libtool is a generic library support script. Libtool hides the complexity of using shared libraries behind a consistent, portable interface. |
| libunistring | 0.9.10 | 2019b | broadwell, skylake | iris | Libraries | This library provides functions for manipulating Unicode strings and for manipulating C strings according to the Unicode standard. |
| libunwind | 1.3.1, 1.4.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | The primary goal of libunwind is to define a portable and efficient C programming interface (API) to determine the call-chain of a program. The API additionally provides the means to manipulate the preserved (callee-saved) state of each call-frame and to resume execution at any point in the call-chain (non-local goto). The API supports both local (same-process) and remote (across-process) operation. As such, the API is useful in a number of applications |
| libvorbis | 1.3.7 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | Ogg Vorbis is a fully open, non-proprietary, patent-and-royalty-free, general-purpose compressed audio format |
| libwebp | 1.1.0 | 2020b | broadwell, epyc, skylake | aion, iris | Libraries | WebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster. |
| libxc | 4.3.4, 5.1.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Chemistry | Libxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals. |
| libxml2 | 2.9.10, 2.9.9 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | Libxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform). |
| libxslt | 1.1.34 | 2019b | broadwell, skylake | iris | Libraries | Libxslt is the XSLT C library developed for the GNOME project (but usable outside of the Gnome platform). |
| libyaml | 0.2.2, 0.2.5 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | LibYAML is a YAML parser and emitter written in C. |
| lxml | 4.4.2 | 2019b | broadwell, skylake | iris | Libraries | The lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt. |
| lz4 | 1.9.2 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | LZ4 is a lossless compression algorithm, providing compression speed at 400 MB/s per core. It features an extremely fast decoder, with speed in multiple GB/s per core. |
| magma | 2.5.1, 2.5.4 | 2019b, 2020b | gpu | iris | Mathematics | The MAGMA project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current Multicore+GPU systems. |
| makeinfo | 6.7 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Development | makeinfo is part of the Texinfo project, the official documentation format of the GNU project. |
| matplotlib | 3.1.1, 3.3.3 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Visualisation | matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell, web application servers, and six graphical user interface toolkits. |
| molmod | 1.4.5 | 2019b | broadwell, skylake | iris | Mathematics | MolMod is a Python library with many components that are useful to write molecular modeling programs. |
| ncurses | 6.0, 6.1, 6.2 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | The Ncurses (new curses) library is a free software emulation of curses in System V Release 4.0, and more. It uses Terminfo format, supports pads and color and multiple highlights and forms characters and function-key mapping, and has all the other SYSV-curses enhancements over BSD Curses. |
| netCDF-Fortran | 4.5.2, 4.5.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Data processing | NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. |
| netCDF | 4.7.1, 4.7.4 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Data processing | NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. |
| nettle | 3.5.1, 3.6 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | Nettle is a cryptographic library that is designed to fit easily in more or less any context: In crypto toolkits for object-oriented languages (C++, Python, Pike, ...), in applications like LSH or GNUPG, or even in kernel space. |
| networkx | 2.5 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Utilities | NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. |
| nodejs | 12.19.0 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Programming Languages | Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. |
| nsync | 1.24.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | nsync is a C library that exports various synchronization primitives, such as mutexes |
| numactl | 2.0.12, 2.0.13 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Utilities | The numactl program allows you to run your application program on specific cpu's and memory nodes. It does this by supplying a NUMA memory policy to the operating system before running your program. The libnuma library provides convenient ways for you to add NUMA memory policies into your own program. |
| numba | 0.52.0 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Programming Languages | Numba is an Open Source NumPy-aware optimizing compiler for Python sponsored by Continuum Analytics, Inc. It uses the remarkable LLVM compiler infrastructure to compile Python syntax to machine code. |
| phonopy | 2.2.0 | 2019b | broadwell, skylake | iris | Libraries | Phonopy is an open source package of phonon calculations based on the supercell approach. |
| pixman | 0.38.4, 0.40.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Visualisation | Pixman is a low-level software library for pixel manipulation, providing features such as image compositing and trapezoid rasterization. Important users of pixman are the cairo graphics library and the X server. |
| pkg-config | 0.29.2 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | pkg-config is a helper tool used when compiling applications and libraries. It helps you insert the correct compiler options on the command line so an application can use gcc -o test test.c pkg-config --libs --cflags glib-2.0 for instance, rather than hard-coding values on where to find glib (or other libraries). |
| pkgconfig | 1.5.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | pkgconfig is a Python module to interface with the pkg-config command line tool |
| pocl | 1.4, 1.6 | 2019b, 2020b | gpu | iris | Libraries | Pocl is a portable open source (MIT-licensed) implementation of the OpenCL standard |
| protobuf-python | 3.10.0, 3.14.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Development | Python Protocol Buffers runtime library. |
| protobuf | 2.5.0, 3.10.0, 3.14.0 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Development | Google Protocol Buffers |
| pybind11 | 2.4.3, 2.6.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | pybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code. |
| re2c | 1.2.1, 2.0.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Utilities | re2c is a free and open-source lexer generator for C and C++. Its main goal is generating fast lexers: at least as fast as their reasonably optimized hand-coded counterparts. Instead of using traditional table-driven approach, re2c encodes the generated finite state automata directly in the form of conditional jumps and comparisons. |
| scikit-build | 0.11.1 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | Scikit-Build, or skbuild, is an improved build system generator for CPython C/C++/Fortran/Cython extensions. |
| scikit-image | 0.18.1 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Visualisation | scikit-image is a collection of algorithms for image processing. |
| scikit-learn | 0.23.2 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Data processing | Scikit-learn integrates machine learning algorithms in the tightly-knit scientific Python world, building upon numpy, scipy, and matplotlib. As a machine-learning module, it provides versatile tools for data mining and analysis in any field of science and engineering. It strives to be simple and efficient, accessible to everybody, and reusable in various contexts. |
| scipy | 1.4.1 | 2019b | broadwell, skylake, gpu | iris | Mathematics | SciPy is a collection of mathematical algorithms and convenience functions built on the Numpy extension for Python. |
| setuptools | 41.0.1 | 2019b | broadwell, skylake | iris | Development | Easily download, build, install, upgrade, and uninstall Python packages |
| snappy | 1.1.7, 1.1.8 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libraries | Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. |
| sparsehash | 2.0.3, 2.0.4 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Development | An extremely memory-efficient hash_map implementation. 2 bits/entry overhead! The SparseHash library contains several hash-map implementations, including implementations that optimize for space or speed. |
| spglib-python | 1.16.0 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Chemistry | Spglib for Python. Spglib is a library for finding and handling crystal symmetries written in C. |
| tbb | 2019_U9, 2020.2, 2020.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Libraries | Intel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability. |
| texinfo | 6.7 | 2019b | broadwell, skylake | iris | Development | Texinfo is the official documentation format of the GNU project. |
| tqdm | 4.56.2 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libraries | A fast, extensible progress bar for Python and CLI |
| typing-extensions | 3.7.4.3 | 2019b, 2020b | gpu, broadwell, epyc, skylake | iris, aion | Development | Typing Extensions – Backported and Experimental Type Hints for Python |
    util-linux2.34, 2.362019b, 2020bbroadwell, skylake, gpu, epyciris, aionUtilitiesSet of Linux utilities
    x26420190925, 202010262019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationx264 is a free software library and application for encoding video streams into the H.264/MPEG-4 AVC compression format, and is released under the terms of the GNU GPL.
    x2653.2, 3.32019b, 2020bbroadwell, skylake, gpu, epyciris, aionVisualisationx265 is a free software library and application for encoding video streams into the H.265 AVC compression format, and is released under the terms of the GNU GPL.
    xorg-macros1.19.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionDevelopmentX.org macros utilities.
    xprop1.2.4, 1.2.52019b, 2020bbroadwell, skylake, epyciris, aionVisualisationThe xprop utility is for displaying window and font properties in an X server. One window or font is selected using the command line arguments or possibly in the case of a window, by clicking on the desired window. A list of properties is then given, possibly with formatting information.
    yaff1.6.02019bbroadwell, skylakeirisChemistryYaff stands for 'Yet another force field'. It is a pythonic force-field code.
    zlib1.2.112019b, 2020bbroadwell, skylake, gpu, epyciris, aionLibrarieszlib is designed to be a free, general-purpose, legally unencumbered -- that is, not covered by any patents -- lossless data-compression library for use on virtually any computer hardware and operating system.
    zstd1.4.52020bbroadwell, epyc, skylake, gpuaion, irisLibrariesZstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression/speed trade-off, while being backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression, and can create dictionaries from any sample set.
diff --git a/software/swsets/bio/index.html b/software/swsets/bio/index.html
new file mode 100644
index 00000000..0f2e4f34

    Biology

Alphabetical list of available ULHPC software belonging to the 'bio' category.
To load a software of this category, use: module load bio/<software>[/<version>]
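For example, a minimal sketch of locating and loading one of the modules listed below (SAMtools); the exact module version string on a given cluster may carry a toolchain suffix, so check module avail first:

```bash
# List the SAMtools builds provided in the 'bio' category
module avail bio/SAMtools

# Load one of the versions listed in the table below (1.10 or 1.12)
module load bio/SAMtools/1.12

# The tool should now be on your PATH
samtools --version
```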

Software | Versions | Swsets | Architectures | Clusters | Description
    ABySS2.2.52020bbroadwell, epyc, skylakeaion, irisAssembly By Short Sequences - a de novo, parallel, paired-end sequence assembler
    BEDTools2.29.2, 2.30.02019b, 2020bbroadwell, skylake, epyciris, aionBEDTools: a powerful toolset for genome arithmetic. The BEDTools utilities allow one to address common genomics tasks such as finding feature overlaps and computing coverage. The utilities are largely based on four widely-used file formats: BED, GFF/GTF, VCF, and SAM/BAM.
    BLAST+2.11.0, 2.9.02020b, 2019bbroadwell, epyc, skylakeaion, irisBasic Local Alignment Search Tool, or BLAST, is an algorithm for comparing primary biological sequence information, such as the amino-acid sequences of different proteins or the nucleotides of DNA sequences.
    BWA0.7.172019b, 2020bbroadwell, skylake, epyciris, aionBurrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome.
    BamTools2.5.12019b, 2020bbroadwell, skylake, epyciris, aionBamTools provides both a programmer's API and an end-user's toolkit for handling BAM files.
    BioPerl1.7.2, 1.7.82019b, 2020bbroadwell, skylake, epyciris, aionBioperl is the product of a community effort to produce Perl code which is useful in biology. Examples include Sequence objects, Alignment objects and database searching objects.
    Bowtie22.3.5.1, 2.4.22019b, 2020bbroadwell, skylake, epyciris, aionBowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes. Bowtie 2 indexes the genome with an FM Index to keep its memory footprint small: for the human genome, its memory footprint is typically around 3.2 GB. Bowtie 2 supports gapped, local, and paired-end alignment modes.
    FastQC0.11.92019b, 2020bbroadwell, skylake, epyciris, aionFastQC is a quality control application for high throughput sequence data. It reads in sequence data in a variety of formats and can either provide an interactive application to review the results of several different QC checks, or create an HTML based report which can be integrated into a pipeline.
    GROMACS2019.4, 2019.6, 2020, 2021, 2021.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionGROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a CPU only build, containing both MPI and threadMPI builds for both single and double precision. It also contains the gmxapi extension for the single precision MPI build.
    HTSlib1.10.2, 1.122019b, 2020bbroadwell, skylake, epyciris, aionA C library for reading/writing high-throughput sequencing data. This package includes the utilities bgzip and tabix
    Jellyfish2.3.02019bbroadwell, skylakeirisJellyfish is a tool for fast, memory-efficient counting of k-mers in DNA.
    SAMtools1.10, 1.122019b, 2020bbroadwell, skylake, epyciris, aionSAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
    Salmon1.1.02019bbroadwell, skylakeirisSalmon is a wicked-fast program to produce a highly-accurate, transcript-level quantification estimates from RNA-seq data.
    TopHat2.1.22019b, 2020bbroadwell, skylake, epyciris, aionTopHat is a fast splice junction mapper for RNA-Seq reads.
    Trinity2.10.02019bbroadwell, skylakeirisTrinity represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-Seq data. Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-Seq reads.
    kallisto0.46.12019bbroadwell, skylakeiriskallisto is a program for quantifying abundances of transcripts from RNA-Seq data, or more generally of target sequences using high-throughput sequencing reads.
diff --git a/software/swsets/cae/index.html b/software/swsets/cae/index.html
new file mode 100644
index 00000000..1ff41f56

    CFD/Finite element modelling

Alphabetical list of available ULHPC software belonging to the 'cae' category.
To load a software of this category, use: module load cae/<software>[/<version>]
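As an illustration, loading the OpenFOAM build listed below could look as follows (a sketch; the full module name on the cluster may include additional suffixes):

```bash
# Load OpenFOAM v1912 from the 'cae' category
module load cae/OpenFOAM/v1912

# Inspect the paths and environment variables the module sets
module show cae/OpenFOAM/v1912
```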

Software | Versions | Swsets | Architectures | Clusters | Description
ABAQUS | 2018, 2021 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Finite Element Analysis software for modeling, visualization and best-in-class implicit and explicit dynamics FEA.
OpenFOAM-Extend | 4.1-20200408 | 2019b | broadwell, skylake | iris | OpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
OpenFOAM | 8, v1912 | 2020b, 2019b | epyc, broadwell, skylake | aion, iris | OpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
Salome | 8.5.0, 9.8.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | The SALOME platform is an open source software framework for pre- and post-processing and integration of numerical solvers from various scientific fields. CEA and EDF use SALOME to perform a large number of simulations, typically related to power plant equipment and alternative energy. To address these challenges, SALOME includes a CAD/CAE modelling tool, mesh generators, an advanced 3D visualization tool, etc.
gmsh | 4.4.0 | 2019b | broadwell, skylake | iris | Gmsh is a three-dimensional finite element mesh generator with a built-in CAD engine and post-processor.
diff --git a/software/swsets/chem/index.html b/software/swsets/chem/index.html
new file mode 100644
index 00000000..933ef1ee

    Chemistry

Alphabetical list of available ULHPC software belonging to the 'chem' category.
To load a software of this category, use: module load chem/<software>[/<version>]
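For instance, a sketch of loading the QuantumESPRESSO module listed below (version 6.7); verify the exact module name with module avail, as it may include a toolchain suffix:

```bash
# Search for QuantumESPRESSO within the 'chem' category
module avail chem/QuantumESPRESSO

# Load the listed version
module load chem/QuantumESPRESSO/6.7
```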

Software | Versions | Swsets | Architectures | Clusters | Description
    ABINIT9.4.12020bepycaionABINIT is a package whose main program allows one to find the total energy, charge density and electronic structure of systems made of electrons and nuclei (molecules and periodic solids) within Density Functional Theory (DFT), using pseudopotentials and a planewave or wavelet basis.
    ASE3.19.0, 3.20.1, 3.21.12019b, 2020bbroadwell, skylake, epyc, gpuiris, aionASE is a python package providing an open source Atomic Simulation Environment in the Python scripting language. From version 3.20.1 we also include the ase-ext package, it contains optional reimplementations in C of functions in ASE. ASE uses it automatically when installed.
    CRYSTAL172019bbroadwell, skylakeirisThe CRYSTAL package performs ab initio calculations of the ground state energy, energy gradient, electronic wave function and properties of periodic systems. Hartree-Fock or Kohn- Sham Hamiltonians (that adopt an Exchange-Correlation potential following the postulates of Density-Functional Theory) can be used.
    GPAW-setups0.9.200002019bbroadwell, skylakeirisPAW setup for the GPAW Density Functional Theory package. Users can install setups manually using 'gpaw install-data' or use setups from this package. The versions of GPAW and GPAW-setups can be intermixed.
    GPAW20.1.02019bbroadwell, skylakeirisGPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). It uses real-space uniform grids and multigrid methods or atom-centered basis-functions.
    NAMD2.132019bbroadwell, skylakeirisNAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.
    PLUMED2.5.3, 2.7.02019b, 2020bbroadwell, skylake, epyciris, aionPLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state of the art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both fortran and C/C++ codes.
    QuantumESPRESSO6.72019b, 2020bbroadwell, epyc, skylakeiris, aionQuantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).
    Wannier903.1.02020bbroadwell, epyc, skylakeaion, irisA tool for obtaining maximally-localised Wannier functions
    kim-api2.1.32019bbroadwell, skylakeirisOpen Knowledgebase of Interatomic Models. KIM is an API and OpenKIM is a collection of interatomic models (potentials) for atomistic simulations. This is a library that can be used by simulation programs to get access to the models in the OpenKIM database. This EasyBuild only installs the API, the models can be installed with the package openkim-models, or the user can install them manually by running kim-api-collections-management install user MODELNAME or kim-api-collections-management install user OpenKIM to install them all.
    libctl4.0.02019bbroadwell, skylakeirislibctl is a free Guile-based library implementing flexible control files for scientific simulations.
    libxc4.3.4, 5.1.22019b, 2020bbroadwell, skylake, epyciris, aionLibxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.
    spglib-python1.16.02020bbroadwell, epyc, skylake, gpuaion, irisSpglib for Python. Spglib is a library for finding and handling crystal symmetries written in C.
    yaff1.6.02019bbroadwell, skylakeirisYaff stands for 'Yet another force field'. It is a pythonic force-field code.
diff --git a/software/swsets/compiler/index.html b/software/swsets/compiler/index.html
new file mode 100644
index 00000000..a2046c00

    Compilers

Alphabetical list of available ULHPC software belonging to the 'compiler' category.
To load a software of this category, use: module load compiler/<software>[/<version>]
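For example, a sketch of switching to the GCC 10.2.0 compiler listed below and verifying that it is the one picked up by your shell:

```bash
# Load GCC from the 'compiler' category
module load compiler/GCC/10.2.0

# Confirm which gcc is now active
which gcc
gcc --version
```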

Software | Versions | Swsets | Architectures | Clusters | Description
AOCC | 3.1.0 | 2020b | epyc | aion | AMD Optimized C/C++ & Fortran compilers (AOCC) based on LLVM 12.0
Clang | 11.0.1, 9.0.1 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | C, C++, Objective-C compiler, based on LLVM. Does not include C++ standard library -- use libstdc++ from GCC.
GCC | 10.2.0, 8.3.0 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
GCCcore | 10.2.0, 8.3.0 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
Go | 1.14.1, 1.16.6 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Go is an open source programming language that makes it easy to build simple, reliable, and efficient software.
LLVM | 10.0.1, 11.0.0, 9.0.0, 9.0.1 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified code representation known as the LLVM intermediate representation ("LLVM IR"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator.
PGI | 19.10 | 2019b | broadwell, skylake | iris | C, C++ and Fortran compilers from The Portland Group - PGI
iccifort | 2019.5.281, 2020.4.304 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Intel C, C++ & Fortran compilers
diff --git a/software/swsets/data/index.html b/software/swsets/data/index.html
new file mode 100644
index 00000000..df0dac24

    Data processing

Alphabetical list of available ULHPC software belonging to the 'data' category.
To load a software of this category, use: module load data/<software>[/<version>]
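As an example, a sketch of loading the HDF5 module listed below so that its headers, libraries and command-line tools become available:

```bash
# Load HDF5 1.10.7 from the 'data' category
module load data/HDF5/1.10.7

# The HDF5 command-line tools should now be on your PATH
which h5dump
```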

Software | Versions | Swsets | Architectures | Clusters | Description
    Arrow0.16.02019bbroadwell, skylakeirisApache Arrow (incl. PyArrow Python bindings)), a cross-language development platform for in-memory data.
    DB_File1.8552020bbroadwell, epyc, skylakeaion, irisPerl5 access to Berkeley DB version 1.x.
    GDAL3.0.2, 3.2.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionGDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing.
    HDF51.10.5, 1.10.72019b, 2020bbroadwell, skylake, gpu, epyciris, aionHDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.
    HDF4.2.152020bbroadwell, epyc, skylake, gpuaion, irisHDF (also known as HDF4) is a library and multi-object file format for storing and managing data between machines.
    LAME3.1002019b, 2020bbroadwell, skylake, gpu, epyciris, aionLAME is a high quality MPEG Audio Layer III (MP3) encoder licensed under the LGPL.
    XML-LibXML2.0201, 2.02062019b, 2020bbroadwell, skylake, epyciris, aionPerl binding for libxml2
    dask2021.2.02020bbroadwell, epyc, skylake, gpuaion, irisDask natively scales Python. Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love.
    h5py2.10.0, 3.1.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionHDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific software library designed for the fast, flexible storage of enormous amounts of data.
    netCDF-Fortran4.5.2, 4.5.32019b, 2020bbroadwell, skylake, epyciris, aionNetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
    netCDF4.7.1, 4.7.42019b, 2020bbroadwell, skylake, gpu, epyciris, aionNetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
    scikit-learn0.23.22020bbroadwell, epyc, skylake, gpuaion, irisScikit-learn integrates machine learning algorithms in the tightly-knit scientific Python world, building upon numpy, scipy, and matplotlib. As a machine-learning module, it provides versatile tools for data mining and analysis in any field of science and engineering. It strives to be simple and efficient, accessible to everybody, and reusable in various contexts.
diff --git a/software/swsets/debugger/index.html b/software/swsets/debugger/index.html
new file mode 100644
index 00000000..83b6c91e

    Debugging

Alphabetical list of available ULHPC software belonging to the 'debugger' category.
To load a software of this category, use: module load debugger/<software>[/<version>]
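For instance, a sketch of loading Valgrind and running it on an executable of your own (./my_program is a placeholder, not a real file):

```bash
# Load Valgrind from the 'debugger' category
module load debugger/Valgrind/3.16.1

# Run a memory check on your own program (placeholder name)
valgrind --leak-check=full ./my_program
```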

Software | Versions | Swsets | Architectures | Clusters | Description
GDB | 10.1, 9.1 | 2020b, 2019b | broadwell, epyc, skylake | aion, iris | The GNU Project Debugger
Valgrind | 3.15.0, 3.16.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Valgrind: Debugging and profiling tools
diff --git a/software/swsets/devel/index.html b/software/swsets/devel/index.html
new file mode 100644
index 00000000..918896d9

    Development

Alphabetical list of available ULHPC software belonging to the 'devel' category.
To load a software of this category, use: module load devel/<software>[/<version>]
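For example, a sketch of loading a recent CMake from this category before configuring a build:

```bash
# Load CMake 3.18.4 from the 'devel' category
module load devel/CMake/3.18.4

# Verify the version that will be used for configuration
cmake --version
```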

Software | Versions | Swsets | Architectures | Clusters | Description
    Autoconf2.692019b, 2020bbroadwell, skylake, gpu, epyciris, aionAutoconf is an extensible package of M4 macros that produce shell scripts to automatically configure software source code packages. These scripts can adapt the packages to many kinds of UNIX-like systems without manual user intervention. Autoconf creates a configuration script for a package from a template file that lists the operating system features that the package can use, in the form of M4 macro calls.
    Automake1.16.1, 1.16.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionAutomake: GNU Standards-compliant Makefile generator
    Autotools20180311, 202003212019b, 2020bbroadwell, skylake, gpu, epyciris, aionThis bundle collect the standard GNU build tools: Autoconf, Automake and libtool
    Bazel0.26.1, 0.29.1, 3.7.22019b, 2020bgpu, broadwell, skylake, epyciris, aionBazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software.
    Boost1.71.0, 1.74.02019b, 2020bbroadwell, skylake, epyciris, aionBoost provides free peer-reviewed portable C++ source libraries.
    CMake3.15.3, 3.18.4, 3.20.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionCMake, the cross-platform, open-source build system. CMake is a family of tools designed to build, test and package software.
    DBus1.13.12, 1.13.182019b, 2020bbroadwell, skylake, epyciris, aionD-Bus is a message bus system, a simple way for applications to talk to one another. In addition to interprocess communication, D-Bus helps coordinate process lifecycle; it makes it simple and reliable to code a "single instance" application or daemon, and to launch applications and daemons on demand when their services are needed.
    Doxygen1.8.16, 1.8.202019b, 2020bbroadwell, skylake, gpu, epyciris, aionDoxygen is a documentation system for C++, C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors), Fortran, VHDL, PHP, C#, and to some extent D.
    Flink1.11.22020bbroadwell, epyc, skylakeaion, irisApache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments, perform computations at in-memory speed and at any scale.
    GObject-Introspection1.63.1, 1.66.12019b, 2020bbroadwell, skylake, epyciris, aionGObject introspection is a middleware layer between C libraries (using GObject) and language bindings. The C library can be scanned at compile time and generate a metadata file, in addition to the actual native C library. Then at runtime, language bindings can read this metadata and automatically provide bindings to call into the C library.
    M41.4.182019b, 2020bbroadwell, skylake, gpu, epyciris, aionGNU M4 is an implementation of the traditional Unix macro processor. It is mostly SVR4 compatible although it has some extensions (for example, handling more than 9 positional parameters to macros). GNU M4 also has built-in functions for including files, running shell commands, doing arithmetic, etc.
    Mako1.1.0, 1.1.32019b, 2020bbroadwell, skylake, gpu, epyciris, aionA super-fast templating language that borrows the best ideas from the existing templating languages
    Maven3.6.32019b, 2020bbroadwell, skylake, epyciris, aionBinary maven install, Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information.
    PCRE210.33, 10.352019b, 2020bbroadwell, skylake, epyc, gpuiris, aionThe PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
    PCRE8.43, 8.442019b, 2020bbroadwell, skylake, gpu, epyciris, aionThe PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
    PyTorch1.4.0, 1.7.1, 1.8.1, 1.9.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionTensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.
    Qt55.13.1, 5.14.22019b, 2020bbroadwell, skylake, epyciris, aionQt is a comprehensive cross-platform C++ application framework.
    ReFrame2.21, 3.6.32019b, 2020bbroadwell, skylake, epyciris, aionReFrame is a framework for writing regression tests for HPC systems.
    SQLite3.29.0, 3.33.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionSQLite: SQL Database Engine in a C Library
    SWIG4.0.1, 4.0.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionSWIG is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages.
    Spack0.12.12019b, 2020bbroadwell, skylake, epyciris, aionSpack is a package manager for supercomputers, Linux, and macOS. It makes installing scientific software easy. With Spack, you can build a package with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine.
    Spark2.4.32019bbroadwell, skylakeirisSpark is Hadoop MapReduce done in memory
    ant1.10.6, 1.10.7, 1.10.92019b, 2020bbroadwell, skylake, epyciris, aionApache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications.
    flatbuffers-python1.122020bbroadwell, epyc, skylake, gpuaion, irisPython Flatbuffers runtime library.
    flatbuffers1.12.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionFlatBuffers: Memory Efficient Serialization Library
    gflags2.2.22019bbroadwell, skylakeirisThe gflags package contains a C++ library that implements commandline flags processing. It includes built-in support for standard types such as string and the ability to define flags in the source file in which they are used.
    glog0.4.02019bbroadwell, skylakeirisA C++ implementation of the Google logging module.
    googletest1.10.02019bbroadwell, skylakeirisGoogle's framework for writing C++ tests on a variety of platforms
    gperf3.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionGNU gperf is a perfect hash function generator. For a given list of strings, it produces a hash function and hash table, in form of C or C++ code, for looking up a value depending on the input string. The hash function is perfect, which means that the hash table has no collisions, and the hash table lookup needs a single string comparison only.
    intltool0.51.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionintltool is a set of tools to centralize translation of many different file formats using GNU gettext-compatible PO files.
    makeinfo6.72020bbroadwell, epyc, skylake, gpuaion, irismakeinfo is part of the Texinfo project, the official documentation format of the GNU project.
    ncurses6.0, 6.1, 6.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionThe Ncurses (new curses) library is a free software emulation of curses in System V Release 4.0, and more. It uses Terminfo format, supports pads and color and multiple highlights and forms characters and function-key mapping, and has all the other SYSV-curses enhancements over BSD Curses.
    nsync1.24.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionnsync is a C library that exports various synchronization primitives, such as mutexes
    pkg-config0.29.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionpkg-config is a helper tool used when compiling applications and libraries. It helps you insert the correct compiler options on the command line so an application can use gcc -o test test.c pkg-config --libs --cflags glib-2.0 for instance, rather than hard-coding values on where to find glib (or other libraries).
    pkgconfig1.5.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionpkgconfig is a Python module to interface with the pkg-config command line tool
    protobuf-python3.10.0, 3.14.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionPython Protocol Buffers runtime library.
    protobuf2.5.0, 3.10.0, 3.14.02019b, 2020bbroadwell, skylake, epyc, gpuiris, aionGoogle Protocol Buffers
    setuptools41.0.12019bbroadwell, skylakeirisEasily download, build, install, upgrade, and uninstall Python packages
    sparsehash2.0.3, 2.0.42019b, 2020bbroadwell, skylake, epyciris, aionAn extremely memory-efficient hash_map implementation. 2 bits/entry overhead! The SparseHash library contains several hash-map implementations, including implementations that optimize for space or speed.
    texinfo6.72019bbroadwell, skylakeirisTexinfo is the official documentation format of the GNU project.
    typing-extensions3.7.4.32019b, 2020bgpu, broadwell, epyc, skylakeiris, aionTyping Extensions – Backported and Experimental Type Hints for Python
    xorg-macros1.19.22019b, 2020bbroadwell, skylake, gpu, epyciris, aionX.org macros utilities.
diff --git a/software/swsets/index.html b/software/swsets/index.html
new file mode 100644
index 00000000..70d50c15

    Supported Software Sets


    You can find here the list of the supported software modules that you can use on the ULHPC facility.
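To browse these software sets interactively from a login node, you can use the module command; the sketch below assumes the Lmod-based module system deployed on the ULHPC clusters:

```bash
# List the modules visible with the current software set
module avail

# Search across all software sets for a given package (Lmod)
module spider GROMACS

# Load a module using the <category>/<software>[/<version>] scheme
module load bio/GROMACS
```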

diff --git a/software/swsets/lang/index.html b/software/swsets/lang/index.html
new file mode 100644
index 00000000..60f387a2

    Programming Languages

Alphabetical list of available ULHPC software belonging to the 'lang' category.
To load a software of this category, use: module load lang/<software>[/<version>]
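For example, a sketch of loading the Python and R interpreters listed below (exact module names may carry toolchain suffixes; check module avail):

```bash
# Load Python 3.8.6 from the 'lang' category
module load lang/Python/3.8.6
python3 --version

# R is provided in the same category
module load lang/R/4.0.5
```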

Software | Versions | Swsets | Architectures | Clusters | Description
    Anaconda32020.02, 2020.112019b, 2020bbroadwell, skylake, epyciris, aionBuilt to complement the rich, open source Python community, the Anaconda platform provides an enterprise-ready data analytics platform that empowers companies to adopt a modern open data science analytics architecture.
    Bison3.3.2, 3.5.3, 3.7.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionBison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.
    FriBidi1.0.10, 1.0.52020b, 2019bbroadwell, epyc, skylake, gpuaion, irisThe Free Implementation of the Unicode Bidirectional Algorithm.
    Guile1.8.8, 2.2.42019bbroadwell, skylakeirisGuile is a programming language, designed to help programmers create flexible applications that can be extended by users or other programmers with plug-ins, modules, or scripts.
    Java1.8.0_241, 11.0.2, 13.0.2, 16.0.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionJava Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers.
    Julia1.4.1, 1.6.22019b, 2020bbroadwell, skylake, epyciris, aionJulia is a high-level, high-performance dynamic programming language for numerical computing
    Lua5.1.5, 5.4.22019b, 2020bbroadwell, skylake, epyciris, aionLua is a powerful, fast, lightweight, embeddable scripting language. Lua combines simple procedural syntax with powerful data description constructs based on associative arrays and extensible semantics. Lua is dynamically typed, runs by interpreting bytecode for a register-based virtual machine, and has automatic memory management with incremental garbage collection, making it ideal for configuration, scripting, and rapid prototyping.
    NASM2.14.02, 2.15.052019b, 2020bbroadwell, skylake, gpu, epyciris, aionNASM: General-purpose x86 assembler
    Perl5.30.0, 5.32.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionLarry Wall's Practical Extraction and Report Language This is a minimal build without any modules. Should only be used for build dependencies.
    Python2.7.16, 2.7.18, 3.7.4, 3.8.62019b, 2020bbroadwell, skylake, gpu, epyciris, aionPython is a programming language that lets you work more quickly and integrate your systems more effectively.
    R3.6.2, 4.0.52019b, 2020bbroadwell, skylake, gpu, epyciris, aionR is a free software environment for statistical computing and graphics.
    Ruby2.7.1, 2.7.22019b, 2020bbroadwell, skylake, epyciris, aionRuby is a dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write.
    Rust1.37.02019bbroadwell, skylakeirisRust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.
    SciPy-bundle2019.10, 2020.112019b, 2020bbroadwell, skylake, gpu, epyciris, aionBundle of Python packages for scientific software
    Tcl8.6.10, 8.6.92020b, 2019bbroadwell, epyc, skylake, gpuaion, irisTcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more.
    Tkinter3.7.4, 3.8.62019b, 2020bbroadwell, skylake, epyc, gpuiris, aionTkinter module, built with the Python buildsystem
    Yasm1.3.02019b, 2020bbroadwell, skylake, gpu, epyciris, aionYasm: Complete rewrite of the NASM assembler with BSD license
    flex2.6.42019b, 2020bbroadwell, skylake, gpu, epyciris, aionFlex (Fast Lexical Analyzer) is a tool for generating scanners. A scanner, sometimes called a tokenizer, is a program which recognizes lexical patterns in text.
    nodejs12.19.02020bbroadwell, epyc, skylake, gpuaion, irisNode.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
    numba0.52.02020bbroadwell, epyc, skylake, gpuaion, irisNumba is an Open Source NumPy-aware optimizing compiler for Python sponsored by Continuum Analytics, Inc. It uses the remarkable LLVM compiler infrastructure to compile Python syntax to machine code.
diff --git a/software/swsets/lib/index.html b/software/swsets/lib/index.html
new file mode 100644
index 00000000..bd20ebfc

    Libraries

Alphabetical list of available ULHPC software belonging to the 'lib' category.
To load a software of this category, use: module load lib/<software>[/<version>]
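As an illustration, a sketch of compiling a small program against the zlib module listed below. The EBROOTZLIB variable is an assumption based on the EasyBuild convention of exporting EBROOT<NAME> installation prefixes, and demo.c is a placeholder source file:

```bash
# Load zlib from the 'lib' category
module load lib/zlib/1.2.11

# EasyBuild modules typically export the installation prefix (assumption)
echo "$EBROOTZLIB"

# Compile a placeholder program against the loaded library
gcc -o demo demo.c -I"$EBROOTZLIB/include" -L"$EBROOTZLIB/lib" -lz
```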

Software | Versions | Swsets | Architectures | Clusters | Description
    ACTC1.12019b, 2020bbroadwell, skylake, epyciris, aionACTC converts independent triangles into triangle strips or fans.
    Boost.Python1.74.02020bbroadwell, epyc, skylakeaion, irisBoost.Python is a C++ library which enables seamless interoperability between C++ and the Python programming language.
    Check0.15.22020bgpuirisCheck is a unit testing framework for C. It features a simple interface for defining unit tests, putting little in the way of the developer. Tests are run in a separate address space, so both assertion failures and code errors that cause segmentation faults or other signals can be caught. Test results are reportable in the following: Subunit, TAP, XML, and a generic logging format.
    FLAC1.3.32020bbroadwell, epyc, skylake, gpuaion, irisFLAC stands for Free Lossless Audio Codec, an audio format similar to MP3, but lossless, meaning that audio is compressed in FLAC without any loss in quality.
    Flask1.1.22020bbroadwell, epyc, skylake, gpuaion, irisFlask is a lightweight WSGI web application framework. It is designed to make getting started quick and easy, with the ability to scale up to complex applications. This module includes the Flask extensions: Flask-Cors
    GDRCopy2.12020bgpuirisA low-latency GPU memory copy library based on NVIDIA GPUDirect RDMA technology.
    ICU64.2, 67.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionICU is a mature, widely used set of C/C++ and Java libraries providing Unicode and Globalization support for software applications.
    JsonCpp1.9.3, 1.9.42019b, 2020bbroadwell, skylake, gpu, epyciris, aionJsonCpp is a C++ library that allows manipulating JSON values, including serialization and deserialization to and from strings. It can also preserve existing comment in unserialization/serialization steps, making it a convenient format to store user input files.
    LMDB0.9.242019b, 2020bbroadwell, skylake, gpu, epyciris, aionLMDB is a fast, memory-efficient database. With memory-mapped files, it has the read performance of a pure in-memory database while retaining the persistence of standard disk-based databases.
    LibTIFF4.0.10, 4.1.02019b, 2020bbroadwell, skylake, gpu, epyciris, aiontiff: Library and tools for reading and writing TIFF data files
    NCCL2.4.8, 2.8.32019b, 2020bgpuirisThe NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs.
    NSPR4.21, 4.292019b, 2020bbroadwell, skylake, epyciris, aionNetscape Portable Runtime (NSPR) provides a platform-neutral API for system level and libc-like functions.
    NSS3.45, 3.572019b, 2020bbroadwell, skylake, epyciris, aionNetwork Security Services (NSS) is a set of libraries designed to support cross-platform development of security-enabled client and server applications.
    PROJ6.2.1, 7.2.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionProgram proj is a standard Unix filter function which converts geographic longitude and latitude coordinates into cartesian coordinates
    PyTorch-Geometric1.6.32020bbroadwell, epyc, skylake, gpuaion, irisPyTorch Geometric (PyG) is a geometric deep learning extension library for PyTorch.
    PyYAML5.1.2, 5.3.12019b, 2020bbroadwell, skylake, gpu, epyciris, aionPyYAML is a YAML parser and emitter for the Python programming language.
    RDFlib5.0.02020bbroadwell, epyc, skylake, gpuaion, irisRDFLib is a Python library for working with RDF, a simple yet powerful language for representing information.
    SDL22.0.142020bbroadwell, epyc, skylakeaion, irisSDL: Simple DirectMedia Layer, a cross-platform multimedia library
    SIONlib1.7.62019bbroadwell, skylakeirisSIONlib is a scalable I/O library for parallel access to task-local files. The library not only supports writing and reading binary data to or from several thousands of processors into a single or a small number of physical files, but also provides global open and close functions to access SIONlib files in parallel. This package provides a stripped-down installation of SIONlib for use with performance tools (e.g., Score-P), with renamed symbols to avoid conflicts when an application using SIONlib itself is linked against a tool requiring a different SIONlib version.
    TensorFlow1.15.5, 2.1.0, 2.4.1, 2.5.02019b, 2020bgpu, broadwell, skylake, epyciris, aionAn open-source software library for Machine Intelligence
    UCX1.9.02020bbroadwell, epyc, skylake, gpuaion, irisUnified Communication X An open-source production grade communication framework for data centric and high-performance applications
    Xerces-C++3.2.22019bbroadwell, skylakeirisXerces-C++ is a validating XML parser written in a portable subset of C++. Xerces-C++ makes it easy to give your application the ability to read and write XML data. A shared library is provided for parsing, generating, manipulating, and validating XML documents using the DOM, SAX, and SAX2 APIs.
    YACS0.1.82020bbroadwell, epyc, skylakeaion, irisYACS was created as a lightweight library to define and manage system configurations, such as those commonly found in software designed for scientific experimentation. These "configurations" typically cover concepts like hyperparameters used in training a machine learning model or configurable model hyperparameters, such as the depth of a convolutional neural network.
    double-conversion3.1.4, 3.1.52019b, 2020bbroadwell, skylake, gpu, epyciris, aionEfficient binary-decimal and decimal-binary conversion routines for IEEE doubles.
    elfutils0.1832020bgpuirisThe elfutils project provides libraries and tools for ELF files and DWARF data.
    gc7.6.122019bbroadwell, skylakeirisThe Boehm-Demers-Weiser conservative garbage collector can be used as a garbage collecting replacement for C malloc or C++ new.
    giflib5.2.12019b, 2020bbroadwell, skylake, gpu, epyciris, aiongiflib is a library for reading and writing gif images. It is API and ABI compatible with libungif which was in wide use while the LZW compression algorithm was patented.
    jemalloc5.2.12019bbroadwell, skylakeirisjemalloc is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
    libdrm2.4.102, 2.4.992020b, 2019bbroadwell, epyc, skylake, gpuaion, irisDirect Rendering Manager runtime library.
    libepoxy1.5.42019b, 2020bbroadwell, skylake, epyciris, aionEpoxy is a library for handling OpenGL function pointer management for you
    libevent2.1.11, 2.1.122019b, 2020bbroadwell, skylake, epyciris, aionThe libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also support callbacks due to signals or regular timeouts.
    libffi3.2.1, 3.32019b, 2020bbroadwell, skylake, gpu, epyciris, aionThe libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time.
    libgd2.2.5, 2.3.02019b, 2020bbroadwell, skylake, epyciris, aionGD is an open source code library for the dynamic creation of images by programmers.
| libgeotiff | 1.5.1, 1.6.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Library for reading and writing coordinate system information from/to GeoTIFF files |
| libglvnd | 1.2.0, 1.3.2 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | libglvnd is a vendor-neutral dispatch layer for arbitrating OpenGL API calls between multiple vendors. |
| libgpuarray | 0.7.6 | 2019b, 2020b | gpu | iris | Library to manipulate tensors on the GPU. |
| libiconv | 1.16 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libiconv converts from one character encoding to another through Unicode conversion |
| libjpeg-turbo | 2.0.3, 2.0.5 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | libjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD to accelerate baseline JPEG compression and decompression. libjpeg is a library that implements JPEG image encoding, decoding and transcoding. |
| libmatheval | 1.1.11 | 2019b | broadwell, skylake | iris | GNU libmatheval is a library (callable from C and Fortran) to parse and evaluate symbolic expressions input as text. |
| libogg | 1.3.4 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Ogg is a multimedia container format, and the native file and stream format for the Xiph.org multimedia codecs. |
| libpng | 1.6.37 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | libpng is the official PNG reference library |
| libreadline | 8.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | The GNU Readline library provides a set of functions for use by applications that allow users to edit command lines as they are typed in. Both Emacs and vi editing modes are available. The Readline library includes additional functions to maintain a list of previously-entered command lines, to recall and perhaps reedit those lines, and perform csh-like history expansion on previous commands. |
| libsndfile | 1.0.28 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Libsndfile is a C library for reading and writing files containing sampled sound (such as MS Windows WAV and the Apple/SGI AIFF format) through one standard library interface. |
| libtirpc | 1.3.1 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Libtirpc is a port of Suns Transport-Independent RPC library to Linux. |
| libtool | 2.4.6 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | GNU libtool is a generic library support script. Libtool hides the complexity of using shared libraries behind a consistent, portable interface. |
| libunistring | 0.9.10 | 2019b | broadwell, skylake | iris | This library provides functions for manipulating Unicode strings and for manipulating C strings according to the Unicode standard. |
| libunwind | 1.3.1, 1.4.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | The primary goal of libunwind is to define a portable and efficient C programming interface (API) to determine the call-chain of a program. The API additionally provides the means to manipulate the preserved (callee-saved) state of each call-frame and to resume execution at any point in the call-chain (non-local goto). The API supports both local (same-process) and remote (across-process) operation. As such, the API is useful in a number of applications |
| libvorbis | 1.3.7 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Ogg Vorbis is a fully open, non-proprietary, patent-and-royalty-free, general-purpose compressed audio format |
| libwebp | 1.1.0 | 2020b | broadwell, epyc, skylake | aion, iris | WebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster. |
| libxml2 | 2.9.10, 2.9.9 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Libxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform). |
| libxslt | 1.1.34 | 2019b | broadwell, skylake | iris | Libxslt is the XSLT C library developed for the GNOME project (but usable outside of the Gnome platform). |
| libyaml | 0.2.2, 0.2.5 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | LibYAML is a YAML parser and emitter written in C. |
| lxml | 4.4.2 | 2019b | broadwell, skylake | iris | The lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt. |
| lz4 | 1.9.2 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | LZ4 is lossless compression algorithm, providing compression speed at 400 MB/s per core. It features an extremely fast decoder, with speed in multiple GB/s per core. |
| nettle | 3.5.1, 3.6 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Nettle is a cryptographic library that is designed to fit easily in more or less any context: In crypto toolkits for object-oriented languages (C++, Python, Pike, ...), in applications like LSH or GNUPG, or even in kernel space. |
| phonopy | 2.2.0 | 2019b | broadwell, skylake | iris | Phonopy is an open source package of phonon calculations based on the supercell approach. |
| pocl | 1.4, 1.6 | 2019b, 2020b | gpu | iris | Pocl is a portable open source (MIT-licensed) implementation of the OpenCL standard |
| pybind11 | 2.4.3, 2.6.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | pybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code. |
| scikit-build | 0.11.1 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Scikit-Build, or skbuild, is an improved build system generator for CPython C/C++/Fortran/Cython extensions. |
| snappy | 1.1.7, 1.1.8 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. |
| tbb | 2019_U9, 2020.2, 2020.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Intel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability. |
| tqdm | 4.56.2 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | A fast, extensible progress bar for Python and CLI |
| zlib | 1.2.11 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | zlib is designed to be a free, general-purpose, legally unencumbered -- that is, not covered by any patents -- lossless data-compression library for use on virtually any computer hardware and operating system. |
| zstd | 1.4.5 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Zstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression/speed trade-off, while being backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression, and can create dictionaries from any sample set. |
diff --git a/software/swsets/math/index.html b/software/swsets/math/index.html
new file mode 100644
index 00000000..4ff5c8dd
--- /dev/null
+++ b/software/swsets/math/index.html

    Mathematics


Alphabetical list of available ULHPC software belonging to the 'math' category. To load software from this category, use: module load math/<software>[/<version>]
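For example, to see which builds of MATLAB are provided and load one of them, you could proceed as sketched below; the exact version strings reported by module spider on the cluster are authoritative and may differ from this sketch:

# List the available builds of a software from this category, e.g. MATLAB
$ module spider MATLAB

# Load the default version, or pin an explicit one
$ module load math/MATLAB
$ module load math/MATLAB/2021a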

| Software | Versions | Swsets | Architectures | Clusters | Description |
|---|---|---|---|---|---|
| CPLEX | 12.10 | 2019b | broadwell, skylake | iris | IBM ILOG CPLEX Optimizer's mathematical programming technology enables analytical decision support for improving efficiency, reducing costs, and increasing profitability. |
| Dakota | 6.11.0, 6.15.0 | 2019b, 2020b | broadwell, skylake | iris | The Dakota project delivers both state-of-the-art research and robust, usable software for optimization and UQ. Broadly, the Dakota software's advanced parametric analyses enable design exploration, model calibration, risk analysis, and quantification of margins and uncertainty with computational models. |
| ELPA | 2019.11.001, 2020.11.001 | 2019b, 2020b | broadwell, epyc, skylake | iris, aion | Eigenvalue SoLvers for Petaflop-Applications. |
| Eigen | 3.3.7, 3.3.8, 3.4.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms. |
| GEOS | 3.8.0, 3.9.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | GEOS (Geometry Engine - Open Source) is a C++ port of the Java Topology Suite (JTS) |
| GMP | 6.1.2, 6.2.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers. |
| Gurobi | 9.0.0, 9.1.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | The Gurobi Optimizer is a state-of-the-art solver for mathematical programming. The solvers in the Gurobi Optimizer were designed from the ground up to exploit modern architectures and multi-core processors, using the most advanced implementations of the latest algorithms. |
| Harminv | 1.4.1 | 2019b | broadwell, skylake | iris | Harminv is a free program (and accompanying library) to solve the problem of harmonic inversion - given a discrete-time, finite-length signal that consists of a sum of finitely-many sinusoids (possibly exponentially decaying) in a given bandwidth, it determines the frequencies, decay constants, amplitudes, and phases of those sinusoids. |
| ISL | 0.23 | 2020b | broadwell, epyc, skylake | aion, iris | isl is a library for manipulating sets and relations of integer points bounded by linear constraints. |
| Keras | 2.3.1, 2.4.3 | 2019b, 2020b | gpu, broadwell, epyc, skylake | iris, aion | Keras is a deep learning API written in Python, running on top of the machine learning platform TensorFlow. |
| MATLAB | 2019b, 2020a, 2021a | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | MATLAB is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran. |
| METIS | 5.1.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes. |
| MPC | 1.2.1 | 2020b | broadwell, epyc, skylake | aion, iris | Gnu Mpc is a C library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result. It extends the principles of the IEEE-754 standard for fixed precision real floating point numbers to complex numbers, providing well-defined semantics for every operation. At the same time, speed of operation at high precision is a major design goal. |
| MPFR | 4.0.2, 4.1.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. |
| MUMPS | 5.3.5 | 2020b | broadwell, epyc, skylake | aion, iris | A parallel sparse direct solver |
| Mathematica | 12.0.0, 12.1.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Mathematica is a computational software program used in many scientific, engineering, mathematical and computing fields. |
| Mesquite | 2.3.0 | 2019b | broadwell, skylake | iris | Mesh-Quality Improvement Library |
| ParMETIS | 4.0.3 | 2019b | broadwell, skylake | iris | ParMETIS is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs, meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes. |
| ParMGridGen | 1.0 | 2019b | broadwell, skylake | iris | ParMGridGen is an MPI-based parallel library that is based on the serial package MGridGen, that implements (serial) algorithms for obtaining a sequence of successive coarse grids that are well-suited for geometric multigrid methods. |
| SCOTCH | 6.0.9, 6.1.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Software package and libraries for sequential and parallel graph partitioning, static mapping, and sparse matrix block ordering, and sequential mesh and hypergraph partitioning. |
| Stata | 17 | 2020b | broadwell, epyc, skylake | aion, iris | Stata is a complete, integrated statistical software package that provides everything you need for data analysis, data management, and graphics. |
| Theano | 1.0.4, 1.1.2 | 2019b, 2020b | gpu, broadwell, epyc, skylake | iris, aion | Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. |
| Voro++ | 0.4.6 | 2019b | broadwell, skylake | iris | Voro++ is a software library for carrying out three-dimensional computations of the Voronoi tessellation. A distinguishing feature of the Voro++ library is that it carries out cell-based calculations, computing the Voronoi cell for each particle individually. It is particularly well-suited for applications that rely on cell-based statistics, where features of Voronoi cells (eg. volume, centroid, number of faces) can be used to analyze a system of particles. |
| gmsh | 4.8.4 | 2020b | broadwell, epyc, skylake | aion, iris | Gmsh is a 3D finite element grid generator with a build-in CAD engine and post-processor. |
| libcerf | 1.13, 1.14 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | libcerf is a self-contained numeric library that provides an efficient and accurate implementation of complex error functions, along with Dawson, Faddeeva, and Voigt functions. |
| magma | 2.5.1, 2.5.4 | 2019b, 2020b | gpu | iris | The MAGMA project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current Multicore+GPU systems. |
| molmod | 1.4.5 | 2019b | broadwell, skylake | iris | MolMod is a Python library with many compoments that are useful to write molecular modeling programs. |
| scipy | 1.4.1 | 2019b | broadwell, skylake, gpu | iris | SciPy is a collection of mathematical algorithms and convenience functions built on the Numpy extension for Python. |
diff --git a/software/swsets/mpi/index.html b/software/swsets/mpi/index.html
new file mode 100644
index 00000000..d136604e
--- /dev/null
+++ b/software/swsets/mpi/index.html

    MPI


Alphabetical list of available ULHPC software belonging to the 'mpi' category. To load software from this category, use: module load mpi/<software>[/<version>]

| Software | Versions | Swsets | Architectures | Clusters | Description |
|---|---|---|---|---|---|
| OpenMPI | 3.1.4, 4.0.5 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | The Open MPI Project is an open source MPI-3 implementation. |
| impi | 2018.5.288, 2019.9.304 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Intel MPI Library, compatible with MPICH ABI |
diff --git a/software/swsets/numlib/index.html b/software/swsets/numlib/index.html
new file mode 100644
index 00000000..409a72b0
--- /dev/null
+++ b/software/swsets/numlib/index.html

    Numerical libraries


Alphabetical list of available ULHPC software belonging to the 'numlib' category. To load software from this category, use: module load numlib/<software>[/<version>]

| Software | Versions | Swsets | Architectures | Clusters | Description |
|---|---|---|---|---|---|
| Armadillo | 10.5.3, 9.900.1 | 2020b, 2019b | broadwell, epyc, skylake | aion, iris | Armadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions. |
| CGAL | 4.14.1, 5.2 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | The goal of the CGAL Open Source Project is to provide easy access to efficient and reliable geometric algorithms in the form of a C++ library. |
| FFTW | 3.3.8 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data. |
| GSL | 2.6 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting. |
| Hypre | 2.20.0 | 2020b | broadwell, epyc, skylake | aion, iris | Hypre is a library for solving large, sparse linear systems of equations on massively parallel computers. The problems of interest arise in the simulation codes being developed at LLNL and elsewhere to study physical phenomena in the defense, environmental, energy, and biological sciences. |
| NLopt | 2.6.1, 2.6.2 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | NLopt is a free/open-source library for nonlinear optimization, providing a common interface for a number of different free optimization routines available online as well as original implementations of various other algorithms. |
| OpenBLAS | 0.3.12, 0.3.7 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version. |
| PETSc | 3.14.4 | 2020b | broadwell, epyc, skylake | aion, iris | PETSc, pronounced PET-see (the S is silent), is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. |
| SLEPc | 3.14.2 | 2020b | broadwell, epyc, skylake | aion, iris | SLEPc (Scalable Library for Eigenvalue Problem Computations) is a software library for the solution of large scale sparse eigenvalue problems on parallel computers. It is an extension of PETSc and can be used for either standard or generalized eigenproblems, with real or complex arithmetic. It can also be used for computing a partial SVD of a large, sparse, rectangular matrix, and to solve quadratic eigenvalue problems. |
| ScaLAPACK | 2.0.2, 2.1.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers. |
| SuiteSparse | 5.8.1 | 2020b | broadwell, epyc, skylake | aion, iris | SuiteSparse is a collection of libraries manipulate sparse matrices. |
| arpack-ng | 3.7.0, 3.8.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. |
| cuDNN | 7.6.4.38, 8.0.4.30, 8.0.5.39 | 2019b, 2020b | gpu | iris | The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. |
| imkl | 2019.5.281, 2020.4.304 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Intel Math Kernel Library is a library of highly optimized, extensively threaded math routines for science, engineering, and financial applications that require maximum performance. Core math functions include BLAS, LAPACK, ScaLAPACK, Sparse Solvers, Fast Fourier Transforms, Vector Math, and more. |
diff --git a/software/swsets/perf/index.html b/software/swsets/perf/index.html
new file mode 100644
index 00000000..67f16110
--- /dev/null
+++ b/software/swsets/perf/index.html

    Performance measurements


Alphabetical list of available ULHPC software belonging to the 'perf' category. To load software from this category, use: module load perf/<software>[/<version>]

| Software | Versions | Swsets | Architectures | Clusters | Description |
|---|---|---|---|---|---|
| Advisor | 2019_update5 | 2019b | broadwell, skylake | iris | Vectorization Optimization and Thread Prototyping - Vectorize & thread code or performance “dies” - Easy workflow + data + tips = faster code faster - Prioritize, Prototype & Predict performance gain |
| CubeGUI | 4.4.4 | 2019b | broadwell, skylake | iris | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube graphical report explorer. |
| CubeLib | 4.4.4 | 2019b | broadwell, skylake | iris | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube general purpose C++ library component and command-line tools. |
| CubeWriter | 4.4.3 | 2019b | broadwell, skylake | iris | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube high-performance C writer library component. |
| OPARI2 | 2.0.5 | 2019b | broadwell, skylake | iris | OPARI2, the successor of Forschungszentrum Juelich's OPARI, is a source-to-source instrumentation tool for OpenMP and hybrid codes. It surrounds OpenMP directives and runtime library calls with calls to the POMP2 measurement interface. |
| OTF2 | 2.2 | 2019b | broadwell, skylake | iris | The Open Trace Format 2 is a highly scalable, memory efficient event trace data format plus support library. It is the new standard trace format for Scalasca, Vampir, and TAU and is open for other tools. |
| PAPI | 6.0.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | PAPI provides the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. In addition Component PAPI provides access to a collection of components that expose performance measurement opportunites across the hardware and software stack. |
| PDT | 3.25 | 2019b | broadwell, skylake | iris | Program Database Toolkit (PDT) is a framework for analyzing source code written in several programming languages and for making rich program knowledge accessible to developers of static and dynamic analysis tools. PDT implements a standard program representation, the program database (PDB), that can be accessed in a uniform way through a class library supporting common PDB operations. |
| Scalasca | 2.5 | 2019b | broadwell, skylake | iris | Scalasca is a software tool that supports the performance optimization of parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks -- in particular those concerning communication and synchronization -- and offers guidance in exploring their causes. |
| Score-P | 6.0 | 2019b | broadwell, skylake | iris | The Score-P measurement infrastructure is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications. |
diff --git a/software/swsets/phys/index.html b/software/swsets/phys/index.html
new file mode 100644
index 00000000..0e2e2df0
--- /dev/null
+++ b/software/swsets/phys/index.html

    Physics


Alphabetical list of available ULHPC software belonging to the 'phys' category. To load software from this category, use: module load phys/<software>[/<version>]

| Software | Versions | Swsets | Architectures | Clusters | Description |
|---|---|---|---|---|---|
| Elk | 6.3.2, 7.0.12 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | An all-electron full-potential linearised augmented-plane wave (FP-LAPW) code with many advanced features. Written originally at Karl-Franzens-Universität Graz as a milestone of the EXCITING EU Research and Training Network, the code is designed to be as simple as possible so that new developments in the field of density functional theory (DFT) can be added quickly and reliably. |
| FDS | 6.7.1, 6.7.6 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Fire Dynamics Simulator (FDS) is a large-eddy simulation (LES) code for low-speed flows, with an emphasis on smoke and heat transport from fires. |
| Meep | 1.4.3 | 2019b | broadwell, skylake | iris | Meep (or MEEP) is a free finite-difference time-domain (FDTD) simulation software package developed at MIT to model electromagnetic systems. |
| UDUNITS | 2.2.26 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | UDUNITS supports conversion of unit specifications between formatted and binary forms, arithmetic manipulation of units, and conversion of values between compatible scales of measurement. |
| VASP | 5.4.4, 6.2.1 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | The Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles. |
diff --git a/software/swsets/system/index.html b/software/swsets/system/index.html
new file mode 100644
index 00000000..0f54e840
--- /dev/null
+++ b/software/swsets/system/index.html

    System-level software


Alphabetical list of available ULHPC software belonging to the 'system' category. To load software from this category, use: module load system/<software>[/<version>]

| Software | Versions | Swsets | Architectures | Clusters | Description |
|---|---|---|---|---|---|
| CUDA | 10.1.243, 11.1.1 | 2019b, 2020b | gpu | iris | CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs. |
| CUDAcore | 11.1.1 | 2020b | gpu | iris | CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs. |
| ULHPC-bd | 2020b | 2020b | broadwell, epyc, skylake | aion, iris | Generic Module bundle for BigData Analytics software in use on the UL HPC Facility |
| ULHPC-bio | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Generic Module bundle for Bioinformatics, biology and biomedical software in use on the UL HPC Facility, especially at LCSB |
| ULHPC-cs | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Generic Module bundle for Computational science software in use on the UL HPC Facility, including: - Computer Aided Engineering, incl. CFD - Chemistry, Computational Chemistry and Quantum Chemistry - Data management & processing tools - Earth Sciences - Quantum Computing - Physics and physical systems simulations |
| ULHPC-dl | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Generic Module bundle for (CPU-version) of AI / Deep Learning / Machine Learning software in use on the UL HPC Facility |
| ULHPC-gpu | 2019b, 2020b | 2019b, 2020b | gpu | iris | Generic Module bundle for GPU accelerated User Software in use on the UL HPC Facility |
| ULHPC-math | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Generic Module bundle for High-level mathematical software and Linear Algrebra libraries in use on the UL HPC Facility |
| ULHPC-toolchains | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Generic Module bundle that contains all the dependencies required to enable toolchains and building tools/programming language in use on the UL HPC Facility |
| ULHPC-tools | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Misc tools, incl. - perf: Performance tools - tools: General purpose tools |
| hwloc | 1.11.12, 2.2.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently. |
| libpciaccess | 0.14, 0.16 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Generic PCI access library. |
diff --git a/software/swsets/toolchain/index.html b/software/swsets/toolchain/index.html
new file mode 100644
index 00000000..f62f8e68
--- /dev/null
+++ b/software/swsets/toolchain/index.html

    Toolchains (software stacks)


Alphabetical list of available ULHPC software belonging to the 'toolchain' category. To load software from this category, use: module load toolchain/<software>[/<version>]
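As an illustration, the sketch below loads the 2020b release of the foss toolchain and checks that the GNU compilers and the Open MPI wrappers it provides are on the PATH (a minimal sketch; verify the exact module name with module spider foss on the cluster):

# Load the foss toolchain (GCC, OpenMPI, OpenBLAS, FFTW, ScaLAPACK)
$ module load toolchain/foss/2020b

# The compilers and MPI wrappers are now available
$ gcc --version
$ mpicc --version
$ mpirun --version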

| Software | Versions | Swsets | Architectures | Clusters | Description |
|---|---|---|---|---|---|
| foss | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. |
| fosscuda | 2019b, 2020b | 2019b, 2020b | gpu | iris | GCC based compiler toolchain with CUDA support, and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. |
| gcccuda | 2019b, 2020b | 2019b, 2020b | gpu | iris | GNU Compiler Collection (GCC) based compiler toolchain, along with CUDA toolkit. |
| gompi | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support. |
| gompic | 2019b, 2020b | 2019b, 2020b | gpu | iris | GNU Compiler Collection (GCC) based compiler toolchain along with CUDA toolkit, including OpenMPI for MPI support with CUDA features enabled. |
| iccifortcuda | 2019b, 2020b | 2019b, 2020b | gpu | iris | Intel C, C++ & Fortran compilers with CUDA toolkit |
| iimpi | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Intel C/C++ and Fortran compilers, alongside Intel MPI. |
| iimpic | 2019b, 2020b | 2019b, 2020b | gpu | iris | Intel C/C++ and Fortran compilers, alongside Intel MPI and CUDA. |
| intel | 2019b, 2020b | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Compiler toolchain including Intel compilers, Intel MPI and Intel Math Kernel Library (MKL). |
| intelcuda | 2019b, 2020b | 2019b, 2020b | gpu | iris | Intel Cluster Toolkit Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MPI & Intel MKL, with CUDA toolkit |
diff --git a/software/swsets/tools/index.html b/software/swsets/tools/index.html
new file mode 100644
index 00000000..37945d0f
--- /dev/null
+++ b/software/swsets/tools/index.html

    Utilities


Alphabetical list of available ULHPC software belonging to the 'tools' category. To load software from this category, use: module load tools/<software>[/<version>]
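For instance, EasyBuild, the software build and installation framework listed below, is itself provided as a module in this category; a quick sanity check after loading it might look like the following sketch (version strings may differ on the cluster):

# Load EasyBuild and check which version was picked up
$ module load tools/EasyBuild
$ eb --version

# Search the available easyconfigs for a given software
$ eb -S ParaView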

| Software | Versions | Swsets | Architectures | Clusters | Description |
|---|---|---|---|---|---|
| ANSYS | 19.4, 21.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | ANSYS simulation software enables organizations to confidently predict how their products will operate in the real world. We believe that every product is a promise of something greater. |
| ArmForge | 20.0.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | The industry standard development package for C, C++ and Fortran high performance code on Linux. Forge is designed to handle the complex software projects - including parallel, multiprocess and multithreaded code. Arm Forge combines an industry-leading debugger, Arm DDT, and an out-of-the-box-ready profiler, Arm MAP. |
| ArmReports | 20.0.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Arm Performance Reports - a low-overhead tool that produces one-page text and HTML reports summarizing and characterizing both scalar and MPI application performance. Arm Performance Reports runs transparently on optimized production-ready codes by adding a single command to your scripts, and provides the most effective way to characterize and understand the performance of HPC application runs. |
| Aspera-CLI | 3.9.1, 3.9.6 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | IBM Aspera Command-Line Interface (the Aspera CLI) is a collection of Aspera tools for performing high-speed, secure data transfers from the command line. The Aspera CLI is for users and organizations who want to automate their transfer workflows. |
| DB | 18.1.32, 18.1.40 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | Berkeley DB enables the development of custom data management solutions, without the overhead traditionally associated with such custom projects. |
| DMTCP | 2.5.2 | 2019b | broadwell, skylake | iris | DMTCP is a tool to transparently checkpoint the state of multiple simultaneous applications, including multi-threaded and distributed applications. It operates directly on the user binary executable, without any Linux kernel modules or other kernel modifications. |
| EasyBuild | 4.3.0, 4.3.3, 4.4.1, 4.4.2, 4.5.4 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | EasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way. |
| GLPK | 4.65 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | The GLPK (GNU Linear Programming Kit) package is intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library. |
| Ghostscript | 9.50, 9.53.3 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Ghostscript is a versatile processor for PostScript data with the ability to render PostScript to different targets. It used to be part of the cups printing stack, but is no longer used for that. |
| Hadoop | 2.10.0 | 2020b | broadwell, epyc, skylake | aion, iris | Hadoop MapReduce by Cloudera |
| Horovod | 0.19.1, 0.22.0 | 2019b, 2020b | broadwell, skylake, gpu | iris | Horovod is a distributed training framework for TensorFlow. |
| Inspector | 2019_update5 | 2019b | broadwell, skylake | iris | Intel Inspector XE is an easy to use memory error checker and thread checker for serial and parallel applications |
| Meson | 0.51.2, 0.55.3 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Meson is a cross-platform build system designed to be both as fast and as user friendly as possible. |
| Ninja | 1.10.1, 1.9.0 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Ninja is a small build system with a focus on speed. |
| Singularity | 3.6.0, 3.8.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | SingularityCE is an open source container platform designed to be simple, fast, and secure. Singularity is optimized for EPC and HPC workloads, allowing untrusted users to run untrusted containers in a trusted way. |
| Sumo | 1.3.1 | 2019b | broadwell, skylake | iris | Sumo is an open source, highly portable, microscopic and continuous traffic simulation package designed to handle large road networks. |
| Szip | 2.1.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Szip compression software, providing lossless compression of scientific data |
| UnZip | 6.0 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | UnZip is an extraction utility for archives compressed in .zip format (also called "zipfiles"). Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own Zip program, our primary objectives have been portability and non-MSDOS functionality. |
| VTune | 2019_update8, 2020_update3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Intel VTune Amplifier XE is the premier performance profiler for C, C++, C#, Fortran, Assembly and Java. |
| XZ | 5.2.4, 5.2.5 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | xz: XZ utilities |
| Z3 | 4.8.10 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Z3 is a theorem prover from Microsoft Research. |
| Zip | 3.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Zip is a compression and file packaging/archive utility. Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own UnZip, our primary objectives have been portability and other-than-MSDOS functionality |
| archspec | 0.1.0 | 2019b | broadwell, skylake | iris | A library for detecting, labeling, and reasoning about microarchitectures |
| binutils | 2.32, 2.35 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | binutils: GNU binary utilities |
| bokeh | 2.2.3 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Statistical and novel interactive HTML plots for Python |
| bzip2 | 1.0.8 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | bzip2 is a freely available, patent free, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), whilst being around twice as fast at compression and six times faster at decompression. |
| cURL | 7.66.0, 7.72.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | libcurl is a free and easy-to-use client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. libcurl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, Kerberos), file transfer resume, http proxy tunneling and more. |
| expat | 2.2.7, 2.2.9 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Expat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags) |
| gettext | 0.19.8.1, 0.20.1, 0.21 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | GNU 'gettext' is an important step for the GNU Translation Project, as it is an asset on which we may build many other steps. This package offers to programmers, translators, and even users, a well integrated set of tools and documentation |
| git | 2.23.0, 2.28.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. |
| gocryptfs | 1.7.1, 2.0.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Encrypted overlay filesystem written in Go. gocryptfs uses file-based encryption that is implemented as a mountable FUSE filesystem. Each file in gocryptfs is stored as one corresponding encrypted file on the hard disk. |
| groff | 1.22.4 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Groff (GNU troff) is a typesetting system that reads plain text mixed with formatting commands and produces formatted output. |
| gzip | 1.10 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | gzip (GNU zip) is a popular data compression program as a replacement for compress |
| help2man | 1.47.16, 1.47.4, 1.47.8 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | help2man produces simple manual pages from the '--help' and '--version' output of other commands. |
| hypothesis | 4.44.2, 5.41.2, 5.41.5 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Hypothesis is an advanced testing library for Python. It lets you write tests which are parametrized by a source of examples, and then generates simple and comprehensible examples that make your tests fail. This lets you find more bugs in your code with less work. |
| itac | 2019.4.036 | 2019b | broadwell, skylake | iris | The Intel Trace Collector is a low-overhead tracing library that performs event-based tracing in applications. The Intel Trace Analyzer provides a convenient way to monitor application activities gathered by the Intel Trace Collector through graphical displays. |
| libarchive | 3.4.3 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Multi-format archive and compression library |
| networkx | 2.5 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. |
| numactl | 2.0.12, 2.0.13 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | The numactl program allows you to run your application program on specific cpu's and memory nodes. It does this by supplying a NUMA memory policy to the operating system before running your program. The libnuma library provides convenient ways for you to add NUMA memory policies into your own program. |
| re2c | 1.2.1, 2.0.3 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | re2c is a free and open-source lexer generator for C and C++. Its main goal is generating fast lexers: at least as fast as their reasonably optimized hand-coded counterparts. Instead of using traditional table-driven approach, re2c encodes the generated finite state automata directly in the form of conditional jumps and comparisons. |
| util-linux | 2.34, 2.36 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Set of Linux utilities |
diff --git a/software/swsets/vis/index.html b/software/swsets/vis/index.html
new file mode 100644
index 00000000..f8b1c41f
--- /dev/null
+++ b/software/swsets/vis/index.html

    Visualisation


Alphabetical list of available ULHPC software belonging to the 'vis' category. To load software from this category, use: module load vis/<software>[/<version>]

| Software | Versions | Swsets | Architectures | Clusters | Description |
|---|---|---|---|---|---|
| ATK | 2.34.1, 2.36.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | ATK provides the set of accessibility interfaces that are implemented by other toolkits and applications. Using the ATK interfaces, accessibility tools have full access to view and control running applications. |
| FFmpeg | 4.2.1, 4.3.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | A complete, cross-platform solution to record, convert and stream audio and video. |
| FLTK | 1.3.5 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | FLTK is a cross-platform C++ GUI toolkit for UNIX/Linux (X11), Microsoft Windows, and MacOS X. FLTK provides modern GUI functionality without the bloat and supports 3D graphics via OpenGL and its built-in GLUT emulation. |
| FreeImage | 3.18.0 | 2020b | broadwell, epyc, skylake | aion, iris | FreeImage is an Open Source library project for developers who would like to support popular graphics image formats like PNG, BMP, JPEG, TIFF and others as needed by today's multimedia applications. FreeImage is easy to use, fast, multithreading safe. |
| GLib | 2.62.0, 2.66.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | GLib is one of the base libraries of the GTK+ project |
| GTK+ | 3.24.13, 3.24.23 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | GTK+ is the primary library used to construct user interfaces in GNOME. It provides all the user interface controls, or widgets, used in a common graphical application. Its object-oriented API allows you to construct user interfaces without dealing with the low-level details of drawing and device interaction. |
| Gdk-Pixbuf | 2.38.2, 2.40.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | The Gdk Pixbuf is a toolkit for image loading and pixel buffer manipulation. It is used by GTK+ 2 and GTK+ 3 to load and manipulate images. In the past it was distributed as part of GTK+ 2 but it was split off into a separate package in preparation for the change to GTK+ 3. |
| HarfBuzz | 2.6.4, 2.6.7 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | HarfBuzz is an OpenType text shaping engine. |
| ImageMagick | 7.0.10-35, 7.0.9-5 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | ImageMagick is a software suite to create, edit, compose, or convert bitmap images |
| JasPer | 2.0.14, 2.0.24 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | The JasPer Project is an open-source initiative to provide a free software-based reference implementation of the codec specified in the JPEG-2000 Part-1 standard. |
| LittleCMS | 2.11, 2.9 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Little CMS intends to be an OPEN SOURCE small-footprint color management engine, with special focus on accuracy and performance. |
| Mesa | 19.1.7, 19.2.1, 20.2.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics. |
| OpenCV | 4.2.0, 4.5.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Includes extra modules for OpenCV from the contrib repository. |
| OpenEXR | 2.5.5 | 2020b | broadwell, epyc, skylake | aion, iris | OpenEXR is a high dynamic-range (HDR) image file format developed by Industrial Light & Magic for use in computer imaging applications |
| POV-Ray | 3.7.0.8 | 2020b | broadwell, epyc, skylake | aion, iris | The Persistence of Vision Raytracer, or POV-Ray, is a ray tracing program which generates images from a text-based scene description, and is available for a variety of computer platforms. POV-Ray is a high-quality, Free Software tool for creating stunning three-dimensional graphics. The source code is available for those wanting to do their own ports. |
| Pango | 1.44.7, 1.47.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Pango is a library for laying out and rendering of text, with an emphasis on internationalization. Pango can be used anywhere that text layout is needed, though most of the work on Pango so far has been done in the context of the GTK+ widget toolkit. Pango forms the core of text and font handling for GTK+-2.x. |
| ParaView | 5.6.2, 5.8.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | ParaView is a scientific parallel visualizer. |
| Pillow | 6.2.1, 8.0.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Pillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors. |
| PyOpenGL | 3.1.5 | 2020b | broadwell, epyc, skylake | aion, iris | PyOpenGL is the most common cross platform Python binding to OpenGL and related APIs. |
| PyQt5 | 5.15.1 | 2020b | broadwell, epyc, skylake | aion, iris | PyQt5 is a set of Python bindings for v5 of the Qt application framework from The Qt Company. This bundle includes PyQtWebEngine, a set of Python bindings for The Qt Company’s Qt WebEngine framework. |
| PyQtGraph | 0.11.1 | 2020b | broadwell, epyc, skylake | aion, iris | PyQtGraph is a pure-python graphics and GUI library built on PyQt5/PySide2 and numpy. |
| Tk | 8.6.10, 8.6.9 | 2020b, 2019b | broadwell, epyc, skylake, gpu | aion, iris | Tk is an open source, cross-platform widget toolchain that provides a library of basic elements for building a graphical user interface (GUI) in many different programming languages. |
| VMD | 1.9.4a51 | 2020b | broadwell, epyc, skylake | aion, iris | VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting. |
| VTK | 8.2.0, 9.0.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | The Visualization Toolkit (VTK) is an open-source, freely available software system for 3D computer graphics, image processing and visualization. VTK consists of a C++ class library and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK supports a wide variety of visualization algorithms including: scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as: implicit modeling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation. |
| VirtualGL | 2.6.2 | 2019b | broadwell, skylake | iris | VirtualGL is an open source toolkit that gives any Linux or Unix remote display software the ability to run OpenGL applications with full hardware acceleration. |
| X11 | 20190717, 20201008 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | The X Window System (X11) is a windowing system for bitmap displays |
| Xvfb | 1.20.9 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | Xvfb is an X server that can run on machines with no display hardware and no physical input devices. It emulates a dumb framebuffer using virtual memory. |
| at-spi2-atk | 2.34.1, 2.38.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | AT-SPI 2 toolkit bridge |
| at-spi2-core | 2.34.0, 2.38.0 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Assistive Technology Service Provider Interface. |
| cairo | 1.16.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Cairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System (via both Xlib and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output. Experimental backends include OpenGL, BeOS, OS/2, and DirectFB |
| fontconfig | 2.13.1, 2.13.92 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Fontconfig is a library designed to provide system-wide font configuration, customization and application access. |
| freetype | 2.10.1, 2.10.3 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | FreeType 2 is a software font engine that is designed to be small, efficient, highly customizable, and portable while capable of producing high-quality output (glyph images). It can be used in graphics libraries, display servers, font conversion tools, text image generation tools, and many other products as well. |
| gnuplot | 5.2.8, 5.4.1 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | Portable interactive, function plotting utility |
| libGLU | 9.0.1 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | The OpenGL Utility Library (GLU) is a computer graphics library for OpenGL. |
| matplotlib | 3.1.1, 3.3.3 | 2019b, 2020b | broadwell, skylake, epyc, gpu | iris, aion | matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell, web application servers, and six graphical user interface toolkits. |
| pixman | 0.38.4, 0.40.0 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | Pixman is a low-level software library for pixel manipulation, providing features such as image compositing and trapezoid rasterization. Important users of pixman are the cairo graphics library and the X server. |
| scikit-image | 0.18.1 | 2020b | broadwell, epyc, skylake, gpu | aion, iris | scikit-image is a collection of algorithms for image processing. |
| x264 | 20190925, 20201026 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | x264 is a free software library and application for encoding video streams into the H.264/MPEG-4 AVC compression format, and is released under the terms of the GNU GPL. |
| x265 | 3.2, 3.3 | 2019b, 2020b | broadwell, skylake, gpu, epyc | iris, aion | x265 is a free software library and application for encoding video streams into the H.265 AVC compression format, and is released under the terms of the GNU GPL. |
| xprop | 1.2.4, 1.2.5 | 2019b, 2020b | broadwell, skylake, epyc | iris, aion | The xprop utility is for displaying window and font properties in an X server. One window or font is selected using the command line arguments or possibly in the case of a window, by clicking on the desired window. A list of properties is then given, possibly with formatting information. |
diff --git a/software/visu/paraview/index.html b/software/visu/paraview/index.html
new file mode 100644
index 00000000..38ac2601
--- /dev/null
+++ b/software/visu/paraview/index.html

    ParaView



ParaView is an open-source, multi-platform data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques. The data exploration can be done interactively in 3D or programmatically using ParaView's batch processing capabilities.


ParaView was developed to analyse extremely large datasets using distributed-memory computing resources. It can be run on supercomputers to analyse petascale datasets as well as on laptops for smaller data. It has become an integral tool in many national laboratories, universities and industry, and has won several awards related to high-performance computing.


ParaView is an open-source, interactive, scalable data analysis and scientific visualization tool. It can be used to visualize simulation data or to process data either through the GUI or non-interactively via Python scripting. For large datasets, the non-interactive (Python scripting) mode is much faster than the interactive mode, both in ParaView and in VisIt.


    Available versions of ParaView in ULHPC


To check the available versions of ParaView on ULHPC, type module spider paraview. The following list shows the versions of ParaView currently available on ULHPC:

vis/ParaView/5.5.0-intel-2018a-mpi
vis/ParaView/5.6.2-foss-2019a-mpi
vis/ParaView/5.6.2-intel-2019a-mpi

    +

    Interactive mode

    +

To open ParaView in interactive mode, please follow these steps:

    +
    # From your local computer
    +$ ssh -X iris-cluster
    +
    +# Reserve the node for interactive computation
    +$ salloc -p interactive --time=00:30:00 --ntasks 1 -c 4 --x11  # OR si --x11 [...]
    +
+# Load the ParaView module and needed environment
    +$ module purge 
    +$ module load swenv/default-env/latest
    +$ module load vis/ParaView/5.6.2-intel-2019a-mpi
    +
    +$ paraview &
    +
    + +

    Batch mode

    +
    #!/bin/bash -l
    +#SBATCH -J ParaView
    +###SBATCH -A <project name>
    +#SBATCH -N 2
    +#SBATCH --ntasks-per-node=28
    +#SBATCH --time=00:30:00
    +#SBATCH -p batch
    +
+# Load the ParaView module and needed environment
    +module purge 
    +module load swenv/default-env/latest
    +module load vis/ParaView/5.6.2-intel-2019a-mpi
    +
    +srun -n ${SLURM_NTASKS} pvbatch python-script.py
    +
    + +
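The batch script above refers to a ParaView Python script named python-script.py, which is not part of this documentation. The following is a minimal illustrative sketch only, assuming you simply want pvbatch to render something off-screen and save an image: it uses ParaView's paraview.simple module, and the built-in Sphere source and the sphere.png output name are placeholders for your own data pipeline.

from paraview.simple import *

# Build a trivial pipeline: a sphere source shown with default display settings.
sphere = Sphere(ThetaResolution=32, PhiResolution=32)
Show(sphere)
Render()

# Write the rendered view to an image file; pvbatch renders off-screen on the compute nodes.
SaveScreenshot('sphere.png')

In general, when pvbatch is launched through srun with several MPI tasks, as in the batch script above, ParaView partitions the data processing across the ranks without any change to the Python script itself.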

    Additional information

    +

ParaView's User Manual has detailed instructions about visualizing and processing data in ParaView. There are two ways of obtaining or writing a Python script for ParaView:

    +
      +
1. Reading ParaView's Python scripting wiki and the ParaView Python Scripting Manual.
2. Recording the commands performed in the ParaView GUI; these commands can later be put into a Python script and run through ParaView's Python scripting.
    +
    +

    Tip

    +

If you find any issues with the instructions above, please report them to us by opening a support ticket.

    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/stylesheets/bootstrap-4.5.0.min.css b/stylesheets/bootstrap-4.5.0.min.css new file mode 100644 index 00000000..7d2a868f --- /dev/null +++ b/stylesheets/bootstrap-4.5.0.min.css @@ -0,0 +1,7 @@ +/*! + * Bootstrap v4.5.0 (https://getbootstrap.com/) + * Copyright 2011-2020 The Bootstrap Authors + * Copyright 2011-2020 Twitter, Inc. + * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) + */:root{--blue:#007bff;--indigo:#6610f2;--purple:#6f42c1;--pink:#e83e8c;--red:#dc3545;--orange:#fd7e14;--yellow:#ffc107;--green:#28a745;--teal:#20c997;--cyan:#17a2b8;--white:#fff;--gray:#6c757d;--gray-dark:#343a40;--primary:#007bff;--secondary:#6c757d;--success:#28a745;--info:#17a2b8;--warning:#ffc107;--danger:#dc3545;--light:#f8f9fa;--dark:#343a40;--breakpoint-xs:0;--breakpoint-sm:576px;--breakpoint-md:768px;--breakpoint-lg:992px;--breakpoint-xl:1200px;--font-family-sans-serif:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";--font-family-monospace:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace}*,::after,::before{box-sizing:border-box}html{font-family:sans-serif;line-height:1.15;-webkit-text-size-adjust:100%;-webkit-tap-highlight-color:transparent}article,aside,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}body{margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-size:1rem;font-weight:400;line-height:1.5;color:#212529;text-align:left;background-color:#fff}[tabindex="-1"]:focus:not(:focus-visible){outline:0!important}hr{box-sizing:content-box;height:0;overflow:visible}h1,h2,h3,h4,h5,h6{margin-top:0;margin-bottom:.5rem}p{margin-top:0;margin-bottom:1rem}abbr[data-original-title],abbr[title]{text-decoration:underline;-webkit-text-decoration:underline dotted;text-decoration:underline dotted;cursor:help;border-bottom:0;-webkit-text-decoration-skip-ink:none;text-decoration-skip-ink:none}address{margin-bottom:1rem;font-style:normal;line-height:inherit}dl,ol,ul{margin-top:0;margin-bottom:1rem}ol ol,ol ul,ul ol,ul ul{margin-bottom:0}dt{font-weight:700}dd{margin-bottom:.5rem;margin-left:0}blockquote{margin:0 0 1rem}b,strong{font-weight:bolder}small{font-size:80%}sub,sup{position:relative;font-size:75%;line-height:0;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}a{color:#007bff;text-decoration:none;background-color:transparent}a:hover{color:#0056b3;text-decoration:underline}a:not([href]){color:inherit;text-decoration:none}a:not([href]):hover{color:inherit;text-decoration:none}code,kbd,pre,samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;font-size:1em}pre{margin-top:0;margin-bottom:1rem;overflow:auto;-ms-overflow-style:scrollbar}figure{margin:0 0 1rem}img{vertical-align:middle;border-style:none}svg{overflow:hidden;vertical-align:middle}table{border-collapse:collapse}caption{padding-top:.75rem;padding-bottom:.75rem;color:#6c757d;text-align:left;caption-side:bottom}th{text-align:inherit}label{display:inline-block;margin-bottom:.5rem}button{border-radius:0}button:focus{outline:1px dotted;outline:5px auto 
-webkit-focus-ring-color}button,input,optgroup,select,textarea{margin:0;font-family:inherit;font-size:inherit;line-height:inherit}button,input{overflow:visible}button,select{text-transform:none}[role=button]{cursor:pointer}select{word-wrap:normal}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button}[type=button]:not(:disabled),[type=reset]:not(:disabled),[type=submit]:not(:disabled),button:not(:disabled){cursor:pointer}[type=button]::-moz-focus-inner,[type=reset]::-moz-focus-inner,[type=submit]::-moz-focus-inner,button::-moz-focus-inner{padding:0;border-style:none}input[type=checkbox],input[type=radio]{box-sizing:border-box;padding:0}textarea{overflow:auto;resize:vertical}fieldset{min-width:0;padding:0;margin:0;border:0}legend{display:block;width:100%;max-width:100%;padding:0;margin-bottom:.5rem;font-size:1.5rem;line-height:inherit;color:inherit;white-space:normal}progress{vertical-align:baseline}[type=number]::-webkit-inner-spin-button,[type=number]::-webkit-outer-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:none}[type=search]::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}output{display:inline-block}summary{display:list-item;cursor:pointer}template{display:none}[hidden]{display:none!important}.h1,.h2,.h3,.h4,.h5,.h6,h1,h2,h3,h4,h5,h6{margin-bottom:.5rem;font-weight:500;line-height:1.2}.h1,h1{font-size:2.5rem}.h2,h2{font-size:2rem}.h3,h3{font-size:1.75rem}.h4,h4{font-size:1.5rem}.h5,h5{font-size:1.25rem}.h6,h6{font-size:1rem}.lead{font-size:1.25rem;font-weight:300}.display-1{font-size:6rem;font-weight:300;line-height:1.2}.display-2{font-size:5.5rem;font-weight:300;line-height:1.2}.display-3{font-size:4.5rem;font-weight:300;line-height:1.2}.display-4{font-size:3.5rem;font-weight:300;line-height:1.2}hr{margin-top:1rem;margin-bottom:1rem;border:0;border-top:1px solid rgba(0,0,0,.1)}.small,small{font-size:80%;font-weight:400}.mark,mark{padding:.2em;background-color:#fcf8e3}.list-unstyled{padding-left:0;list-style:none}.list-inline{padding-left:0;list-style:none}.list-inline-item{display:inline-block}.list-inline-item:not(:last-child){margin-right:.5rem}.initialism{font-size:90%;text-transform:uppercase}.blockquote{margin-bottom:1rem;font-size:1.25rem}.blockquote-footer{display:block;font-size:80%;color:#6c757d}.blockquote-footer::before{content:"\2014\00A0"}.img-fluid{max-width:100%;height:auto}.img-thumbnail{padding:.25rem;background-color:#fff;border:1px solid #dee2e6;border-radius:.25rem;max-width:100%;height:auto}.figure{display:inline-block}.figure-img{margin-bottom:.5rem;line-height:1}.figure-caption{font-size:90%;color:#6c757d}code{font-size:87.5%;color:#e83e8c;word-wrap:break-word}a>code{color:inherit}kbd{padding:.2rem .4rem;font-size:87.5%;color:#fff;background-color:#212529;border-radius:.2rem}kbd kbd{padding:0;font-size:100%;font-weight:700}pre{display:block;font-size:87.5%;color:#212529}pre code{font-size:inherit;color:inherit;word-break:normal}.pre-scrollable{max-height:340px;overflow-y:scroll}.container{width:100%;padding-right:15px;padding-left:15px;margin-right:auto;margin-left:auto}@media (min-width:576px){.container{max-width:540px}}@media (min-width:768px){.container{max-width:720px}}@media (min-width:992px){.container{max-width:960px}}@media (min-width:1200px){.container{max-width:1140px}}.container-fluid,.container-lg,.container-md,.container-sm,.container-xl{width:100%;padding-right:15px;padding-left:15px;margin-right:auto;margin-left:auto}@media 
(min-width:576px){.container,.container-sm{max-width:540px}}@media (min-width:768px){.container,.container-md,.container-sm{max-width:720px}}@media (min-width:992px){.container,.container-lg,.container-md,.container-sm{max-width:960px}}@media (min-width:1200px){.container,.container-lg,.container-md,.container-sm,.container-xl{max-width:1140px}}.row{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;margin-right:-15px;margin-left:-15px}.no-gutters{margin-right:0;margin-left:0}.no-gutters>.col,.no-gutters>[class*=col-]{padding-right:0;padding-left:0}.col,.col-1,.col-10,.col-11,.col-12,.col-2,.col-3,.col-4,.col-5,.col-6,.col-7,.col-8,.col-9,.col-auto,.col-lg,.col-lg-1,.col-lg-10,.col-lg-11,.col-lg-12,.col-lg-2,.col-lg-3,.col-lg-4,.col-lg-5,.col-lg-6,.col-lg-7,.col-lg-8,.col-lg-9,.col-lg-auto,.col-md,.col-md-1,.col-md-10,.col-md-11,.col-md-12,.col-md-2,.col-md-3,.col-md-4,.col-md-5,.col-md-6,.col-md-7,.col-md-8,.col-md-9,.col-md-auto,.col-sm,.col-sm-1,.col-sm-10,.col-sm-11,.col-sm-12,.col-sm-2,.col-sm-3,.col-sm-4,.col-sm-5,.col-sm-6,.col-sm-7,.col-sm-8,.col-sm-9,.col-sm-auto,.col-xl,.col-xl-1,.col-xl-10,.col-xl-11,.col-xl-12,.col-xl-2,.col-xl-3,.col-xl-4,.col-xl-5,.col-xl-6,.col-xl-7,.col-xl-8,.col-xl-9,.col-xl-auto{position:relative;width:100%;padding-right:15px;padding-left:15px}.col{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;min-width:0;max-width:100%}.row-cols-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-first{-ms-flex-order:-1;order:-1}.order-last{-ms-flex-order:13;order:13}.order-0{-ms-flex-order:0;order:0}.order-1{-ms-flex-order:1;order:1}.order-2{-ms-flex-order:2;order:2}.order-3{-ms-flex-order:3;order:3}.order-4{-ms-flex-order:4;order:4}.order-5{-ms-flex-order:5;order:5}.order-6{-ms-flex-order:6;order:6}.order-7{-ms-flex-order:7;order:7}.order-8{-ms-flex-order:8;order:8}.order-9{-ms-flex-order:9;order:9}.order-10{-ms-flex-order:10;order:10}.order-11{-ms-flex-order:11;order:11}.order-12{-ms-flex-order:12;order:12}.offset-1{margin-left:8.333333%}.offset-2{margin-left:16.666667%}.offset-3{margin-left:25%}.offset-4{margin-left:33.333333%}.offset-5{margin-left:41.666667%}.offset-6{margin-left:50%}.offset-7{margin-left:58.333333%}.offset-8{margin-left:66.666667%}.offset-9{margin-left:75%}.offset-10{margin-left:83.333333%}.offset-11{margin-left:91.666667%}@media 
(min-width:576px){.col-sm{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;min-width:0;max-width:100%}.row-cols-sm-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-sm-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-sm-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-sm-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-sm-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-sm-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-sm-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-sm-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-sm-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-sm-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-sm-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-sm-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-sm-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-sm-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-sm-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-sm-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-sm-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-sm-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-sm-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-sm-first{-ms-flex-order:-1;order:-1}.order-sm-last{-ms-flex-order:13;order:13}.order-sm-0{-ms-flex-order:0;order:0}.order-sm-1{-ms-flex-order:1;order:1}.order-sm-2{-ms-flex-order:2;order:2}.order-sm-3{-ms-flex-order:3;order:3}.order-sm-4{-ms-flex-order:4;order:4}.order-sm-5{-ms-flex-order:5;order:5}.order-sm-6{-ms-flex-order:6;order:6}.order-sm-7{-ms-flex-order:7;order:7}.order-sm-8{-ms-flex-order:8;order:8}.order-sm-9{-ms-flex-order:9;order:9}.order-sm-10{-ms-flex-order:10;order:10}.order-sm-11{-ms-flex-order:11;order:11}.order-sm-12{-ms-flex-order:12;order:12}.offset-sm-0{margin-left:0}.offset-sm-1{margin-left:8.333333%}.offset-sm-2{margin-left:16.666667%}.offset-sm-3{margin-left:25%}.offset-sm-4{margin-left:33.333333%}.offset-sm-5{margin-left:41.666667%}.offset-sm-6{margin-left:50%}.offset-sm-7{margin-left:58.333333%}.offset-sm-8{margin-left:66.666667%}.offset-sm-9{margin-left:75%}.offset-sm-10{margin-left:83.333333%}.offset-sm-11{margin-left:91.666667%}}@media (min-width:768px){.col-md{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;min-width:0;max-width:100%}.row-cols-md-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-md-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-md-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-md-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-md-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-md-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-md-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-md-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-md-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-md-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-md-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-md-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-md-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-md-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-md-8{-ms-flex:0 0 66.666667%;flex:0 0 
66.666667%;max-width:66.666667%}.col-md-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-md-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-md-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-md-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-md-first{-ms-flex-order:-1;order:-1}.order-md-last{-ms-flex-order:13;order:13}.order-md-0{-ms-flex-order:0;order:0}.order-md-1{-ms-flex-order:1;order:1}.order-md-2{-ms-flex-order:2;order:2}.order-md-3{-ms-flex-order:3;order:3}.order-md-4{-ms-flex-order:4;order:4}.order-md-5{-ms-flex-order:5;order:5}.order-md-6{-ms-flex-order:6;order:6}.order-md-7{-ms-flex-order:7;order:7}.order-md-8{-ms-flex-order:8;order:8}.order-md-9{-ms-flex-order:9;order:9}.order-md-10{-ms-flex-order:10;order:10}.order-md-11{-ms-flex-order:11;order:11}.order-md-12{-ms-flex-order:12;order:12}.offset-md-0{margin-left:0}.offset-md-1{margin-left:8.333333%}.offset-md-2{margin-left:16.666667%}.offset-md-3{margin-left:25%}.offset-md-4{margin-left:33.333333%}.offset-md-5{margin-left:41.666667%}.offset-md-6{margin-left:50%}.offset-md-7{margin-left:58.333333%}.offset-md-8{margin-left:66.666667%}.offset-md-9{margin-left:75%}.offset-md-10{margin-left:83.333333%}.offset-md-11{margin-left:91.666667%}}@media (min-width:992px){.col-lg{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;min-width:0;max-width:100%}.row-cols-lg-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-lg-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-lg-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-lg-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-lg-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-lg-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-lg-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-lg-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-lg-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-lg-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-lg-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-lg-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-lg-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-lg-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-lg-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-lg-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-lg-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-lg-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-lg-12{-ms-flex:0 0 100%;flex:0 0 
100%;max-width:100%}.order-lg-first{-ms-flex-order:-1;order:-1}.order-lg-last{-ms-flex-order:13;order:13}.order-lg-0{-ms-flex-order:0;order:0}.order-lg-1{-ms-flex-order:1;order:1}.order-lg-2{-ms-flex-order:2;order:2}.order-lg-3{-ms-flex-order:3;order:3}.order-lg-4{-ms-flex-order:4;order:4}.order-lg-5{-ms-flex-order:5;order:5}.order-lg-6{-ms-flex-order:6;order:6}.order-lg-7{-ms-flex-order:7;order:7}.order-lg-8{-ms-flex-order:8;order:8}.order-lg-9{-ms-flex-order:9;order:9}.order-lg-10{-ms-flex-order:10;order:10}.order-lg-11{-ms-flex-order:11;order:11}.order-lg-12{-ms-flex-order:12;order:12}.offset-lg-0{margin-left:0}.offset-lg-1{margin-left:8.333333%}.offset-lg-2{margin-left:16.666667%}.offset-lg-3{margin-left:25%}.offset-lg-4{margin-left:33.333333%}.offset-lg-5{margin-left:41.666667%}.offset-lg-6{margin-left:50%}.offset-lg-7{margin-left:58.333333%}.offset-lg-8{margin-left:66.666667%}.offset-lg-9{margin-left:75%}.offset-lg-10{margin-left:83.333333%}.offset-lg-11{margin-left:91.666667%}}@media (min-width:1200px){.col-xl{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;min-width:0;max-width:100%}.row-cols-xl-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-xl-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-xl-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-xl-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-xl-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-xl-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-xl-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-xl-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-xl-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-xl-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-xl-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-xl-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-xl-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-xl-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-xl-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-xl-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-xl-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-xl-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-xl-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-xl-first{-ms-flex-order:-1;order:-1}.order-xl-last{-ms-flex-order:13;order:13}.order-xl-0{-ms-flex-order:0;order:0}.order-xl-1{-ms-flex-order:1;order:1}.order-xl-2{-ms-flex-order:2;order:2}.order-xl-3{-ms-flex-order:3;order:3}.order-xl-4{-ms-flex-order:4;order:4}.order-xl-5{-ms-flex-order:5;order:5}.order-xl-6{-ms-flex-order:6;order:6}.order-xl-7{-ms-flex-order:7;order:7}.order-xl-8{-ms-flex-order:8;order:8}.order-xl-9{-ms-flex-order:9;order:9}.order-xl-10{-ms-flex-order:10;order:10}.order-xl-11{-ms-flex-order:11;order:11}.order-xl-12{-ms-flex-order:12;order:12}.offset-xl-0{margin-left:0}.offset-xl-1{margin-left:8.333333%}.offset-xl-2{margin-left:16.666667%}.offset-xl-3{margin-left:25%}.offset-xl-4{margin-left:33.333333%}.offset-xl-5{margin-left:41.666667%}.offset-xl-6{margin-left:50%}.offset-xl-7{margin-left:58.333333%}.offset-xl-8{margin-left:66.666667%}.offset-xl-9{margin-left:75%}.offset-xl-10{margin-left:83.333333%}.offset-xl-11{margin-left:91.666667%}}.table{width:100%;margin-bottom:1rem;color:#212529}.table td,.table th{padding:.75rem;vertical-align:top;border-top:1px solid #dee2e6}.table 
thead th{vertical-align:bottom;border-bottom:2px solid #dee2e6}.table tbody+tbody{border-top:2px solid #dee2e6}.table-sm td,.table-sm th{padding:.3rem}.table-bordered{border:1px solid #dee2e6}.table-bordered td,.table-bordered th{border:1px solid #dee2e6}.table-bordered thead td,.table-bordered thead th{border-bottom-width:2px}.table-borderless tbody+tbody,.table-borderless td,.table-borderless th,.table-borderless thead th{border:0}.table-striped tbody tr:nth-of-type(odd){background-color:rgba(0,0,0,.05)}.table-hover tbody tr:hover{color:#212529;background-color:rgba(0,0,0,.075)}.table-primary,.table-primary>td,.table-primary>th{background-color:#b8daff}.table-primary tbody+tbody,.table-primary td,.table-primary th,.table-primary thead th{border-color:#7abaff}.table-hover .table-primary:hover{background-color:#9fcdff}.table-hover .table-primary:hover>td,.table-hover .table-primary:hover>th{background-color:#9fcdff}.table-secondary,.table-secondary>td,.table-secondary>th{background-color:#d6d8db}.table-secondary tbody+tbody,.table-secondary td,.table-secondary th,.table-secondary thead th{border-color:#b3b7bb}.table-hover .table-secondary:hover{background-color:#c8cbcf}.table-hover .table-secondary:hover>td,.table-hover .table-secondary:hover>th{background-color:#c8cbcf}.table-success,.table-success>td,.table-success>th{background-color:#c3e6cb}.table-success tbody+tbody,.table-success td,.table-success th,.table-success thead th{border-color:#8fd19e}.table-hover .table-success:hover{background-color:#b1dfbb}.table-hover .table-success:hover>td,.table-hover .table-success:hover>th{background-color:#b1dfbb}.table-info,.table-info>td,.table-info>th{background-color:#bee5eb}.table-info tbody+tbody,.table-info td,.table-info th,.table-info thead th{border-color:#86cfda}.table-hover .table-info:hover{background-color:#abdde5}.table-hover .table-info:hover>td,.table-hover .table-info:hover>th{background-color:#abdde5}.table-warning,.table-warning>td,.table-warning>th{background-color:#ffeeba}.table-warning tbody+tbody,.table-warning td,.table-warning th,.table-warning thead th{border-color:#ffdf7e}.table-hover .table-warning:hover{background-color:#ffe8a1}.table-hover .table-warning:hover>td,.table-hover .table-warning:hover>th{background-color:#ffe8a1}.table-danger,.table-danger>td,.table-danger>th{background-color:#f5c6cb}.table-danger tbody+tbody,.table-danger td,.table-danger th,.table-danger thead th{border-color:#ed969e}.table-hover .table-danger:hover{background-color:#f1b0b7}.table-hover .table-danger:hover>td,.table-hover .table-danger:hover>th{background-color:#f1b0b7}.table-light,.table-light>td,.table-light>th{background-color:#fdfdfe}.table-light tbody+tbody,.table-light td,.table-light th,.table-light thead th{border-color:#fbfcfc}.table-hover .table-light:hover{background-color:#ececf6}.table-hover .table-light:hover>td,.table-hover .table-light:hover>th{background-color:#ececf6}.table-dark,.table-dark>td,.table-dark>th{background-color:#c6c8ca}.table-dark tbody+tbody,.table-dark td,.table-dark th,.table-dark thead th{border-color:#95999c}.table-hover .table-dark:hover{background-color:#b9bbbe}.table-hover .table-dark:hover>td,.table-hover .table-dark:hover>th{background-color:#b9bbbe}.table-active,.table-active>td,.table-active>th{background-color:rgba(0,0,0,.075)}.table-hover .table-active:hover{background-color:rgba(0,0,0,.075)}.table-hover .table-active:hover>td,.table-hover .table-active:hover>th{background-color:rgba(0,0,0,.075)}.table .thead-dark 
th{color:#fff;background-color:#343a40;border-color:#454d55}.table .thead-light th{color:#495057;background-color:#e9ecef;border-color:#dee2e6}.table-dark{color:#fff;background-color:#343a40}.table-dark td,.table-dark th,.table-dark thead th{border-color:#454d55}.table-dark.table-bordered{border:0}.table-dark.table-striped tbody tr:nth-of-type(odd){background-color:rgba(255,255,255,.05)}.table-dark.table-hover tbody tr:hover{color:#fff;background-color:rgba(255,255,255,.075)}@media (max-width:575.98px){.table-responsive-sm{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-sm>.table-bordered{border:0}}@media (max-width:767.98px){.table-responsive-md{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-md>.table-bordered{border:0}}@media (max-width:991.98px){.table-responsive-lg{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-lg>.table-bordered{border:0}}@media (max-width:1199.98px){.table-responsive-xl{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-xl>.table-bordered{border:0}}.table-responsive{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive>.table-bordered{border:0}.form-control{display:block;width:100%;height:calc(1.5em + .75rem + 2px);padding:.375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;background-color:#fff;background-clip:padding-box;border:1px solid #ced4da;border-radius:.25rem;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-control{transition:none}}.form-control::-ms-expand{background-color:transparent;border:0}.form-control:-moz-focusring{color:transparent;text-shadow:0 0 0 #495057}.form-control:focus{color:#495057;background-color:#fff;border-color:#80bdff;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.form-control::-webkit-input-placeholder{color:#6c757d;opacity:1}.form-control::-moz-placeholder{color:#6c757d;opacity:1}.form-control:-ms-input-placeholder{color:#6c757d;opacity:1}.form-control::-ms-input-placeholder{color:#6c757d;opacity:1}.form-control::placeholder{color:#6c757d;opacity:1}.form-control:disabled,.form-control[readonly]{background-color:#e9ecef;opacity:1}input[type=date].form-control,input[type=datetime-local].form-control,input[type=month].form-control,input[type=time].form-control{-webkit-appearance:none;-moz-appearance:none;appearance:none}select.form-control:focus::-ms-value{color:#495057;background-color:#fff}.form-control-file,.form-control-range{display:block;width:100%}.col-form-label{padding-top:calc(.375rem + 1px);padding-bottom:calc(.375rem + 1px);margin-bottom:0;font-size:inherit;line-height:1.5}.col-form-label-lg{padding-top:calc(.5rem + 1px);padding-bottom:calc(.5rem + 1px);font-size:1.25rem;line-height:1.5}.col-form-label-sm{padding-top:calc(.25rem + 1px);padding-bottom:calc(.25rem + 1px);font-size:.875rem;line-height:1.5}.form-control-plaintext{display:block;width:100%;padding:.375rem 0;margin-bottom:0;font-size:1rem;line-height:1.5;color:#212529;background-color:transparent;border:solid transparent;border-width:1px 0}.form-control-plaintext.form-control-lg,.form-control-plaintext.form-control-sm{padding-right:0;padding-left:0}.form-control-sm{height:calc(1.5em + .5rem + 2px);padding:.25rem .5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.form-control-lg{height:calc(1.5em + 1rem + 2px);padding:.5rem 
1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}select.form-control[multiple],select.form-control[size]{height:auto}textarea.form-control{height:auto}.form-group{margin-bottom:1rem}.form-text{display:block;margin-top:.25rem}.form-row{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;margin-right:-5px;margin-left:-5px}.form-row>.col,.form-row>[class*=col-]{padding-right:5px;padding-left:5px}.form-check{position:relative;display:block;padding-left:1.25rem}.form-check-input{position:absolute;margin-top:.3rem;margin-left:-1.25rem}.form-check-input:disabled~.form-check-label,.form-check-input[disabled]~.form-check-label{color:#6c757d}.form-check-label{margin-bottom:0}.form-check-inline{display:-ms-inline-flexbox;display:inline-flex;-ms-flex-align:center;align-items:center;padding-left:0;margin-right:.75rem}.form-check-inline .form-check-input{position:static;margin-top:0;margin-right:.3125rem;margin-left:0}.valid-feedback{display:none;width:100%;margin-top:.25rem;font-size:80%;color:#28a745}.valid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;line-height:1.5;color:#fff;background-color:rgba(40,167,69,.9);border-radius:.25rem}.is-valid~.valid-feedback,.is-valid~.valid-tooltip,.was-validated :valid~.valid-feedback,.was-validated :valid~.valid-tooltip{display:block}.form-control.is-valid,.was-validated .form-control:valid{border-color:#28a745;padding-right:calc(1.5em + .75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath fill='%2328a745' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + .375rem)}.form-control.is-valid:focus,.was-validated .form-control:valid:focus{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.was-validated textarea.form-control:valid,textarea.form-control.is-valid{padding-right:calc(1.5em + .75rem);background-position:top calc(.375em + .1875rem) right calc(.375em + .1875rem)}.custom-select.is-valid,.was-validated .custom-select:valid{border-color:#28a745;padding-right:calc(.75em + 2.3125rem);background:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5' viewBox='0 0 4 5'%3e%3cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3e%3c/svg%3e") no-repeat right .75rem center/8px 10px,url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath fill='%2328a745' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e") #fff no-repeat center right 1.75rem/calc(.75em + .375rem) calc(.75em + .375rem)}.custom-select.is-valid:focus,.was-validated .custom-select:valid:focus{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.form-check-input.is-valid~.form-check-label,.was-validated .form-check-input:valid~.form-check-label{color:#28a745}.form-check-input.is-valid~.valid-feedback,.form-check-input.is-valid~.valid-tooltip,.was-validated .form-check-input:valid~.valid-feedback,.was-validated .form-check-input:valid~.valid-tooltip{display:block}.custom-control-input.is-valid~.custom-control-label,.was-validated 
.custom-control-input:valid~.custom-control-label{color:#28a745}.custom-control-input.is-valid~.custom-control-label::before,.was-validated .custom-control-input:valid~.custom-control-label::before{border-color:#28a745}.custom-control-input.is-valid:checked~.custom-control-label::before,.was-validated .custom-control-input:valid:checked~.custom-control-label::before{border-color:#34ce57;background-color:#34ce57}.custom-control-input.is-valid:focus~.custom-control-label::before,.was-validated .custom-control-input:valid:focus~.custom-control-label::before{box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.custom-control-input.is-valid:focus:not(:checked)~.custom-control-label::before,.was-validated .custom-control-input:valid:focus:not(:checked)~.custom-control-label::before{border-color:#28a745}.custom-file-input.is-valid~.custom-file-label,.was-validated .custom-file-input:valid~.custom-file-label{border-color:#28a745}.custom-file-input.is-valid:focus~.custom-file-label,.was-validated .custom-file-input:valid:focus~.custom-file-label{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.invalid-feedback{display:none;width:100%;margin-top:.25rem;font-size:80%;color:#dc3545}.invalid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;line-height:1.5;color:#fff;background-color:rgba(220,53,69,.9);border-radius:.25rem}.is-invalid~.invalid-feedback,.is-invalid~.invalid-tooltip,.was-validated :invalid~.invalid-feedback,.was-validated :invalid~.invalid-tooltip{display:block}.form-control.is-invalid,.was-validated .form-control:invalid{border-color:#dc3545;padding-right:calc(1.5em + .75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' fill='none' stroke='%23dc3545' viewBox='0 0 12 12'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + .375rem)}.form-control.is-invalid:focus,.was-validated .form-control:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.was-validated textarea.form-control:invalid,textarea.form-control.is-invalid{padding-right:calc(1.5em + .75rem);background-position:top calc(.375em + .1875rem) right calc(.375em + .1875rem)}.custom-select.is-invalid,.was-validated .custom-select:invalid{border-color:#dc3545;padding-right:calc(.75em + 2.3125rem);background:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5' viewBox='0 0 4 5'%3e%3cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3e%3c/svg%3e") no-repeat right .75rem center/8px 10px,url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' fill='none' stroke='%23dc3545' viewBox='0 0 12 12'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3e%3c/svg%3e") #fff no-repeat center right 1.75rem/calc(.75em + .375rem) calc(.75em + .375rem)}.custom-select.is-invalid:focus,.was-validated .custom-select:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.form-check-input.is-invalid~.form-check-label,.was-validated 
.form-check-input:invalid~.form-check-label{color:#dc3545}.form-check-input.is-invalid~.invalid-feedback,.form-check-input.is-invalid~.invalid-tooltip,.was-validated .form-check-input:invalid~.invalid-feedback,.was-validated .form-check-input:invalid~.invalid-tooltip{display:block}.custom-control-input.is-invalid~.custom-control-label,.was-validated .custom-control-input:invalid~.custom-control-label{color:#dc3545}.custom-control-input.is-invalid~.custom-control-label::before,.was-validated .custom-control-input:invalid~.custom-control-label::before{border-color:#dc3545}.custom-control-input.is-invalid:checked~.custom-control-label::before,.was-validated .custom-control-input:invalid:checked~.custom-control-label::before{border-color:#e4606d;background-color:#e4606d}.custom-control-input.is-invalid:focus~.custom-control-label::before,.was-validated .custom-control-input:invalid:focus~.custom-control-label::before{box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.custom-control-input.is-invalid:focus:not(:checked)~.custom-control-label::before,.was-validated .custom-control-input:invalid:focus:not(:checked)~.custom-control-label::before{border-color:#dc3545}.custom-file-input.is-invalid~.custom-file-label,.was-validated .custom-file-input:invalid~.custom-file-label{border-color:#dc3545}.custom-file-input.is-invalid:focus~.custom-file-label,.was-validated .custom-file-input:invalid:focus~.custom-file-label{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.form-inline{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap;-ms-flex-align:center;align-items:center}.form-inline .form-check{width:100%}@media (min-width:576px){.form-inline label{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;margin-bottom:0}.form-inline .form-group{display:-ms-flexbox;display:flex;-ms-flex:0 0 auto;flex:0 0 auto;-ms-flex-flow:row wrap;flex-flow:row wrap;-ms-flex-align:center;align-items:center;margin-bottom:0}.form-inline .form-control{display:inline-block;width:auto;vertical-align:middle}.form-inline .form-control-plaintext{display:inline-block}.form-inline .custom-select,.form-inline .input-group{width:auto}.form-inline .form-check{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;width:auto;padding-left:0}.form-inline .form-check-input{position:relative;-ms-flex-negative:0;flex-shrink:0;margin-top:0;margin-right:.25rem;margin-left:0}.form-inline .custom-control{-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center}.form-inline .custom-control-label{margin-bottom:0}}.btn{display:inline-block;font-weight:400;color:#212529;text-align:center;vertical-align:middle;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;background-color:transparent;border:1px solid transparent;padding:.375rem .75rem;font-size:1rem;line-height:1.5;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.btn{transition:none}}.btn:hover{color:#212529;text-decoration:none}.btn.focus,.btn:focus{outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.btn.disabled,.btn:disabled{opacity:.65}.btn:not(:disabled):not(.disabled){cursor:pointer}a.btn.disabled,fieldset:disabled 
a.btn{pointer-events:none}.btn-primary{color:#fff;background-color:#007bff;border-color:#007bff}.btn-primary:hover{color:#fff;background-color:#0069d9;border-color:#0062cc}.btn-primary.focus,.btn-primary:focus{color:#fff;background-color:#0069d9;border-color:#0062cc;box-shadow:0 0 0 .2rem rgba(38,143,255,.5)}.btn-primary.disabled,.btn-primary:disabled{color:#fff;background-color:#007bff;border-color:#007bff}.btn-primary:not(:disabled):not(.disabled).active,.btn-primary:not(:disabled):not(.disabled):active,.show>.btn-primary.dropdown-toggle{color:#fff;background-color:#0062cc;border-color:#005cbf}.btn-primary:not(:disabled):not(.disabled).active:focus,.btn-primary:not(:disabled):not(.disabled):active:focus,.show>.btn-primary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(38,143,255,.5)}.btn-secondary{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-secondary:hover{color:#fff;background-color:#5a6268;border-color:#545b62}.btn-secondary.focus,.btn-secondary:focus{color:#fff;background-color:#5a6268;border-color:#545b62;box-shadow:0 0 0 .2rem rgba(130,138,145,.5)}.btn-secondary.disabled,.btn-secondary:disabled{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-secondary:not(:disabled):not(.disabled).active,.btn-secondary:not(:disabled):not(.disabled):active,.show>.btn-secondary.dropdown-toggle{color:#fff;background-color:#545b62;border-color:#4e555b}.btn-secondary:not(:disabled):not(.disabled).active:focus,.btn-secondary:not(:disabled):not(.disabled):active:focus,.show>.btn-secondary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(130,138,145,.5)}.btn-success{color:#fff;background-color:#28a745;border-color:#28a745}.btn-success:hover{color:#fff;background-color:#218838;border-color:#1e7e34}.btn-success.focus,.btn-success:focus{color:#fff;background-color:#218838;border-color:#1e7e34;box-shadow:0 0 0 .2rem rgba(72,180,97,.5)}.btn-success.disabled,.btn-success:disabled{color:#fff;background-color:#28a745;border-color:#28a745}.btn-success:not(:disabled):not(.disabled).active,.btn-success:not(:disabled):not(.disabled):active,.show>.btn-success.dropdown-toggle{color:#fff;background-color:#1e7e34;border-color:#1c7430}.btn-success:not(:disabled):not(.disabled).active:focus,.btn-success:not(:disabled):not(.disabled):active:focus,.show>.btn-success.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(72,180,97,.5)}.btn-info{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-info:hover{color:#fff;background-color:#138496;border-color:#117a8b}.btn-info.focus,.btn-info:focus{color:#fff;background-color:#138496;border-color:#117a8b;box-shadow:0 0 0 .2rem rgba(58,176,195,.5)}.btn-info.disabled,.btn-info:disabled{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-info:not(:disabled):not(.disabled).active,.btn-info:not(:disabled):not(.disabled):active,.show>.btn-info.dropdown-toggle{color:#fff;background-color:#117a8b;border-color:#10707f}.btn-info:not(:disabled):not(.disabled).active:focus,.btn-info:not(:disabled):not(.disabled):active:focus,.show>.btn-info.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(58,176,195,.5)}.btn-warning{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-warning:hover{color:#212529;background-color:#e0a800;border-color:#d39e00}.btn-warning.focus,.btn-warning:focus{color:#212529;background-color:#e0a800;border-color:#d39e00;box-shadow:0 0 0 .2rem 
rgba(222,170,12,.5)}.btn-warning.disabled,.btn-warning:disabled{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-warning:not(:disabled):not(.disabled).active,.btn-warning:not(:disabled):not(.disabled):active,.show>.btn-warning.dropdown-toggle{color:#212529;background-color:#d39e00;border-color:#c69500}.btn-warning:not(:disabled):not(.disabled).active:focus,.btn-warning:not(:disabled):not(.disabled):active:focus,.show>.btn-warning.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(222,170,12,.5)}.btn-danger{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-danger:hover{color:#fff;background-color:#c82333;border-color:#bd2130}.btn-danger.focus,.btn-danger:focus{color:#fff;background-color:#c82333;border-color:#bd2130;box-shadow:0 0 0 .2rem rgba(225,83,97,.5)}.btn-danger.disabled,.btn-danger:disabled{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-danger:not(:disabled):not(.disabled).active,.btn-danger:not(:disabled):not(.disabled):active,.show>.btn-danger.dropdown-toggle{color:#fff;background-color:#bd2130;border-color:#b21f2d}.btn-danger:not(:disabled):not(.disabled).active:focus,.btn-danger:not(:disabled):not(.disabled):active:focus,.show>.btn-danger.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(225,83,97,.5)}.btn-light{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-light:hover{color:#212529;background-color:#e2e6ea;border-color:#dae0e5}.btn-light.focus,.btn-light:focus{color:#212529;background-color:#e2e6ea;border-color:#dae0e5;box-shadow:0 0 0 .2rem rgba(216,217,219,.5)}.btn-light.disabled,.btn-light:disabled{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-light:not(:disabled):not(.disabled).active,.btn-light:not(:disabled):not(.disabled):active,.show>.btn-light.dropdown-toggle{color:#212529;background-color:#dae0e5;border-color:#d3d9df}.btn-light:not(:disabled):not(.disabled).active:focus,.btn-light:not(:disabled):not(.disabled):active:focus,.show>.btn-light.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(216,217,219,.5)}.btn-dark{color:#fff;background-color:#343a40;border-color:#343a40}.btn-dark:hover{color:#fff;background-color:#23272b;border-color:#1d2124}.btn-dark.focus,.btn-dark:focus{color:#fff;background-color:#23272b;border-color:#1d2124;box-shadow:0 0 0 .2rem rgba(82,88,93,.5)}.btn-dark.disabled,.btn-dark:disabled{color:#fff;background-color:#343a40;border-color:#343a40}.btn-dark:not(:disabled):not(.disabled).active,.btn-dark:not(:disabled):not(.disabled):active,.show>.btn-dark.dropdown-toggle{color:#fff;background-color:#1d2124;border-color:#171a1d}.btn-dark:not(:disabled):not(.disabled).active:focus,.btn-dark:not(:disabled):not(.disabled):active:focus,.show>.btn-dark.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(82,88,93,.5)}.btn-outline-primary{color:#007bff;border-color:#007bff}.btn-outline-primary:hover{color:#fff;background-color:#007bff;border-color:#007bff}.btn-outline-primary.focus,.btn-outline-primary:focus{box-shadow:0 0 0 .2rem rgba(0,123,255,.5)}.btn-outline-primary.disabled,.btn-outline-primary:disabled{color:#007bff;background-color:transparent}.btn-outline-primary:not(:disabled):not(.disabled).active,.btn-outline-primary:not(:disabled):not(.disabled):active,.show>.btn-outline-primary.dropdown-toggle{color:#fff;background-color:#007bff;border-color:#007bff}.btn-outline-primary:not(:disabled):not(.disabled).active:focus,.btn-outline-primary:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-primary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem 
rgba(0,123,255,.5)}.btn-outline-secondary{color:#6c757d;border-color:#6c757d}.btn-outline-secondary:hover{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-outline-secondary.focus,.btn-outline-secondary:focus{box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.btn-outline-secondary.disabled,.btn-outline-secondary:disabled{color:#6c757d;background-color:transparent}.btn-outline-secondary:not(:disabled):not(.disabled).active,.btn-outline-secondary:not(:disabled):not(.disabled):active,.show>.btn-outline-secondary.dropdown-toggle{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-outline-secondary:not(:disabled):not(.disabled).active:focus,.btn-outline-secondary:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-secondary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.btn-outline-success{color:#28a745;border-color:#28a745}.btn-outline-success:hover{color:#fff;background-color:#28a745;border-color:#28a745}.btn-outline-success.focus,.btn-outline-success:focus{box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.btn-outline-success.disabled,.btn-outline-success:disabled{color:#28a745;background-color:transparent}.btn-outline-success:not(:disabled):not(.disabled).active,.btn-outline-success:not(:disabled):not(.disabled):active,.show>.btn-outline-success.dropdown-toggle{color:#fff;background-color:#28a745;border-color:#28a745}.btn-outline-success:not(:disabled):not(.disabled).active:focus,.btn-outline-success:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-success.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.btn-outline-info{color:#17a2b8;border-color:#17a2b8}.btn-outline-info:hover{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-outline-info.focus,.btn-outline-info:focus{box-shadow:0 0 0 .2rem rgba(23,162,184,.5)}.btn-outline-info.disabled,.btn-outline-info:disabled{color:#17a2b8;background-color:transparent}.btn-outline-info:not(:disabled):not(.disabled).active,.btn-outline-info:not(:disabled):not(.disabled):active,.show>.btn-outline-info.dropdown-toggle{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-outline-info:not(:disabled):not(.disabled).active:focus,.btn-outline-info:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-info.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(23,162,184,.5)}.btn-outline-warning{color:#ffc107;border-color:#ffc107}.btn-outline-warning:hover{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-outline-warning.focus,.btn-outline-warning:focus{box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.btn-outline-warning.disabled,.btn-outline-warning:disabled{color:#ffc107;background-color:transparent}.btn-outline-warning:not(:disabled):not(.disabled).active,.btn-outline-warning:not(:disabled):not(.disabled):active,.show>.btn-outline-warning.dropdown-toggle{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-outline-warning:not(:disabled):not(.disabled).active:focus,.btn-outline-warning:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-warning.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.btn-outline-danger{color:#dc3545;border-color:#dc3545}.btn-outline-danger:hover{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-outline-danger.focus,.btn-outline-danger:focus{box-shadow:0 0 0 .2rem 
rgba(220,53,69,.5)}.btn-outline-danger.disabled,.btn-outline-danger:disabled{color:#dc3545;background-color:transparent}.btn-outline-danger:not(:disabled):not(.disabled).active,.btn-outline-danger:not(:disabled):not(.disabled):active,.show>.btn-outline-danger.dropdown-toggle{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-outline-danger:not(:disabled):not(.disabled).active:focus,.btn-outline-danger:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-danger.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.btn-outline-light{color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light:hover{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light.focus,.btn-outline-light:focus{box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.btn-outline-light.disabled,.btn-outline-light:disabled{color:#f8f9fa;background-color:transparent}.btn-outline-light:not(:disabled):not(.disabled).active,.btn-outline-light:not(:disabled):not(.disabled):active,.show>.btn-outline-light.dropdown-toggle{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light:not(:disabled):not(.disabled).active:focus,.btn-outline-light:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-light.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.btn-outline-dark{color:#343a40;border-color:#343a40}.btn-outline-dark:hover{color:#fff;background-color:#343a40;border-color:#343a40}.btn-outline-dark.focus,.btn-outline-dark:focus{box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.btn-outline-dark.disabled,.btn-outline-dark:disabled{color:#343a40;background-color:transparent}.btn-outline-dark:not(:disabled):not(.disabled).active,.btn-outline-dark:not(:disabled):not(.disabled):active,.show>.btn-outline-dark.dropdown-toggle{color:#fff;background-color:#343a40;border-color:#343a40}.btn-outline-dark:not(:disabled):not(.disabled).active:focus,.btn-outline-dark:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-dark.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.btn-link{font-weight:400;color:#007bff;text-decoration:none}.btn-link:hover{color:#0056b3;text-decoration:underline}.btn-link.focus,.btn-link:focus{text-decoration:underline}.btn-link.disabled,.btn-link:disabled{color:#6c757d;pointer-events:none}.btn-group-lg>.btn,.btn-lg{padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}.btn-group-sm>.btn,.btn-sm{padding:.25rem .5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.btn-block{display:block;width:100%}.btn-block+.btn-block{margin-top:.5rem}input[type=button].btn-block,input[type=reset].btn-block,input[type=submit].btn-block{width:100%}.fade{transition:opacity .15s linear}@media (prefers-reduced-motion:reduce){.fade{transition:none}}.fade:not(.show){opacity:0}.collapse:not(.show){display:none}.collapsing{position:relative;height:0;overflow:hidden;transition:height .35s ease}@media (prefers-reduced-motion:reduce){.collapsing{transition:none}}.dropdown,.dropleft,.dropright,.dropup{position:relative}.dropdown-toggle{white-space:nowrap}.dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid;border-right:.3em solid transparent;border-bottom:0;border-left:.3em solid transparent}.dropdown-toggle:empty::after{margin-left:0}.dropdown-menu{position:absolute;top:100%;left:0;z-index:1000;display:none;float:left;min-width:10rem;padding:.5rem 0;margin:.125rem 0 
0;font-size:1rem;color:#212529;text-align:left;list-style:none;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.15);border-radius:.25rem}.dropdown-menu-left{right:auto;left:0}.dropdown-menu-right{right:0;left:auto}@media (min-width:576px){.dropdown-menu-sm-left{right:auto;left:0}.dropdown-menu-sm-right{right:0;left:auto}}@media (min-width:768px){.dropdown-menu-md-left{right:auto;left:0}.dropdown-menu-md-right{right:0;left:auto}}@media (min-width:992px){.dropdown-menu-lg-left{right:auto;left:0}.dropdown-menu-lg-right{right:0;left:auto}}@media (min-width:1200px){.dropdown-menu-xl-left{right:auto;left:0}.dropdown-menu-xl-right{right:0;left:auto}}.dropup .dropdown-menu{top:auto;bottom:100%;margin-top:0;margin-bottom:.125rem}.dropup .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:0;border-right:.3em solid transparent;border-bottom:.3em solid;border-left:.3em solid transparent}.dropup .dropdown-toggle:empty::after{margin-left:0}.dropright .dropdown-menu{top:0;right:auto;left:100%;margin-top:0;margin-left:.125rem}.dropright .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-right:0;border-bottom:.3em solid transparent;border-left:.3em solid}.dropright .dropdown-toggle:empty::after{margin-left:0}.dropright .dropdown-toggle::after{vertical-align:0}.dropleft .dropdown-menu{top:0;right:100%;left:auto;margin-top:0;margin-right:.125rem}.dropleft .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:""}.dropleft .dropdown-toggle::after{display:none}.dropleft .dropdown-toggle::before{display:inline-block;margin-right:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-right:.3em solid;border-bottom:.3em solid transparent}.dropleft .dropdown-toggle:empty::after{margin-left:0}.dropleft .dropdown-toggle::before{vertical-align:0}.dropdown-menu[x-placement^=bottom],.dropdown-menu[x-placement^=left],.dropdown-menu[x-placement^=right],.dropdown-menu[x-placement^=top]{right:auto;bottom:auto}.dropdown-divider{height:0;margin:.5rem 0;overflow:hidden;border-top:1px solid #e9ecef}.dropdown-item{display:block;width:100%;padding:.25rem 1.5rem;clear:both;font-weight:400;color:#212529;text-align:inherit;white-space:nowrap;background-color:transparent;border:0}.dropdown-item:focus,.dropdown-item:hover{color:#16181b;text-decoration:none;background-color:#f8f9fa}.dropdown-item.active,.dropdown-item:active{color:#fff;text-decoration:none;background-color:#007bff}.dropdown-item.disabled,.dropdown-item:disabled{color:#6c757d;pointer-events:none;background-color:transparent}.dropdown-menu.show{display:block}.dropdown-header{display:block;padding:.5rem 1.5rem;margin-bottom:0;font-size:.875rem;color:#6c757d;white-space:nowrap}.dropdown-item-text{display:block;padding:.25rem 1.5rem;color:#212529}.btn-group,.btn-group-vertical{position:relative;display:-ms-inline-flexbox;display:inline-flex;vertical-align:middle}.btn-group-vertical>.btn,.btn-group>.btn{position:relative;-ms-flex:1 1 auto;flex:1 1 auto}.btn-group-vertical>.btn:hover,.btn-group>.btn:hover{z-index:1}.btn-group-vertical>.btn.active,.btn-group-vertical>.btn:active,.btn-group-vertical>.btn:focus,.btn-group>.btn.active,.btn-group>.btn:active,.btn-group>.btn:focus{z-index:1}.btn-toolbar{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-pack:start;justify-content:flex-start}.btn-toolbar 
.input-group{width:auto}.btn-group>.btn-group:not(:first-child),.btn-group>.btn:not(:first-child){margin-left:-1px}.btn-group>.btn-group:not(:last-child)>.btn,.btn-group>.btn:not(:last-child):not(.dropdown-toggle){border-top-right-radius:0;border-bottom-right-radius:0}.btn-group>.btn-group:not(:first-child)>.btn,.btn-group>.btn:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.dropdown-toggle-split{padding-right:.5625rem;padding-left:.5625rem}.dropdown-toggle-split::after,.dropright .dropdown-toggle-split::after,.dropup .dropdown-toggle-split::after{margin-left:0}.dropleft .dropdown-toggle-split::before{margin-right:0}.btn-group-sm>.btn+.dropdown-toggle-split,.btn-sm+.dropdown-toggle-split{padding-right:.375rem;padding-left:.375rem}.btn-group-lg>.btn+.dropdown-toggle-split,.btn-lg+.dropdown-toggle-split{padding-right:.75rem;padding-left:.75rem}.btn-group-vertical{-ms-flex-direction:column;flex-direction:column;-ms-flex-align:start;align-items:flex-start;-ms-flex-pack:center;justify-content:center}.btn-group-vertical>.btn,.btn-group-vertical>.btn-group{width:100%}.btn-group-vertical>.btn-group:not(:first-child),.btn-group-vertical>.btn:not(:first-child){margin-top:-1px}.btn-group-vertical>.btn-group:not(:last-child)>.btn,.btn-group-vertical>.btn:not(:last-child):not(.dropdown-toggle){border-bottom-right-radius:0;border-bottom-left-radius:0}.btn-group-vertical>.btn-group:not(:first-child)>.btn,.btn-group-vertical>.btn:not(:first-child){border-top-left-radius:0;border-top-right-radius:0}.btn-group-toggle>.btn,.btn-group-toggle>.btn-group>.btn{margin-bottom:0}.btn-group-toggle>.btn input[type=checkbox],.btn-group-toggle>.btn input[type=radio],.btn-group-toggle>.btn-group>.btn input[type=checkbox],.btn-group-toggle>.btn-group>.btn input[type=radio]{position:absolute;clip:rect(0,0,0,0);pointer-events:none}.input-group{position:relative;display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:stretch;align-items:stretch;width:100%}.input-group>.custom-file,.input-group>.custom-select,.input-group>.form-control,.input-group>.form-control-plaintext{position:relative;-ms-flex:1 1 auto;flex:1 1 auto;width:1%;min-width:0;margin-bottom:0}.input-group>.custom-file+.custom-file,.input-group>.custom-file+.custom-select,.input-group>.custom-file+.form-control,.input-group>.custom-select+.custom-file,.input-group>.custom-select+.custom-select,.input-group>.custom-select+.form-control,.input-group>.form-control+.custom-file,.input-group>.form-control+.custom-select,.input-group>.form-control+.form-control,.input-group>.form-control-plaintext+.custom-file,.input-group>.form-control-plaintext+.custom-select,.input-group>.form-control-plaintext+.form-control{margin-left:-1px}.input-group>.custom-file .custom-file-input:focus~.custom-file-label,.input-group>.custom-select:focus,.input-group>.form-control:focus{z-index:3}.input-group>.custom-file .custom-file-input:focus{z-index:4}.input-group>.custom-select:not(:last-child),.input-group>.form-control:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.custom-select:not(:first-child),.input-group>.form-control:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.input-group>.custom-file{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center}.input-group>.custom-file:not(:last-child) .custom-file-label,.input-group>.custom-file:not(:last-child) 
.custom-file-label::after{border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.custom-file:not(:first-child) .custom-file-label{border-top-left-radius:0;border-bottom-left-radius:0}.input-group-append,.input-group-prepend{display:-ms-flexbox;display:flex}.input-group-append .btn,.input-group-prepend .btn{position:relative;z-index:2}.input-group-append .btn:focus,.input-group-prepend .btn:focus{z-index:3}.input-group-append .btn+.btn,.input-group-append .btn+.input-group-text,.input-group-append .input-group-text+.btn,.input-group-append .input-group-text+.input-group-text,.input-group-prepend .btn+.btn,.input-group-prepend .btn+.input-group-text,.input-group-prepend .input-group-text+.btn,.input-group-prepend .input-group-text+.input-group-text{margin-left:-1px}.input-group-prepend{margin-right:-1px}.input-group-append{margin-left:-1px}.input-group-text{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;padding:.375rem .75rem;margin-bottom:0;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;text-align:center;white-space:nowrap;background-color:#e9ecef;border:1px solid #ced4da;border-radius:.25rem}.input-group-text input[type=checkbox],.input-group-text input[type=radio]{margin-top:0}.input-group-lg>.custom-select,.input-group-lg>.form-control:not(textarea){height:calc(1.5em + 1rem + 2px)}.input-group-lg>.custom-select,.input-group-lg>.form-control,.input-group-lg>.input-group-append>.btn,.input-group-lg>.input-group-append>.input-group-text,.input-group-lg>.input-group-prepend>.btn,.input-group-lg>.input-group-prepend>.input-group-text{padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}.input-group-sm>.custom-select,.input-group-sm>.form-control:not(textarea){height:calc(1.5em + .5rem + 2px)}.input-group-sm>.custom-select,.input-group-sm>.form-control,.input-group-sm>.input-group-append>.btn,.input-group-sm>.input-group-append>.input-group-text,.input-group-sm>.input-group-prepend>.btn,.input-group-sm>.input-group-prepend>.input-group-text{padding:.25rem .5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.input-group-lg>.custom-select,.input-group-sm>.custom-select{padding-right:1.75rem}.input-group>.input-group-append:last-child>.btn:not(:last-child):not(.dropdown-toggle),.input-group>.input-group-append:last-child>.input-group-text:not(:last-child),.input-group>.input-group-append:not(:last-child)>.btn,.input-group>.input-group-append:not(:last-child)>.input-group-text,.input-group>.input-group-prepend>.btn,.input-group>.input-group-prepend>.input-group-text{border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.input-group-append>.btn,.input-group>.input-group-append>.input-group-text,.input-group>.input-group-prepend:first-child>.btn:not(:first-child),.input-group>.input-group-prepend:first-child>.input-group-text:not(:first-child),.input-group>.input-group-prepend:not(:first-child)>.btn,.input-group>.input-group-prepend:not(:first-child)>.input-group-text{border-top-left-radius:0;border-bottom-left-radius:0}.custom-control{position:relative;display:block;min-height:1.5rem;padding-left:1.5rem}.custom-control-inline{display:-ms-inline-flexbox;display:inline-flex;margin-right:1rem}.custom-control-input{position:absolute;left:0;z-index:-1;width:1rem;height:1.25rem;opacity:0}.custom-control-input:checked~.custom-control-label::before{color:#fff;border-color:#007bff;background-color:#007bff}.custom-control-input:focus~.custom-control-label::before{box-shadow:0 0 0 .2rem 
rgba(0,123,255,.25)}.custom-control-input:focus:not(:checked)~.custom-control-label::before{border-color:#80bdff}.custom-control-input:not(:disabled):active~.custom-control-label::before{color:#fff;background-color:#b3d7ff;border-color:#b3d7ff}.custom-control-input:disabled~.custom-control-label,.custom-control-input[disabled]~.custom-control-label{color:#6c757d}.custom-control-input:disabled~.custom-control-label::before,.custom-control-input[disabled]~.custom-control-label::before{background-color:#e9ecef}.custom-control-label{position:relative;margin-bottom:0;vertical-align:top}.custom-control-label::before{position:absolute;top:.25rem;left:-1.5rem;display:block;width:1rem;height:1rem;pointer-events:none;content:"";background-color:#fff;border:#adb5bd solid 1px}.custom-control-label::after{position:absolute;top:.25rem;left:-1.5rem;display:block;width:1rem;height:1rem;content:"";background:no-repeat 50%/50% 50%}.custom-checkbox .custom-control-label::before{border-radius:.25rem}.custom-checkbox .custom-control-input:checked~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath fill='%23fff' d='M6.564.75l-3.59 3.612-1.538-1.55L0 4.26l2.974 2.99L8 2.193z'/%3e%3c/svg%3e")}.custom-checkbox .custom-control-input:indeterminate~.custom-control-label::before{border-color:#007bff;background-color:#007bff}.custom-checkbox .custom-control-input:indeterminate~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='4' viewBox='0 0 4 4'%3e%3cpath stroke='%23fff' d='M0 2h4'/%3e%3c/svg%3e")}.custom-checkbox .custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-checkbox .custom-control-input:disabled:indeterminate~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-radio .custom-control-label::before{border-radius:50%}.custom-radio .custom-control-input:checked~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%23fff'/%3e%3c/svg%3e")}.custom-radio .custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-switch{padding-left:2.25rem}.custom-switch .custom-control-label::before{left:-2.25rem;width:1.75rem;pointer-events:all;border-radius:.5rem}.custom-switch .custom-control-label::after{top:calc(.25rem + 2px);left:calc(-2.25rem + 2px);width:calc(1rem - 4px);height:calc(1rem - 4px);background-color:#adb5bd;border-radius:.5rem;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out,-webkit-transform .15s ease-in-out;transition:transform .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:transform .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out,-webkit-transform .15s ease-in-out}@media (prefers-reduced-motion:reduce){.custom-switch .custom-control-label::after{transition:none}}.custom-switch .custom-control-input:checked~.custom-control-label::after{background-color:#fff;-webkit-transform:translateX(.75rem);transform:translateX(.75rem)}.custom-switch 
.custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-select{display:inline-block;width:100%;height:calc(1.5em + .75rem + 2px);padding:.375rem 1.75rem .375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;vertical-align:middle;background:#fff url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5' viewBox='0 0 4 5'%3e%3cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3e%3c/svg%3e") no-repeat right .75rem center/8px 10px;border:1px solid #ced4da;border-radius:.25rem;-webkit-appearance:none;-moz-appearance:none;appearance:none}.custom-select:focus{border-color:#80bdff;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-select:focus::-ms-value{color:#495057;background-color:#fff}.custom-select[multiple],.custom-select[size]:not([size="1"]){height:auto;padding-right:.75rem;background-image:none}.custom-select:disabled{color:#6c757d;background-color:#e9ecef}.custom-select::-ms-expand{display:none}.custom-select:-moz-focusring{color:transparent;text-shadow:0 0 0 #495057}.custom-select-sm{height:calc(1.5em + .5rem + 2px);padding-top:.25rem;padding-bottom:.25rem;padding-left:.5rem;font-size:.875rem}.custom-select-lg{height:calc(1.5em + 1rem + 2px);padding-top:.5rem;padding-bottom:.5rem;padding-left:1rem;font-size:1.25rem}.custom-file{position:relative;display:inline-block;width:100%;height:calc(1.5em + .75rem + 2px);margin-bottom:0}.custom-file-input{position:relative;z-index:2;width:100%;height:calc(1.5em + .75rem + 2px);margin:0;opacity:0}.custom-file-input:focus~.custom-file-label{border-color:#80bdff;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-file-input:disabled~.custom-file-label,.custom-file-input[disabled]~.custom-file-label{background-color:#e9ecef}.custom-file-input:lang(en)~.custom-file-label::after{content:"Browse"}.custom-file-input~.custom-file-label[data-browse]::after{content:attr(data-browse)}.custom-file-label{position:absolute;top:0;right:0;left:0;z-index:1;height:calc(1.5em + .75rem + 2px);padding:.375rem .75rem;font-weight:400;line-height:1.5;color:#495057;background-color:#fff;border:1px solid #ced4da;border-radius:.25rem}.custom-file-label::after{position:absolute;top:0;right:0;bottom:0;z-index:3;display:block;height:calc(1.5em + .75rem);padding:.375rem .75rem;line-height:1.5;color:#495057;content:"Browse";background-color:#e9ecef;border-left:inherit;border-radius:0 .25rem .25rem 0}.custom-range{width:100%;height:1.4rem;padding:0;background-color:transparent;-webkit-appearance:none;-moz-appearance:none;appearance:none}.custom-range:focus{outline:0}.custom-range:focus::-webkit-slider-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range:focus::-moz-range-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range:focus::-ms-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range::-moz-focus-outer{border:0}.custom-range::-webkit-slider-thumb{width:1rem;height:1rem;margin-top:-.25rem;background-color:#007bff;border:0;border-radius:1rem;-webkit-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-webkit-appearance:none;appearance:none}@media 
(prefers-reduced-motion:reduce){.custom-range::-webkit-slider-thumb{-webkit-transition:none;transition:none}}.custom-range::-webkit-slider-thumb:active{background-color:#b3d7ff}.custom-range::-webkit-slider-runnable-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.custom-range::-moz-range-thumb{width:1rem;height:1rem;background-color:#007bff;border:0;border-radius:1rem;-moz-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-moz-appearance:none;appearance:none}@media (prefers-reduced-motion:reduce){.custom-range::-moz-range-thumb{-moz-transition:none;transition:none}}.custom-range::-moz-range-thumb:active{background-color:#b3d7ff}.custom-range::-moz-range-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.custom-range::-ms-thumb{width:1rem;height:1rem;margin-top:0;margin-right:.2rem;margin-left:.2rem;background-color:#007bff;border:0;border-radius:1rem;-ms-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;appearance:none}@media (prefers-reduced-motion:reduce){.custom-range::-ms-thumb{-ms-transition:none;transition:none}}.custom-range::-ms-thumb:active{background-color:#b3d7ff}.custom-range::-ms-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:transparent;border-color:transparent;border-width:.5rem}.custom-range::-ms-fill-lower{background-color:#dee2e6;border-radius:1rem}.custom-range::-ms-fill-upper{margin-right:15px;background-color:#dee2e6;border-radius:1rem}.custom-range:disabled::-webkit-slider-thumb{background-color:#adb5bd}.custom-range:disabled::-webkit-slider-runnable-track{cursor:default}.custom-range:disabled::-moz-range-thumb{background-color:#adb5bd}.custom-range:disabled::-moz-range-track{cursor:default}.custom-range:disabled::-ms-thumb{background-color:#adb5bd}.custom-control-label::before,.custom-file-label,.custom-select{transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.custom-control-label::before,.custom-file-label,.custom-select{transition:none}}.nav{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;padding-left:0;margin-bottom:0;list-style:none}.nav-link{display:block;padding:.5rem 1rem}.nav-link:focus,.nav-link:hover{text-decoration:none}.nav-link.disabled{color:#6c757d;pointer-events:none;cursor:default}.nav-tabs{border-bottom:1px solid #dee2e6}.nav-tabs .nav-item{margin-bottom:-1px}.nav-tabs .nav-link{border:1px solid transparent;border-top-left-radius:.25rem;border-top-right-radius:.25rem}.nav-tabs .nav-link:focus,.nav-tabs .nav-link:hover{border-color:#e9ecef #e9ecef #dee2e6}.nav-tabs .nav-link.disabled{color:#6c757d;background-color:transparent;border-color:transparent}.nav-tabs .nav-item.show .nav-link,.nav-tabs .nav-link.active{color:#495057;background-color:#fff;border-color:#dee2e6 #dee2e6 #fff}.nav-tabs .dropdown-menu{margin-top:-1px;border-top-left-radius:0;border-top-right-radius:0}.nav-pills .nav-link{border-radius:.25rem}.nav-pills .nav-link.active,.nav-pills .show>.nav-link{color:#fff;background-color:#007bff}.nav-fill .nav-item{-ms-flex:1 1 
auto;flex:1 1 auto;text-align:center}.nav-justified .nav-item{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;text-align:center}.tab-content>.tab-pane{display:none}.tab-content>.active{display:block}.navbar{position:relative;display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:justify;justify-content:space-between;padding:.5rem 1rem}.navbar .container,.navbar .container-fluid,.navbar .container-lg,.navbar .container-md,.navbar .container-sm,.navbar .container-xl{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:justify;justify-content:space-between}.navbar-brand{display:inline-block;padding-top:.3125rem;padding-bottom:.3125rem;margin-right:1rem;font-size:1.25rem;line-height:inherit;white-space:nowrap}.navbar-brand:focus,.navbar-brand:hover{text-decoration:none}.navbar-nav{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;padding-left:0;margin-bottom:0;list-style:none}.navbar-nav .nav-link{padding-right:0;padding-left:0}.navbar-nav .dropdown-menu{position:static;float:none}.navbar-text{display:inline-block;padding-top:.5rem;padding-bottom:.5rem}.navbar-collapse{-ms-flex-preferred-size:100%;flex-basis:100%;-ms-flex-positive:1;flex-grow:1;-ms-flex-align:center;align-items:center}.navbar-toggler{padding:.25rem .75rem;font-size:1.25rem;line-height:1;background-color:transparent;border:1px solid transparent;border-radius:.25rem}.navbar-toggler:focus,.navbar-toggler:hover{text-decoration:none}.navbar-toggler-icon{display:inline-block;width:1.5em;height:1.5em;vertical-align:middle;content:"";background:no-repeat center center;background-size:100% 100%}@media (max-width:575.98px){.navbar-expand-sm>.container,.navbar-expand-sm>.container-fluid,.navbar-expand-sm>.container-lg,.navbar-expand-sm>.container-md,.navbar-expand-sm>.container-sm,.navbar-expand-sm>.container-xl{padding-right:0;padding-left:0}}@media (min-width:576px){.navbar-expand-sm{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-sm .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-sm .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-sm .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-sm>.container,.navbar-expand-sm>.container-fluid,.navbar-expand-sm>.container-lg,.navbar-expand-sm>.container-md,.navbar-expand-sm>.container-sm,.navbar-expand-sm>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-sm .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-sm .navbar-toggler{display:none}}@media (max-width:767.98px){.navbar-expand-md>.container,.navbar-expand-md>.container-fluid,.navbar-expand-md>.container-lg,.navbar-expand-md>.container-md,.navbar-expand-md>.container-sm,.navbar-expand-md>.container-xl{padding-right:0;padding-left:0}}@media (min-width:768px){.navbar-expand-md{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-md .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-md .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-md .navbar-nav 
.nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-md>.container,.navbar-expand-md>.container-fluid,.navbar-expand-md>.container-lg,.navbar-expand-md>.container-md,.navbar-expand-md>.container-sm,.navbar-expand-md>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-md .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-md .navbar-toggler{display:none}}@media (max-width:991.98px){.navbar-expand-lg>.container,.navbar-expand-lg>.container-fluid,.navbar-expand-lg>.container-lg,.navbar-expand-lg>.container-md,.navbar-expand-lg>.container-sm,.navbar-expand-lg>.container-xl{padding-right:0;padding-left:0}}@media (min-width:992px){.navbar-expand-lg{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-lg .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-lg .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-lg .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-lg>.container,.navbar-expand-lg>.container-fluid,.navbar-expand-lg>.container-lg,.navbar-expand-lg>.container-md,.navbar-expand-lg>.container-sm,.navbar-expand-lg>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-lg .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-lg .navbar-toggler{display:none}}@media (max-width:1199.98px){.navbar-expand-xl>.container,.navbar-expand-xl>.container-fluid,.navbar-expand-xl>.container-lg,.navbar-expand-xl>.container-md,.navbar-expand-xl>.container-sm,.navbar-expand-xl>.container-xl{padding-right:0;padding-left:0}}@media (min-width:1200px){.navbar-expand-xl{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-xl .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-xl .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-xl .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-xl>.container,.navbar-expand-xl>.container-fluid,.navbar-expand-xl>.container-lg,.navbar-expand-xl>.container-md,.navbar-expand-xl>.container-sm,.navbar-expand-xl>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-xl .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-xl .navbar-toggler{display:none}}.navbar-expand{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand>.container,.navbar-expand>.container-fluid,.navbar-expand>.container-lg,.navbar-expand>.container-md,.navbar-expand>.container-sm,.navbar-expand>.container-xl{padding-right:0;padding-left:0}.navbar-expand .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand .navbar-nav .dropdown-menu{position:absolute}.navbar-expand .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand>.container,.navbar-expand>.container-fluid,.navbar-expand>.container-lg,.navbar-expand>.container-md,.navbar-expand>.container-sm,.navbar-expand>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand .navbar-toggler{display:none}.navbar-light .navbar-brand{color:rgba(0,0,0,.9)}.navbar-light .navbar-brand:focus,.navbar-light .navbar-brand:hover{color:rgba(0,0,0,.9)}.navbar-light 
.navbar-nav .nav-link{color:rgba(0,0,0,.5)}.navbar-light .navbar-nav .nav-link:focus,.navbar-light .navbar-nav .nav-link:hover{color:rgba(0,0,0,.7)}.navbar-light .navbar-nav .nav-link.disabled{color:rgba(0,0,0,.3)}.navbar-light .navbar-nav .active>.nav-link,.navbar-light .navbar-nav .nav-link.active,.navbar-light .navbar-nav .nav-link.show,.navbar-light .navbar-nav .show>.nav-link{color:rgba(0,0,0,.9)}.navbar-light .navbar-toggler{color:rgba(0,0,0,.5);border-color:rgba(0,0,0,.1)}.navbar-light .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%280, 0, 0, 0.5%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-light .navbar-text{color:rgba(0,0,0,.5)}.navbar-light .navbar-text a{color:rgba(0,0,0,.9)}.navbar-light .navbar-text a:focus,.navbar-light .navbar-text a:hover{color:rgba(0,0,0,.9)}.navbar-dark .navbar-brand{color:#fff}.navbar-dark .navbar-brand:focus,.navbar-dark .navbar-brand:hover{color:#fff}.navbar-dark .navbar-nav .nav-link{color:rgba(255,255,255,.5)}.navbar-dark .navbar-nav .nav-link:focus,.navbar-dark .navbar-nav .nav-link:hover{color:rgba(255,255,255,.75)}.navbar-dark .navbar-nav .nav-link.disabled{color:rgba(255,255,255,.25)}.navbar-dark .navbar-nav .active>.nav-link,.navbar-dark .navbar-nav .nav-link.active,.navbar-dark .navbar-nav .nav-link.show,.navbar-dark .navbar-nav .show>.nav-link{color:#fff}.navbar-dark .navbar-toggler{color:rgba(255,255,255,.5);border-color:rgba(255,255,255,.1)}.navbar-dark .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%28255, 255, 255, 0.5%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-dark .navbar-text{color:rgba(255,255,255,.5)}.navbar-dark .navbar-text a{color:#fff}.navbar-dark .navbar-text a:focus,.navbar-dark .navbar-text a:hover{color:#fff}.card{position:relative;display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;min-width:0;word-wrap:break-word;background-color:#fff;background-clip:border-box;border:1px solid rgba(0,0,0,.125);border-radius:.25rem}.card>hr{margin-right:0;margin-left:0}.card>.list-group{border-top:inherit;border-bottom:inherit}.card>.list-group:first-child{border-top-width:0;border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card>.list-group:last-child{border-bottom-width:0;border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.card-body{-ms-flex:1 1 auto;flex:1 1 auto;min-height:1px;padding:1.25rem}.card-title{margin-bottom:.75rem}.card-subtitle{margin-top:-.375rem;margin-bottom:0}.card-text:last-child{margin-bottom:0}.card-link:hover{text-decoration:none}.card-link+.card-link{margin-left:1.25rem}.card-header{padding:.75rem 1.25rem;margin-bottom:0;background-color:rgba(0,0,0,.03);border-bottom:1px solid rgba(0,0,0,.125)}.card-header:first-child{border-radius:calc(.25rem - 1px) calc(.25rem - 1px) 0 0}.card-header+.list-group .list-group-item:first-child{border-top:0}.card-footer{padding:.75rem 1.25rem;background-color:rgba(0,0,0,.03);border-top:1px solid rgba(0,0,0,.125)}.card-footer:last-child{border-radius:0 0 calc(.25rem - 1px) calc(.25rem - 
1px)}.card-header-tabs{margin-right:-.625rem;margin-bottom:-.75rem;margin-left:-.625rem;border-bottom:0}.card-header-pills{margin-right:-.625rem;margin-left:-.625rem}.card-img-overlay{position:absolute;top:0;right:0;bottom:0;left:0;padding:1.25rem}.card-img,.card-img-bottom,.card-img-top{-ms-flex-negative:0;flex-shrink:0;width:100%}.card-img,.card-img-top{border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card-img,.card-img-bottom{border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.card-deck .card{margin-bottom:15px}@media (min-width:576px){.card-deck{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap;margin-right:-15px;margin-left:-15px}.card-deck .card{-ms-flex:1 0 0%;flex:1 0 0%;margin-right:15px;margin-bottom:0;margin-left:15px}}.card-group>.card{margin-bottom:15px}@media (min-width:576px){.card-group{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap}.card-group>.card{-ms-flex:1 0 0%;flex:1 0 0%;margin-bottom:0}.card-group>.card+.card{margin-left:0;border-left:0}.card-group>.card:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.card-group>.card:not(:last-child) .card-header,.card-group>.card:not(:last-child) .card-img-top{border-top-right-radius:0}.card-group>.card:not(:last-child) .card-footer,.card-group>.card:not(:last-child) .card-img-bottom{border-bottom-right-radius:0}.card-group>.card:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.card-group>.card:not(:first-child) .card-header,.card-group>.card:not(:first-child) .card-img-top{border-top-left-radius:0}.card-group>.card:not(:first-child) .card-footer,.card-group>.card:not(:first-child) .card-img-bottom{border-bottom-left-radius:0}}.card-columns .card{margin-bottom:.75rem}@media (min-width:576px){.card-columns{-webkit-column-count:3;-moz-column-count:3;column-count:3;-webkit-column-gap:1.25rem;-moz-column-gap:1.25rem;column-gap:1.25rem;orphans:1;widows:1}.card-columns .card{display:inline-block;width:100%}}.accordion>.card{overflow:hidden}.accordion>.card:not(:last-of-type){border-bottom:0;border-bottom-right-radius:0;border-bottom-left-radius:0}.accordion>.card:not(:first-of-type){border-top-left-radius:0;border-top-right-radius:0}.accordion>.card>.card-header{border-radius:0;margin-bottom:-1px}.breadcrumb{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;padding:.75rem 1rem;margin-bottom:1rem;list-style:none;background-color:#e9ecef;border-radius:.25rem}.breadcrumb-item{display:-ms-flexbox;display:flex}.breadcrumb-item+.breadcrumb-item{padding-left:.5rem}.breadcrumb-item+.breadcrumb-item::before{display:inline-block;padding-right:.5rem;color:#6c757d;content:"/"}.breadcrumb-item+.breadcrumb-item:hover::before{text-decoration:underline}.breadcrumb-item+.breadcrumb-item:hover::before{text-decoration:none}.breadcrumb-item.active{color:#6c757d}.pagination{display:-ms-flexbox;display:flex;padding-left:0;list-style:none;border-radius:.25rem}.page-link{position:relative;display:block;padding:.5rem .75rem;margin-left:-1px;line-height:1.25;color:#007bff;background-color:#fff;border:1px solid #dee2e6}.page-link:hover{z-index:2;color:#0056b3;text-decoration:none;background-color:#e9ecef;border-color:#dee2e6}.page-link:focus{z-index:3;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.page-item:first-child .page-link{margin-left:0;border-top-left-radius:.25rem;border-bottom-left-radius:.25rem}.page-item:last-child 
.page-link{border-top-right-radius:.25rem;border-bottom-right-radius:.25rem}.page-item.active .page-link{z-index:3;color:#fff;background-color:#007bff;border-color:#007bff}.page-item.disabled .page-link{color:#6c757d;pointer-events:none;cursor:auto;background-color:#fff;border-color:#dee2e6}.pagination-lg .page-link{padding:.75rem 1.5rem;font-size:1.25rem;line-height:1.5}.pagination-lg .page-item:first-child .page-link{border-top-left-radius:.3rem;border-bottom-left-radius:.3rem}.pagination-lg .page-item:last-child .page-link{border-top-right-radius:.3rem;border-bottom-right-radius:.3rem}.pagination-sm .page-link{padding:.25rem .5rem;font-size:.875rem;line-height:1.5}.pagination-sm .page-item:first-child .page-link{border-top-left-radius:.2rem;border-bottom-left-radius:.2rem}.pagination-sm .page-item:last-child .page-link{border-top-right-radius:.2rem;border-bottom-right-radius:.2rem}.badge{display:inline-block;padding:.25em .4em;font-size:75%;font-weight:700;line-height:1;text-align:center;white-space:nowrap;vertical-align:baseline;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.badge{transition:none}}a.badge:focus,a.badge:hover{text-decoration:none}.badge:empty{display:none}.btn .badge{position:relative;top:-1px}.badge-pill{padding-right:.6em;padding-left:.6em;border-radius:10rem}.badge-primary{color:#fff;background-color:#007bff}a.badge-primary:focus,a.badge-primary:hover{color:#fff;background-color:#0062cc}a.badge-primary.focus,a.badge-primary:focus{outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.5)}.badge-secondary{color:#fff;background-color:#6c757d}a.badge-secondary:focus,a.badge-secondary:hover{color:#fff;background-color:#545b62}a.badge-secondary.focus,a.badge-secondary:focus{outline:0;box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.badge-success{color:#fff;background-color:#28a745}a.badge-success:focus,a.badge-success:hover{color:#fff;background-color:#1e7e34}a.badge-success.focus,a.badge-success:focus{outline:0;box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.badge-info{color:#fff;background-color:#17a2b8}a.badge-info:focus,a.badge-info:hover{color:#fff;background-color:#117a8b}a.badge-info.focus,a.badge-info:focus{outline:0;box-shadow:0 0 0 .2rem rgba(23,162,184,.5)}.badge-warning{color:#212529;background-color:#ffc107}a.badge-warning:focus,a.badge-warning:hover{color:#212529;background-color:#d39e00}a.badge-warning.focus,a.badge-warning:focus{outline:0;box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.badge-danger{color:#fff;background-color:#dc3545}a.badge-danger:focus,a.badge-danger:hover{color:#fff;background-color:#bd2130}a.badge-danger.focus,a.badge-danger:focus{outline:0;box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.badge-light{color:#212529;background-color:#f8f9fa}a.badge-light:focus,a.badge-light:hover{color:#212529;background-color:#dae0e5}a.badge-light.focus,a.badge-light:focus{outline:0;box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.badge-dark{color:#fff;background-color:#343a40}a.badge-dark:focus,a.badge-dark:hover{color:#fff;background-color:#1d2124}a.badge-dark.focus,a.badge-dark:focus{outline:0;box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.jumbotron{padding:2rem 1rem;margin-bottom:2rem;background-color:#e9ecef;border-radius:.3rem}@media (min-width:576px){.jumbotron{padding:4rem 2rem}}.jumbotron-fluid{padding-right:0;padding-left:0;border-radius:0}.alert{position:relative;padding:.75rem 1.25rem;margin-bottom:1rem;border:1px solid 
transparent;border-radius:.25rem}.alert-heading{color:inherit}.alert-link{font-weight:700}.alert-dismissible{padding-right:4rem}.alert-dismissible .close{position:absolute;top:0;right:0;padding:.75rem 1.25rem;color:inherit}.alert-primary{color:#004085;background-color:#cce5ff;border-color:#b8daff}.alert-primary hr{border-top-color:#9fcdff}.alert-primary .alert-link{color:#002752}.alert-secondary{color:#383d41;background-color:#e2e3e5;border-color:#d6d8db}.alert-secondary hr{border-top-color:#c8cbcf}.alert-secondary .alert-link{color:#202326}.alert-success{color:#155724;background-color:#d4edda;border-color:#c3e6cb}.alert-success hr{border-top-color:#b1dfbb}.alert-success .alert-link{color:#0b2e13}.alert-info{color:#0c5460;background-color:#d1ecf1;border-color:#bee5eb}.alert-info hr{border-top-color:#abdde5}.alert-info .alert-link{color:#062c33}.alert-warning{color:#856404;background-color:#fff3cd;border-color:#ffeeba}.alert-warning hr{border-top-color:#ffe8a1}.alert-warning .alert-link{color:#533f03}.alert-danger{color:#721c24;background-color:#f8d7da;border-color:#f5c6cb}.alert-danger hr{border-top-color:#f1b0b7}.alert-danger .alert-link{color:#491217}.alert-light{color:#818182;background-color:#fefefe;border-color:#fdfdfe}.alert-light hr{border-top-color:#ececf6}.alert-light .alert-link{color:#686868}.alert-dark{color:#1b1e21;background-color:#d6d8d9;border-color:#c6c8ca}.alert-dark hr{border-top-color:#b9bbbe}.alert-dark .alert-link{color:#040505}@-webkit-keyframes progress-bar-stripes{from{background-position:1rem 0}to{background-position:0 0}}@keyframes progress-bar-stripes{from{background-position:1rem 0}to{background-position:0 0}}.progress{display:-ms-flexbox;display:flex;height:1rem;overflow:hidden;line-height:0;font-size:.75rem;background-color:#e9ecef;border-radius:.25rem}.progress-bar{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;-ms-flex-pack:center;justify-content:center;overflow:hidden;color:#fff;text-align:center;white-space:nowrap;background-color:#007bff;transition:width .6s ease}@media (prefers-reduced-motion:reduce){.progress-bar{transition:none}}.progress-bar-striped{background-image:linear-gradient(45deg,rgba(255,255,255,.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,.15) 50%,rgba(255,255,255,.15) 75%,transparent 75%,transparent);background-size:1rem 1rem}.progress-bar-animated{-webkit-animation:progress-bar-stripes 1s linear infinite;animation:progress-bar-stripes 1s linear infinite}@media (prefers-reduced-motion:reduce){.progress-bar-animated{-webkit-animation:none;animation:none}}.media{display:-ms-flexbox;display:flex;-ms-flex-align:start;align-items:flex-start}.media-body{-ms-flex:1;flex:1}.list-group{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;padding-left:0;margin-bottom:0;border-radius:.25rem}.list-group-item-action{width:100%;color:#495057;text-align:inherit}.list-group-item-action:focus,.list-group-item-action:hover{z-index:1;color:#495057;text-decoration:none;background-color:#f8f9fa}.list-group-item-action:active{color:#212529;background-color:#e9ecef}.list-group-item{position:relative;display:block;padding:.75rem 1.25rem;background-color:#fff;border:1px solid 
rgba(0,0,0,.125)}.list-group-item:first-child{border-top-left-radius:inherit;border-top-right-radius:inherit}.list-group-item:last-child{border-bottom-right-radius:inherit;border-bottom-left-radius:inherit}.list-group-item.disabled,.list-group-item:disabled{color:#6c757d;pointer-events:none;background-color:#fff}.list-group-item.active{z-index:2;color:#fff;background-color:#007bff;border-color:#007bff}.list-group-item+.list-group-item{border-top-width:0}.list-group-item+.list-group-item.active{margin-top:-1px;border-top-width:1px}.list-group-horizontal{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal>.list-group-item.active{margin-top:0}.list-group-horizontal>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}@media (min-width:576px){.list-group-horizontal-sm{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-sm>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-sm>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-sm>.list-group-item.active{margin-top:0}.list-group-horizontal-sm>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-sm>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:768px){.list-group-horizontal-md{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-md>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-md>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-md>.list-group-item.active{margin-top:0}.list-group-horizontal-md>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-md>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:992px){.list-group-horizontal-lg{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-lg>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-lg>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-lg>.list-group-item.active{margin-top:0}.list-group-horizontal-lg>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-lg>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:1200px){.list-group-horizontal-xl{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-xl>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-xl>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-xl>.list-group-item.active{margin-top:0}.list-group-horizontal-xl>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-xl>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}.list-group-flush{border-radius:0}.list-group-flush>.list-group-item{border-width:0 0 
1px}.list-group-flush>.list-group-item:last-child{border-bottom-width:0}.list-group-item-primary{color:#004085;background-color:#b8daff}.list-group-item-primary.list-group-item-action:focus,.list-group-item-primary.list-group-item-action:hover{color:#004085;background-color:#9fcdff}.list-group-item-primary.list-group-item-action.active{color:#fff;background-color:#004085;border-color:#004085}.list-group-item-secondary{color:#383d41;background-color:#d6d8db}.list-group-item-secondary.list-group-item-action:focus,.list-group-item-secondary.list-group-item-action:hover{color:#383d41;background-color:#c8cbcf}.list-group-item-secondary.list-group-item-action.active{color:#fff;background-color:#383d41;border-color:#383d41}.list-group-item-success{color:#155724;background-color:#c3e6cb}.list-group-item-success.list-group-item-action:focus,.list-group-item-success.list-group-item-action:hover{color:#155724;background-color:#b1dfbb}.list-group-item-success.list-group-item-action.active{color:#fff;background-color:#155724;border-color:#155724}.list-group-item-info{color:#0c5460;background-color:#bee5eb}.list-group-item-info.list-group-item-action:focus,.list-group-item-info.list-group-item-action:hover{color:#0c5460;background-color:#abdde5}.list-group-item-info.list-group-item-action.active{color:#fff;background-color:#0c5460;border-color:#0c5460}.list-group-item-warning{color:#856404;background-color:#ffeeba}.list-group-item-warning.list-group-item-action:focus,.list-group-item-warning.list-group-item-action:hover{color:#856404;background-color:#ffe8a1}.list-group-item-warning.list-group-item-action.active{color:#fff;background-color:#856404;border-color:#856404}.list-group-item-danger{color:#721c24;background-color:#f5c6cb}.list-group-item-danger.list-group-item-action:focus,.list-group-item-danger.list-group-item-action:hover{color:#721c24;background-color:#f1b0b7}.list-group-item-danger.list-group-item-action.active{color:#fff;background-color:#721c24;border-color:#721c24}.list-group-item-light{color:#818182;background-color:#fdfdfe}.list-group-item-light.list-group-item-action:focus,.list-group-item-light.list-group-item-action:hover{color:#818182;background-color:#ececf6}.list-group-item-light.list-group-item-action.active{color:#fff;background-color:#818182;border-color:#818182}.list-group-item-dark{color:#1b1e21;background-color:#c6c8ca}.list-group-item-dark.list-group-item-action:focus,.list-group-item-dark.list-group-item-action:hover{color:#1b1e21;background-color:#b9bbbe}.list-group-item-dark.list-group-item-action.active{color:#fff;background-color:#1b1e21;border-color:#1b1e21}.close{float:right;font-size:1.5rem;font-weight:700;line-height:1;color:#000;text-shadow:0 1px 0 #fff;opacity:.5}.close:hover{color:#000;text-decoration:none}.close:not(:disabled):not(.disabled):focus,.close:not(:disabled):not(.disabled):hover{opacity:.75}button.close{padding:0;background-color:transparent;border:0}a.close.disabled{pointer-events:none}.toast{max-width:350px;overflow:hidden;font-size:.875rem;background-color:rgba(255,255,255,.85);background-clip:padding-box;border:1px solid rgba(0,0,0,.1);box-shadow:0 .25rem .75rem rgba(0,0,0,.1);-webkit-backdrop-filter:blur(10px);backdrop-filter:blur(10px);opacity:0;border-radius:.25rem}.toast:not(:last-child){margin-bottom:.75rem}.toast.showing{opacity:1}.toast.show{display:block;opacity:1}.toast.hide{display:none}.toast-header{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;padding:.25rem 
.75rem;color:#6c757d;background-color:rgba(255,255,255,.85);background-clip:padding-box;border-bottom:1px solid rgba(0,0,0,.05)}.toast-body{padding:.75rem}.modal-open{overflow:hidden}.modal-open .modal{overflow-x:hidden;overflow-y:auto}.modal{position:fixed;top:0;left:0;z-index:1050;display:none;width:100%;height:100%;overflow:hidden;outline:0}.modal-dialog{position:relative;width:auto;margin:.5rem;pointer-events:none}.modal.fade .modal-dialog{transition:-webkit-transform .3s ease-out;transition:transform .3s ease-out;transition:transform .3s ease-out,-webkit-transform .3s ease-out;-webkit-transform:translate(0,-50px);transform:translate(0,-50px)}@media (prefers-reduced-motion:reduce){.modal.fade .modal-dialog{transition:none}}.modal.show .modal-dialog{-webkit-transform:none;transform:none}.modal.modal-static .modal-dialog{-webkit-transform:scale(1.02);transform:scale(1.02)}.modal-dialog-scrollable{display:-ms-flexbox;display:flex;max-height:calc(100% - 1rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 1rem);overflow:hidden}.modal-dialog-scrollable .modal-footer,.modal-dialog-scrollable .modal-header{-ms-flex-negative:0;flex-shrink:0}.modal-dialog-scrollable .modal-body{overflow-y:auto}.modal-dialog-centered{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;min-height:calc(100% - 1rem)}.modal-dialog-centered::before{display:block;height:calc(100vh - 1rem);height:-webkit-min-content;height:-moz-min-content;height:min-content;content:""}.modal-dialog-centered.modal-dialog-scrollable{-ms-flex-direction:column;flex-direction:column;-ms-flex-pack:center;justify-content:center;height:100%}.modal-dialog-centered.modal-dialog-scrollable .modal-content{max-height:none}.modal-dialog-centered.modal-dialog-scrollable::before{content:none}.modal-content{position:relative;display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;width:100%;pointer-events:auto;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem;outline:0}.modal-backdrop{position:fixed;top:0;left:0;z-index:1040;width:100vw;height:100vh;background-color:#000}.modal-backdrop.fade{opacity:0}.modal-backdrop.show{opacity:.5}.modal-header{display:-ms-flexbox;display:flex;-ms-flex-align:start;align-items:flex-start;-ms-flex-pack:justify;justify-content:space-between;padding:1rem 1rem;border-bottom:1px solid #dee2e6;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.modal-header .close{padding:1rem 1rem;margin:-1rem -1rem -1rem auto}.modal-title{margin-bottom:0;line-height:1.5}.modal-body{position:relative;-ms-flex:1 1 auto;flex:1 1 auto;padding:1rem}.modal-footer{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:end;justify-content:flex-end;padding:.75rem;border-top:1px solid #dee2e6;border-bottom-right-radius:calc(.3rem - 1px);border-bottom-left-radius:calc(.3rem - 1px)}.modal-footer>*{margin:.25rem}.modal-scrollbar-measure{position:absolute;top:-9999px;width:50px;height:50px;overflow:scroll}@media (min-width:576px){.modal-dialog{max-width:500px;margin:1.75rem auto}.modal-dialog-scrollable{max-height:calc(100% - 3.5rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 3.5rem)}.modal-dialog-centered{min-height:calc(100% - 3.5rem)}.modal-dialog-centered::before{height:calc(100vh - 3.5rem);height:-webkit-min-content;height:-moz-min-content;height:min-content}.modal-sm{max-width:300px}}@media 
(min-width:992px){.modal-lg,.modal-xl{max-width:800px}}@media (min-width:1200px){.modal-xl{max-width:1140px}}.tooltip{position:absolute;z-index:1070;display:block;margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;opacity:0}.tooltip.show{opacity:.9}.tooltip .arrow{position:absolute;display:block;width:.8rem;height:.4rem}.tooltip .arrow::before{position:absolute;content:"";border-color:transparent;border-style:solid}.bs-tooltip-auto[x-placement^=top],.bs-tooltip-top{padding:.4rem 0}.bs-tooltip-auto[x-placement^=top] .arrow,.bs-tooltip-top .arrow{bottom:0}.bs-tooltip-auto[x-placement^=top] .arrow::before,.bs-tooltip-top .arrow::before{top:0;border-width:.4rem .4rem 0;border-top-color:#000}.bs-tooltip-auto[x-placement^=right],.bs-tooltip-right{padding:0 .4rem}.bs-tooltip-auto[x-placement^=right] .arrow,.bs-tooltip-right .arrow{left:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=right] .arrow::before,.bs-tooltip-right .arrow::before{right:0;border-width:.4rem .4rem .4rem 0;border-right-color:#000}.bs-tooltip-auto[x-placement^=bottom],.bs-tooltip-bottom{padding:.4rem 0}.bs-tooltip-auto[x-placement^=bottom] .arrow,.bs-tooltip-bottom .arrow{top:0}.bs-tooltip-auto[x-placement^=bottom] .arrow::before,.bs-tooltip-bottom .arrow::before{bottom:0;border-width:0 .4rem .4rem;border-bottom-color:#000}.bs-tooltip-auto[x-placement^=left],.bs-tooltip-left{padding:0 .4rem}.bs-tooltip-auto[x-placement^=left] .arrow,.bs-tooltip-left .arrow{right:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=left] .arrow::before,.bs-tooltip-left .arrow::before{left:0;border-width:.4rem 0 .4rem .4rem;border-left-color:#000}.tooltip-inner{max-width:200px;padding:.25rem .5rem;color:#fff;text-align:center;background-color:#000;border-radius:.25rem}.popover{position:absolute;top:0;left:0;z-index:1060;display:block;max-width:276px;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem}.popover .arrow{position:absolute;display:block;width:1rem;height:.5rem;margin:0 .3rem}.popover .arrow::after,.popover .arrow::before{position:absolute;display:block;content:"";border-color:transparent;border-style:solid}.bs-popover-auto[x-placement^=top],.bs-popover-top{margin-bottom:.5rem}.bs-popover-auto[x-placement^=top]>.arrow,.bs-popover-top>.arrow{bottom:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=top]>.arrow::before,.bs-popover-top>.arrow::before{bottom:0;border-width:.5rem .5rem 0;border-top-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=top]>.arrow::after,.bs-popover-top>.arrow::after{bottom:1px;border-width:.5rem .5rem 
0;border-top-color:#fff}.bs-popover-auto[x-placement^=right],.bs-popover-right{margin-left:.5rem}.bs-popover-auto[x-placement^=right]>.arrow,.bs-popover-right>.arrow{left:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=right]>.arrow::before,.bs-popover-right>.arrow::before{left:0;border-width:.5rem .5rem .5rem 0;border-right-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=right]>.arrow::after,.bs-popover-right>.arrow::after{left:1px;border-width:.5rem .5rem .5rem 0;border-right-color:#fff}.bs-popover-auto[x-placement^=bottom],.bs-popover-bottom{margin-top:.5rem}.bs-popover-auto[x-placement^=bottom]>.arrow,.bs-popover-bottom>.arrow{top:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=bottom]>.arrow::before,.bs-popover-bottom>.arrow::before{top:0;border-width:0 .5rem .5rem .5rem;border-bottom-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=bottom]>.arrow::after,.bs-popover-bottom>.arrow::after{top:1px;border-width:0 .5rem .5rem .5rem;border-bottom-color:#fff}.bs-popover-auto[x-placement^=bottom] .popover-header::before,.bs-popover-bottom .popover-header::before{position:absolute;top:0;left:50%;display:block;width:1rem;margin-left:-.5rem;content:"";border-bottom:1px solid #f7f7f7}.bs-popover-auto[x-placement^=left],.bs-popover-left{margin-right:.5rem}.bs-popover-auto[x-placement^=left]>.arrow,.bs-popover-left>.arrow{right:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=left]>.arrow::before,.bs-popover-left>.arrow::before{right:0;border-width:.5rem 0 .5rem .5rem;border-left-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=left]>.arrow::after,.bs-popover-left>.arrow::after{right:1px;border-width:.5rem 0 .5rem .5rem;border-left-color:#fff}.popover-header{padding:.5rem .75rem;margin-bottom:0;font-size:1rem;background-color:#f7f7f7;border-bottom:1px solid #ebebeb;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.popover-header:empty{display:none}.popover-body{padding:.5rem .75rem;color:#212529}.carousel{position:relative}.carousel.pointer-event{-ms-touch-action:pan-y;touch-action:pan-y}.carousel-inner{position:relative;width:100%;overflow:hidden}.carousel-inner::after{display:block;clear:both;content:""}.carousel-item{position:relative;display:none;float:left;width:100%;margin-right:-100%;-webkit-backface-visibility:hidden;backface-visibility:hidden;transition:-webkit-transform .6s ease-in-out;transition:transform .6s ease-in-out;transition:transform .6s ease-in-out,-webkit-transform .6s ease-in-out}@media (prefers-reduced-motion:reduce){.carousel-item{transition:none}}.carousel-item-next,.carousel-item-prev,.carousel-item.active{display:block}.active.carousel-item-right,.carousel-item-next:not(.carousel-item-left){-webkit-transform:translateX(100%);transform:translateX(100%)}.active.carousel-item-left,.carousel-item-prev:not(.carousel-item-right){-webkit-transform:translateX(-100%);transform:translateX(-100%)}.carousel-fade .carousel-item{opacity:0;transition-property:opacity;-webkit-transform:none;transform:none}.carousel-fade .carousel-item-next.carousel-item-left,.carousel-fade .carousel-item-prev.carousel-item-right,.carousel-fade .carousel-item.active{z-index:1;opacity:1}.carousel-fade .active.carousel-item-left,.carousel-fade .active.carousel-item-right{z-index:0;opacity:0;transition:opacity 0s .6s}@media (prefers-reduced-motion:reduce){.carousel-fade .active.carousel-item-left,.carousel-fade 
.active.carousel-item-right{transition:none}}.carousel-control-next,.carousel-control-prev{position:absolute;top:0;bottom:0;z-index:1;display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;width:15%;color:#fff;text-align:center;opacity:.5;transition:opacity .15s ease}@media (prefers-reduced-motion:reduce){.carousel-control-next,.carousel-control-prev{transition:none}}.carousel-control-next:focus,.carousel-control-next:hover,.carousel-control-prev:focus,.carousel-control-prev:hover{color:#fff;text-decoration:none;outline:0;opacity:.9}.carousel-control-prev{left:0}.carousel-control-next{right:0}.carousel-control-next-icon,.carousel-control-prev-icon{display:inline-block;width:20px;height:20px;background:no-repeat 50%/100% 100%}.carousel-control-prev-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath d='M5.25 0l-4 4 4 4 1.5-1.5L4.25 4l2.5-2.5L5.25 0z'/%3e%3c/svg%3e")}.carousel-control-next-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath d='M2.75 0l-1.5 1.5L3.75 4l-2.5 2.5L2.75 8l4-4-4-4z'/%3e%3c/svg%3e")}.carousel-indicators{position:absolute;right:0;bottom:0;left:0;z-index:15;display:-ms-flexbox;display:flex;-ms-flex-pack:center;justify-content:center;padding-left:0;margin-right:15%;margin-left:15%;list-style:none}.carousel-indicators li{box-sizing:content-box;-ms-flex:0 1 auto;flex:0 1 auto;width:30px;height:3px;margin-right:3px;margin-left:3px;text-indent:-999px;cursor:pointer;background-color:#fff;background-clip:padding-box;border-top:10px solid transparent;border-bottom:10px solid transparent;opacity:.5;transition:opacity .6s ease}@media (prefers-reduced-motion:reduce){.carousel-indicators li{transition:none}}.carousel-indicators .active{opacity:1}.carousel-caption{position:absolute;right:15%;bottom:20px;left:15%;z-index:10;padding-top:20px;padding-bottom:20px;color:#fff;text-align:center}@-webkit-keyframes spinner-border{to{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}@keyframes spinner-border{to{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}.spinner-border{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;border:.25em solid currentColor;border-right-color:transparent;border-radius:50%;-webkit-animation:spinner-border .75s linear infinite;animation:spinner-border .75s linear infinite}.spinner-border-sm{width:1rem;height:1rem;border-width:.2em}@-webkit-keyframes spinner-grow{0%{-webkit-transform:scale(0);transform:scale(0)}50%{opacity:1;-webkit-transform:none;transform:none}}@keyframes spinner-grow{0%{-webkit-transform:scale(0);transform:scale(0)}50%{opacity:1;-webkit-transform:none;transform:none}}.spinner-grow{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;background-color:currentColor;border-radius:50%;opacity:0;-webkit-animation:spinner-grow .75s linear infinite;animation:spinner-grow .75s linear 
infinite}.spinner-grow-sm{width:1rem;height:1rem}.align-baseline{vertical-align:baseline!important}.align-top{vertical-align:top!important}.align-middle{vertical-align:middle!important}.align-bottom{vertical-align:bottom!important}.align-text-bottom{vertical-align:text-bottom!important}.align-text-top{vertical-align:text-top!important}.bg-primary{background-color:#007bff!important}a.bg-primary:focus,a.bg-primary:hover,button.bg-primary:focus,button.bg-primary:hover{background-color:#0062cc!important}.bg-secondary{background-color:#6c757d!important}a.bg-secondary:focus,a.bg-secondary:hover,button.bg-secondary:focus,button.bg-secondary:hover{background-color:#545b62!important}.bg-success{background-color:#28a745!important}a.bg-success:focus,a.bg-success:hover,button.bg-success:focus,button.bg-success:hover{background-color:#1e7e34!important}.bg-info{background-color:#17a2b8!important}a.bg-info:focus,a.bg-info:hover,button.bg-info:focus,button.bg-info:hover{background-color:#117a8b!important}.bg-warning{background-color:#ffc107!important}a.bg-warning:focus,a.bg-warning:hover,button.bg-warning:focus,button.bg-warning:hover{background-color:#d39e00!important}.bg-danger{background-color:#dc3545!important}a.bg-danger:focus,a.bg-danger:hover,button.bg-danger:focus,button.bg-danger:hover{background-color:#bd2130!important}.bg-light{background-color:#f8f9fa!important}a.bg-light:focus,a.bg-light:hover,button.bg-light:focus,button.bg-light:hover{background-color:#dae0e5!important}.bg-dark{background-color:#343a40!important}a.bg-dark:focus,a.bg-dark:hover,button.bg-dark:focus,button.bg-dark:hover{background-color:#1d2124!important}.bg-white{background-color:#fff!important}.bg-transparent{background-color:transparent!important}.border{border:1px solid #dee2e6!important}.border-top{border-top:1px solid #dee2e6!important}.border-right{border-right:1px solid #dee2e6!important}.border-bottom{border-bottom:1px solid #dee2e6!important}.border-left{border-left:1px solid 
#dee2e6!important}.border-0{border:0!important}.border-top-0{border-top:0!important}.border-right-0{border-right:0!important}.border-bottom-0{border-bottom:0!important}.border-left-0{border-left:0!important}.border-primary{border-color:#007bff!important}.border-secondary{border-color:#6c757d!important}.border-success{border-color:#28a745!important}.border-info{border-color:#17a2b8!important}.border-warning{border-color:#ffc107!important}.border-danger{border-color:#dc3545!important}.border-light{border-color:#f8f9fa!important}.border-dark{border-color:#343a40!important}.border-white{border-color:#fff!important}.rounded-sm{border-radius:.2rem!important}.rounded{border-radius:.25rem!important}.rounded-top{border-top-left-radius:.25rem!important;border-top-right-radius:.25rem!important}.rounded-right{border-top-right-radius:.25rem!important;border-bottom-right-radius:.25rem!important}.rounded-bottom{border-bottom-right-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-left{border-top-left-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-lg{border-radius:.3rem!important}.rounded-circle{border-radius:50%!important}.rounded-pill{border-radius:50rem!important}.rounded-0{border-radius:0!important}.clearfix::after{display:block;clear:both;content:""}.d-none{display:none!important}.d-inline{display:inline!important}.d-inline-block{display:inline-block!important}.d-block{display:block!important}.d-table{display:table!important}.d-table-row{display:table-row!important}.d-table-cell{display:table-cell!important}.d-flex{display:-ms-flexbox!important;display:flex!important}.d-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}@media (min-width:576px){.d-sm-none{display:none!important}.d-sm-inline{display:inline!important}.d-sm-inline-block{display:inline-block!important}.d-sm-block{display:block!important}.d-sm-table{display:table!important}.d-sm-table-row{display:table-row!important}.d-sm-table-cell{display:table-cell!important}.d-sm-flex{display:-ms-flexbox!important;display:flex!important}.d-sm-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media (min-width:768px){.d-md-none{display:none!important}.d-md-inline{display:inline!important}.d-md-inline-block{display:inline-block!important}.d-md-block{display:block!important}.d-md-table{display:table!important}.d-md-table-row{display:table-row!important}.d-md-table-cell{display:table-cell!important}.d-md-flex{display:-ms-flexbox!important;display:flex!important}.d-md-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media (min-width:992px){.d-lg-none{display:none!important}.d-lg-inline{display:inline!important}.d-lg-inline-block{display:inline-block!important}.d-lg-block{display:block!important}.d-lg-table{display:table!important}.d-lg-table-row{display:table-row!important}.d-lg-table-cell{display:table-cell!important}.d-lg-flex{display:-ms-flexbox!important;display:flex!important}.d-lg-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media 
(min-width:1200px){.d-xl-none{display:none!important}.d-xl-inline{display:inline!important}.d-xl-inline-block{display:inline-block!important}.d-xl-block{display:block!important}.d-xl-table{display:table!important}.d-xl-table-row{display:table-row!important}.d-xl-table-cell{display:table-cell!important}.d-xl-flex{display:-ms-flexbox!important;display:flex!important}.d-xl-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media print{.d-print-none{display:none!important}.d-print-inline{display:inline!important}.d-print-inline-block{display:inline-block!important}.d-print-block{display:block!important}.d-print-table{display:table!important}.d-print-table-row{display:table-row!important}.d-print-table-cell{display:table-cell!important}.d-print-flex{display:-ms-flexbox!important;display:flex!important}.d-print-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}.embed-responsive{position:relative;display:block;width:100%;padding:0;overflow:hidden}.embed-responsive::before{display:block;content:""}.embed-responsive .embed-responsive-item,.embed-responsive embed,.embed-responsive iframe,.embed-responsive object,.embed-responsive video{position:absolute;top:0;bottom:0;left:0;width:100%;height:100%;border:0}.embed-responsive-21by9::before{padding-top:42.857143%}.embed-responsive-16by9::before{padding-top:56.25%}.embed-responsive-4by3::before{padding-top:75%}.embed-responsive-1by1::before{padding-top:100%}.flex-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-center{-ms-flex-align:center!important;align-items:center!important}.align-items-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}@media (min-width:576px){.flex-sm-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-sm-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-sm-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-sm-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-sm-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-sm-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-sm-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-sm-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-sm-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-sm-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-sm-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-sm-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-sm-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-sm-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-sm-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-sm-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-sm-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-sm-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-sm-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-sm-center{-ms-flex-align:center!important;align-items:center!important}.align-items-sm-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-sm-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-sm-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-sm-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-sm-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-sm-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-sm-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-sm-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-sm-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-sm-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-sm-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-sm-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-sm-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-sm-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:768px){.flex-md-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-md-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-md-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-md-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-md-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-md-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-md-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-md-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-md-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-md-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-md-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-md-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-md-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-md-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-md-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-md-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-md-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-md-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-md-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-md-center{-ms-flex-align:center!important;align-items:center!important}.align-items-md-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-md-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-md-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-md-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-md-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-md-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-md-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-md-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-md-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-md-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-md-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-md-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-md-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-md-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:992px){.flex-lg-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-lg-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-lg-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-lg-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-lg-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-lg-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-lg-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-lg-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-lg-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-lg-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-lg-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-lg-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-lg-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-lg-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-lg-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-lg-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-lg-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-lg-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-lg-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-lg-center{-ms-flex-align:center!important;align-items:center!important}.align-items-lg-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-lg-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-lg-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-lg-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-lg-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-lg-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-lg-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-lg-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-lg-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-lg-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-lg-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-lg-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-lg-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-lg-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:1200px){.flex-xl-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-xl-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-xl-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-xl-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-xl-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-xl-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-xl-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-xl-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-xl-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-xl-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-xl-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-xl-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-xl-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-xl-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-xl-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-xl-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-xl-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-xl-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-xl-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-xl-center{-ms-flex-align:center!important;align-items:center!important}.align-items-xl-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-xl-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-xl-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-xl-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-xl-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-xl-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-xl-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-xl-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-xl-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-xl-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-xl-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-xl-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-xl-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-xl-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}.float-left{float:left!important}.float-right{float:right!important}.float-none{float:none!important}@media (min-width:576px){.float-sm-left{float:left!important}.float-sm-right{float:right!important}.float-sm-none{float:none!important}}@media (min-width:768px){.float-md-left{float:left!important}.float-md-right{float:right!important}.float-md-none{float:none!important}}@media (min-width:992px){.float-lg-left{float:left!important}.float-lg-right{float:right!important}.float-lg-none{float:none!important}}@media 
(min-width:1200px){.float-xl-left{float:left!important}.float-xl-right{float:right!important}.float-xl-none{float:none!important}}.user-select-all{-webkit-user-select:all!important;-moz-user-select:all!important;-ms-user-select:all!important;user-select:all!important}.user-select-auto{-webkit-user-select:auto!important;-moz-user-select:auto!important;-ms-user-select:auto!important;user-select:auto!important}.user-select-none{-webkit-user-select:none!important;-moz-user-select:none!important;-ms-user-select:none!important;user-select:none!important}.overflow-auto{overflow:auto!important}.overflow-hidden{overflow:hidden!important}.position-static{position:static!important}.position-relative{position:relative!important}.position-absolute{position:absolute!important}.position-fixed{position:fixed!important}.position-sticky{position:-webkit-sticky!important;position:sticky!important}.fixed-top{position:fixed;top:0;right:0;left:0;z-index:1030}.fixed-bottom{position:fixed;right:0;bottom:0;left:0;z-index:1030}@supports ((position:-webkit-sticky) or (position:sticky)){.sticky-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);white-space:nowrap;border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;overflow:visible;clip:auto;white-space:normal}.shadow-sm{box-shadow:0 .125rem .25rem rgba(0,0,0,.075)!important}.shadow{box-shadow:0 .5rem 1rem rgba(0,0,0,.15)!important}.shadow-lg{box-shadow:0 1rem 3rem rgba(0,0,0,.175)!important}.shadow-none{box-shadow:none!important}.w-25{width:25%!important}.w-50{width:50%!important}.w-75{width:75%!important}.w-100{width:100%!important}.w-auto{width:auto!important}.h-25{height:25%!important}.h-50{height:50%!important}.h-75{height:75%!important}.h-100{height:100%!important}.h-auto{height:auto!important}.mw-100{max-width:100%!important}.mh-100{max-height:100%!important}.min-vw-100{min-width:100vw!important}.min-vh-100{min-height:100vh!important}.vw-100{width:100vw!important}.vh-100{height:100vh!important}.m-0{margin:0!important}.mt-0,.my-0{margin-top:0!important}.mr-0,.mx-0{margin-right:0!important}.mb-0,.my-0{margin-bottom:0!important}.ml-0,.mx-0{margin-left:0!important}.m-1{margin:.25rem!important}.mt-1,.my-1{margin-top:.25rem!important}.mr-1,.mx-1{margin-right:.25rem!important}.mb-1,.my-1{margin-bottom:.25rem!important}.ml-1,.mx-1{margin-left:.25rem!important}.m-2{margin:.5rem!important}.mt-2,.my-2{margin-top:.5rem!important}.mr-2,.mx-2{margin-right:.5rem!important}.mb-2,.my-2{margin-bottom:.5rem!important}.ml-2,.mx-2{margin-left:.5rem!important}.m-3{margin:1rem!important}.mt-3,.my-3{margin-top:1rem!important}.mr-3,.mx-3{margin-right:1rem!important}.mb-3,.my-3{margin-bottom:1rem!important}.ml-3,.mx-3{margin-left:1rem!important}.m-4{margin:1.5rem!important}.mt-4,.my-4{margin-top:1.5rem!important}.mr-4,.mx-4{margin-right:1.5rem!important}.mb-4,.my-4{margin-bottom:1.5rem!important}.ml-4,.mx-4{margin-left:1.5rem!important}.m-5{margin:3rem!important}.mt-5,.my-5{margin-top:3rem!important}.mr-5,.mx-5{margin-right:3rem!important}.mb-5,.my-5{margin-bottom:3rem!important}.ml-5,.mx-5{margin-left:3rem!important}.p-0{padding:0!important}.pt-0,.py-0{padding-top:0!important}.pr-0,.px-0{padding-right:0!important}.pb-0,.py-0{padding-bottom:0!important}.pl-0,.px-0{padding-left:0!important}.p-1{padding:.25rem!important}.pt-1,.py-1{padding-top:.25rem!important}.pr-1,.px-1{padding-right:.25rem!important}.pb-1,.py-1{padding-botto
m:.25rem!important}.pl-1,.px-1{padding-left:.25rem!important}.p-2{padding:.5rem!important}.pt-2,.py-2{padding-top:.5rem!important}.pr-2,.px-2{padding-right:.5rem!important}.pb-2,.py-2{padding-bottom:.5rem!important}.pl-2,.px-2{padding-left:.5rem!important}.p-3{padding:1rem!important}.pt-3,.py-3{padding-top:1rem!important}.pr-3,.px-3{padding-right:1rem!important}.pb-3,.py-3{padding-bottom:1rem!important}.pl-3,.px-3{padding-left:1rem!important}.p-4{padding:1.5rem!important}.pt-4,.py-4{padding-top:1.5rem!important}.pr-4,.px-4{padding-right:1.5rem!important}.pb-4,.py-4{padding-bottom:1.5rem!important}.pl-4,.px-4{padding-left:1.5rem!important}.p-5{padding:3rem!important}.pt-5,.py-5{padding-top:3rem!important}.pr-5,.px-5{padding-right:3rem!important}.pb-5,.py-5{padding-bottom:3rem!important}.pl-5,.px-5{padding-left:3rem!important}.m-n1{margin:-.25rem!important}.mt-n1,.my-n1{margin-top:-.25rem!important}.mr-n1,.mx-n1{margin-right:-.25rem!important}.mb-n1,.my-n1{margin-bottom:-.25rem!important}.ml-n1,.mx-n1{margin-left:-.25rem!important}.m-n2{margin:-.5rem!important}.mt-n2,.my-n2{margin-top:-.5rem!important}.mr-n2,.mx-n2{margin-right:-.5rem!important}.mb-n2,.my-n2{margin-bottom:-.5rem!important}.ml-n2,.mx-n2{margin-left:-.5rem!important}.m-n3{margin:-1rem!important}.mt-n3,.my-n3{margin-top:-1rem!important}.mr-n3,.mx-n3{margin-right:-1rem!important}.mb-n3,.my-n3{margin-bottom:-1rem!important}.ml-n3,.mx-n3{margin-left:-1rem!important}.m-n4{margin:-1.5rem!important}.mt-n4,.my-n4{margin-top:-1.5rem!important}.mr-n4,.mx-n4{margin-right:-1.5rem!important}.mb-n4,.my-n4{margin-bottom:-1.5rem!important}.ml-n4,.mx-n4{margin-left:-1.5rem!important}.m-n5{margin:-3rem!important}.mt-n5,.my-n5{margin-top:-3rem!important}.mr-n5,.mx-n5{margin-right:-3rem!important}.mb-n5,.my-n5{margin-bottom:-3rem!important}.ml-n5,.mx-n5{margin-left:-3rem!important}.m-auto{margin:auto!important}.mt-auto,.my-auto{margin-top:auto!important}.mr-auto,.mx-auto{margin-right:auto!important}.mb-auto,.my-auto{margin-bottom:auto!important}.ml-auto,.mx-auto{margin-left:auto!important}@media 
(min-width:576px){.m-sm-0{margin:0!important}.mt-sm-0,.my-sm-0{margin-top:0!important}.mr-sm-0,.mx-sm-0{margin-right:0!important}.mb-sm-0,.my-sm-0{margin-bottom:0!important}.ml-sm-0,.mx-sm-0{margin-left:0!important}.m-sm-1{margin:.25rem!important}.mt-sm-1,.my-sm-1{margin-top:.25rem!important}.mr-sm-1,.mx-sm-1{margin-right:.25rem!important}.mb-sm-1,.my-sm-1{margin-bottom:.25rem!important}.ml-sm-1,.mx-sm-1{margin-left:.25rem!important}.m-sm-2{margin:.5rem!important}.mt-sm-2,.my-sm-2{margin-top:.5rem!important}.mr-sm-2,.mx-sm-2{margin-right:.5rem!important}.mb-sm-2,.my-sm-2{margin-bottom:.5rem!important}.ml-sm-2,.mx-sm-2{margin-left:.5rem!important}.m-sm-3{margin:1rem!important}.mt-sm-3,.my-sm-3{margin-top:1rem!important}.mr-sm-3,.mx-sm-3{margin-right:1rem!important}.mb-sm-3,.my-sm-3{margin-bottom:1rem!important}.ml-sm-3,.mx-sm-3{margin-left:1rem!important}.m-sm-4{margin:1.5rem!important}.mt-sm-4,.my-sm-4{margin-top:1.5rem!important}.mr-sm-4,.mx-sm-4{margin-right:1.5rem!important}.mb-sm-4,.my-sm-4{margin-bottom:1.5rem!important}.ml-sm-4,.mx-sm-4{margin-left:1.5rem!important}.m-sm-5{margin:3rem!important}.mt-sm-5,.my-sm-5{margin-top:3rem!important}.mr-sm-5,.mx-sm-5{margin-right:3rem!important}.mb-sm-5,.my-sm-5{margin-bottom:3rem!important}.ml-sm-5,.mx-sm-5{margin-left:3rem!important}.p-sm-0{padding:0!important}.pt-sm-0,.py-sm-0{padding-top:0!important}.pr-sm-0,.px-sm-0{padding-right:0!important}.pb-sm-0,.py-sm-0{padding-bottom:0!important}.pl-sm-0,.px-sm-0{padding-left:0!important}.p-sm-1{padding:.25rem!important}.pt-sm-1,.py-sm-1{padding-top:.25rem!important}.pr-sm-1,.px-sm-1{padding-right:.25rem!important}.pb-sm-1,.py-sm-1{padding-bottom:.25rem!important}.pl-sm-1,.px-sm-1{padding-left:.25rem!important}.p-sm-2{padding:.5rem!important}.pt-sm-2,.py-sm-2{padding-top:.5rem!important}.pr-sm-2,.px-sm-2{padding-right:.5rem!important}.pb-sm-2,.py-sm-2{padding-bottom:.5rem!important}.pl-sm-2,.px-sm-2{padding-left:.5rem!important}.p-sm-3{padding:1rem!important}.pt-sm-3,.py-sm-3{padding-top:1rem!important}.pr-sm-3,.px-sm-3{padding-right:1rem!important}.pb-sm-3,.py-sm-3{padding-bottom:1rem!important}.pl-sm-3,.px-sm-3{padding-left:1rem!important}.p-sm-4{padding:1.5rem!important}.pt-sm-4,.py-sm-4{padding-top:1.5rem!important}.pr-sm-4,.px-sm-4{padding-right:1.5rem!important}.pb-sm-4,.py-sm-4{padding-bottom:1.5rem!important}.pl-sm-4,.px-sm-4{padding-left:1.5rem!important}.p-sm-5{padding:3rem!important}.pt-sm-5,.py-sm-5{padding-top:3rem!important}.pr-sm-5,.px-sm-5{padding-right:3rem!important}.pb-sm-5,.py-sm-5{padding-bottom:3rem!important}.pl-sm-5,.px-sm-5{padding-left:3rem!important}.m-sm-n1{margin:-.25rem!important}.mt-sm-n1,.my-sm-n1{margin-top:-.25rem!important}.mr-sm-n1,.mx-sm-n1{margin-right:-.25rem!important}.mb-sm-n1,.my-sm-n1{margin-bottom:-.25rem!important}.ml-sm-n1,.mx-sm-n1{margin-left:-.25rem!important}.m-sm-n2{margin:-.5rem!important}.mt-sm-n2,.my-sm-n2{margin-top:-.5rem!important}.mr-sm-n2,.mx-sm-n2{margin-right:-.5rem!important}.mb-sm-n2,.my-sm-n2{margin-bottom:-.5rem!important}.ml-sm-n2,.mx-sm-n2{margin-left:-.5rem!important}.m-sm-n3{margin:-1rem!important}.mt-sm-n3,.my-sm-n3{margin-top:-1rem!important}.mr-sm-n3,.mx-sm-n3{margin-right:-1rem!important}.mb-sm-n3,.my-sm-n3{margin-bottom:-1rem!important}.ml-sm-n3,.mx-sm-n3{margin-left:-1rem!important}.m-sm-n4{margin:-1.5rem!important}.mt-sm-n4,.my-sm-n4{margin-top:-1.5rem!important}.mr-sm-n4,.mx-sm-n4{margin-right:-1.5rem!important}.mb-sm-n4,.my-sm-n4{margin-bottom:-1.5rem!important}.ml-sm-n4,.mx-sm-n4{margin-left:-1.5rem!important}.m-sm-n5{margi
n:-3rem!important}.mt-sm-n5,.my-sm-n5{margin-top:-3rem!important}.mr-sm-n5,.mx-sm-n5{margin-right:-3rem!important}.mb-sm-n5,.my-sm-n5{margin-bottom:-3rem!important}.ml-sm-n5,.mx-sm-n5{margin-left:-3rem!important}.m-sm-auto{margin:auto!important}.mt-sm-auto,.my-sm-auto{margin-top:auto!important}.mr-sm-auto,.mx-sm-auto{margin-right:auto!important}.mb-sm-auto,.my-sm-auto{margin-bottom:auto!important}.ml-sm-auto,.mx-sm-auto{margin-left:auto!important}}@media (min-width:768px){.m-md-0{margin:0!important}.mt-md-0,.my-md-0{margin-top:0!important}.mr-md-0,.mx-md-0{margin-right:0!important}.mb-md-0,.my-md-0{margin-bottom:0!important}.ml-md-0,.mx-md-0{margin-left:0!important}.m-md-1{margin:.25rem!important}.mt-md-1,.my-md-1{margin-top:.25rem!important}.mr-md-1,.mx-md-1{margin-right:.25rem!important}.mb-md-1,.my-md-1{margin-bottom:.25rem!important}.ml-md-1,.mx-md-1{margin-left:.25rem!important}.m-md-2{margin:.5rem!important}.mt-md-2,.my-md-2{margin-top:.5rem!important}.mr-md-2,.mx-md-2{margin-right:.5rem!important}.mb-md-2,.my-md-2{margin-bottom:.5rem!important}.ml-md-2,.mx-md-2{margin-left:.5rem!important}.m-md-3{margin:1rem!important}.mt-md-3,.my-md-3{margin-top:1rem!important}.mr-md-3,.mx-md-3{margin-right:1rem!important}.mb-md-3,.my-md-3{margin-bottom:1rem!important}.ml-md-3,.mx-md-3{margin-left:1rem!important}.m-md-4{margin:1.5rem!important}.mt-md-4,.my-md-4{margin-top:1.5rem!important}.mr-md-4,.mx-md-4{margin-right:1.5rem!important}.mb-md-4,.my-md-4{margin-bottom:1.5rem!important}.ml-md-4,.mx-md-4{margin-left:1.5rem!important}.m-md-5{margin:3rem!important}.mt-md-5,.my-md-5{margin-top:3rem!important}.mr-md-5,.mx-md-5{margin-right:3rem!important}.mb-md-5,.my-md-5{margin-bottom:3rem!important}.ml-md-5,.mx-md-5{margin-left:3rem!important}.p-md-0{padding:0!important}.pt-md-0,.py-md-0{padding-top:0!important}.pr-md-0,.px-md-0{padding-right:0!important}.pb-md-0,.py-md-0{padding-bottom:0!important}.pl-md-0,.px-md-0{padding-left:0!important}.p-md-1{padding:.25rem!important}.pt-md-1,.py-md-1{padding-top:.25rem!important}.pr-md-1,.px-md-1{padding-right:.25rem!important}.pb-md-1,.py-md-1{padding-bottom:.25rem!important}.pl-md-1,.px-md-1{padding-left:.25rem!important}.p-md-2{padding:.5rem!important}.pt-md-2,.py-md-2{padding-top:.5rem!important}.pr-md-2,.px-md-2{padding-right:.5rem!important}.pb-md-2,.py-md-2{padding-bottom:.5rem!important}.pl-md-2,.px-md-2{padding-left:.5rem!important}.p-md-3{padding:1rem!important}.pt-md-3,.py-md-3{padding-top:1rem!important}.pr-md-3,.px-md-3{padding-right:1rem!important}.pb-md-3,.py-md-3{padding-bottom:1rem!important}.pl-md-3,.px-md-3{padding-left:1rem!important}.p-md-4{padding:1.5rem!important}.pt-md-4,.py-md-4{padding-top:1.5rem!important}.pr-md-4,.px-md-4{padding-right:1.5rem!important}.pb-md-4,.py-md-4{padding-bottom:1.5rem!important}.pl-md-4,.px-md-4{padding-left:1.5rem!important}.p-md-5{padding:3rem!important}.pt-md-5,.py-md-5{padding-top:3rem!important}.pr-md-5,.px-md-5{padding-right:3rem!important}.pb-md-5,.py-md-5{padding-bottom:3rem!important}.pl-md-5,.px-md-5{padding-left:3rem!important}.m-md-n1{margin:-.25rem!important}.mt-md-n1,.my-md-n1{margin-top:-.25rem!important}.mr-md-n1,.mx-md-n1{margin-right:-.25rem!important}.mb-md-n1,.my-md-n1{margin-bottom:-.25rem!important}.ml-md-n1,.mx-md-n1{margin-left:-.25rem!important}.m-md-n2{margin:-.5rem!important}.mt-md-n2,.my-md-n2{margin-top:-.5rem!important}.mr-md-n2,.mx-md-n2{margin-right:-.5rem!important}.mb-md-n2,.my-md-n2{margin-bottom:-.5rem!important}.ml-md-n2,.mx-md-n2{margin-left:-.5rem!important}.m-md-n3{margin:-
1rem!important}.mt-md-n3,.my-md-n3{margin-top:-1rem!important}.mr-md-n3,.mx-md-n3{margin-right:-1rem!important}.mb-md-n3,.my-md-n3{margin-bottom:-1rem!important}.ml-md-n3,.mx-md-n3{margin-left:-1rem!important}.m-md-n4{margin:-1.5rem!important}.mt-md-n4,.my-md-n4{margin-top:-1.5rem!important}.mr-md-n4,.mx-md-n4{margin-right:-1.5rem!important}.mb-md-n4,.my-md-n4{margin-bottom:-1.5rem!important}.ml-md-n4,.mx-md-n4{margin-left:-1.5rem!important}.m-md-n5{margin:-3rem!important}.mt-md-n5,.my-md-n5{margin-top:-3rem!important}.mr-md-n5,.mx-md-n5{margin-right:-3rem!important}.mb-md-n5,.my-md-n5{margin-bottom:-3rem!important}.ml-md-n5,.mx-md-n5{margin-left:-3rem!important}.m-md-auto{margin:auto!important}.mt-md-auto,.my-md-auto{margin-top:auto!important}.mr-md-auto,.mx-md-auto{margin-right:auto!important}.mb-md-auto,.my-md-auto{margin-bottom:auto!important}.ml-md-auto,.mx-md-auto{margin-left:auto!important}}@media (min-width:992px){.m-lg-0{margin:0!important}.mt-lg-0,.my-lg-0{margin-top:0!important}.mr-lg-0,.mx-lg-0{margin-right:0!important}.mb-lg-0,.my-lg-0{margin-bottom:0!important}.ml-lg-0,.mx-lg-0{margin-left:0!important}.m-lg-1{margin:.25rem!important}.mt-lg-1,.my-lg-1{margin-top:.25rem!important}.mr-lg-1,.mx-lg-1{margin-right:.25rem!important}.mb-lg-1,.my-lg-1{margin-bottom:.25rem!important}.ml-lg-1,.mx-lg-1{margin-left:.25rem!important}.m-lg-2{margin:.5rem!important}.mt-lg-2,.my-lg-2{margin-top:.5rem!important}.mr-lg-2,.mx-lg-2{margin-right:.5rem!important}.mb-lg-2,.my-lg-2{margin-bottom:.5rem!important}.ml-lg-2,.mx-lg-2{margin-left:.5rem!important}.m-lg-3{margin:1rem!important}.mt-lg-3,.my-lg-3{margin-top:1rem!important}.mr-lg-3,.mx-lg-3{margin-right:1rem!important}.mb-lg-3,.my-lg-3{margin-bottom:1rem!important}.ml-lg-3,.mx-lg-3{margin-left:1rem!important}.m-lg-4{margin:1.5rem!important}.mt-lg-4,.my-lg-4{margin-top:1.5rem!important}.mr-lg-4,.mx-lg-4{margin-right:1.5rem!important}.mb-lg-4,.my-lg-4{margin-bottom:1.5rem!important}.ml-lg-4,.mx-lg-4{margin-left:1.5rem!important}.m-lg-5{margin:3rem!important}.mt-lg-5,.my-lg-5{margin-top:3rem!important}.mr-lg-5,.mx-lg-5{margin-right:3rem!important}.mb-lg-5,.my-lg-5{margin-bottom:3rem!important}.ml-lg-5,.mx-lg-5{margin-left:3rem!important}.p-lg-0{padding:0!important}.pt-lg-0,.py-lg-0{padding-top:0!important}.pr-lg-0,.px-lg-0{padding-right:0!important}.pb-lg-0,.py-lg-0{padding-bottom:0!important}.pl-lg-0,.px-lg-0{padding-left:0!important}.p-lg-1{padding:.25rem!important}.pt-lg-1,.py-lg-1{padding-top:.25rem!important}.pr-lg-1,.px-lg-1{padding-right:.25rem!important}.pb-lg-1,.py-lg-1{padding-bottom:.25rem!important}.pl-lg-1,.px-lg-1{padding-left:.25rem!important}.p-lg-2{padding:.5rem!important}.pt-lg-2,.py-lg-2{padding-top:.5rem!important}.pr-lg-2,.px-lg-2{padding-right:.5rem!important}.pb-lg-2,.py-lg-2{padding-bottom:.5rem!important}.pl-lg-2,.px-lg-2{padding-left:.5rem!important}.p-lg-3{padding:1rem!important}.pt-lg-3,.py-lg-3{padding-top:1rem!important}.pr-lg-3,.px-lg-3{padding-right:1rem!important}.pb-lg-3,.py-lg-3{padding-bottom:1rem!important}.pl-lg-3,.px-lg-3{padding-left:1rem!important}.p-lg-4{padding:1.5rem!important}.pt-lg-4,.py-lg-4{padding-top:1.5rem!important}.pr-lg-4,.px-lg-4{padding-right:1.5rem!important}.pb-lg-4,.py-lg-4{padding-bottom:1.5rem!important}.pl-lg-4,.px-lg-4{padding-left:1.5rem!important}.p-lg-5{padding:3rem!important}.pt-lg-5,.py-lg-5{padding-top:3rem!important}.pr-lg-5,.px-lg-5{padding-right:3rem!important}.pb-lg-5,.py-lg-5{padding-bottom:3rem!important}.pl-lg-5,.px-lg-5{padding-left:3rem!important}.m-lg-n1{margin:-.25rem!i
mportant}.mt-lg-n1,.my-lg-n1{margin-top:-.25rem!important}.mr-lg-n1,.mx-lg-n1{margin-right:-.25rem!important}.mb-lg-n1,.my-lg-n1{margin-bottom:-.25rem!important}.ml-lg-n1,.mx-lg-n1{margin-left:-.25rem!important}.m-lg-n2{margin:-.5rem!important}.mt-lg-n2,.my-lg-n2{margin-top:-.5rem!important}.mr-lg-n2,.mx-lg-n2{margin-right:-.5rem!important}.mb-lg-n2,.my-lg-n2{margin-bottom:-.5rem!important}.ml-lg-n2,.mx-lg-n2{margin-left:-.5rem!important}.m-lg-n3{margin:-1rem!important}.mt-lg-n3,.my-lg-n3{margin-top:-1rem!important}.mr-lg-n3,.mx-lg-n3{margin-right:-1rem!important}.mb-lg-n3,.my-lg-n3{margin-bottom:-1rem!important}.ml-lg-n3,.mx-lg-n3{margin-left:-1rem!important}.m-lg-n4{margin:-1.5rem!important}.mt-lg-n4,.my-lg-n4{margin-top:-1.5rem!important}.mr-lg-n4,.mx-lg-n4{margin-right:-1.5rem!important}.mb-lg-n4,.my-lg-n4{margin-bottom:-1.5rem!important}.ml-lg-n4,.mx-lg-n4{margin-left:-1.5rem!important}.m-lg-n5{margin:-3rem!important}.mt-lg-n5,.my-lg-n5{margin-top:-3rem!important}.mr-lg-n5,.mx-lg-n5{margin-right:-3rem!important}.mb-lg-n5,.my-lg-n5{margin-bottom:-3rem!important}.ml-lg-n5,.mx-lg-n5{margin-left:-3rem!important}.m-lg-auto{margin:auto!important}.mt-lg-auto,.my-lg-auto{margin-top:auto!important}.mr-lg-auto,.mx-lg-auto{margin-right:auto!important}.mb-lg-auto,.my-lg-auto{margin-bottom:auto!important}.ml-lg-auto,.mx-lg-auto{margin-left:auto!important}}@media (min-width:1200px){.m-xl-0{margin:0!important}.mt-xl-0,.my-xl-0{margin-top:0!important}.mr-xl-0,.mx-xl-0{margin-right:0!important}.mb-xl-0,.my-xl-0{margin-bottom:0!important}.ml-xl-0,.mx-xl-0{margin-left:0!important}.m-xl-1{margin:.25rem!important}.mt-xl-1,.my-xl-1{margin-top:.25rem!important}.mr-xl-1,.mx-xl-1{margin-right:.25rem!important}.mb-xl-1,.my-xl-1{margin-bottom:.25rem!important}.ml-xl-1,.mx-xl-1{margin-left:.25rem!important}.m-xl-2{margin:.5rem!important}.mt-xl-2,.my-xl-2{margin-top:.5rem!important}.mr-xl-2,.mx-xl-2{margin-right:.5rem!important}.mb-xl-2,.my-xl-2{margin-bottom:.5rem!important}.ml-xl-2,.mx-xl-2{margin-left:.5rem!important}.m-xl-3{margin:1rem!important}.mt-xl-3,.my-xl-3{margin-top:1rem!important}.mr-xl-3,.mx-xl-3{margin-right:1rem!important}.mb-xl-3,.my-xl-3{margin-bottom:1rem!important}.ml-xl-3,.mx-xl-3{margin-left:1rem!important}.m-xl-4{margin:1.5rem!important}.mt-xl-4,.my-xl-4{margin-top:1.5rem!important}.mr-xl-4,.mx-xl-4{margin-right:1.5rem!important}.mb-xl-4,.my-xl-4{margin-bottom:1.5rem!important}.ml-xl-4,.mx-xl-4{margin-left:1.5rem!important}.m-xl-5{margin:3rem!important}.mt-xl-5,.my-xl-5{margin-top:3rem!important}.mr-xl-5,.mx-xl-5{margin-right:3rem!important}.mb-xl-5,.my-xl-5{margin-bottom:3rem!important}.ml-xl-5,.mx-xl-5{margin-left:3rem!important}.p-xl-0{padding:0!important}.pt-xl-0,.py-xl-0{padding-top:0!important}.pr-xl-0,.px-xl-0{padding-right:0!important}.pb-xl-0,.py-xl-0{padding-bottom:0!important}.pl-xl-0,.px-xl-0{padding-left:0!important}.p-xl-1{padding:.25rem!important}.pt-xl-1,.py-xl-1{padding-top:.25rem!important}.pr-xl-1,.px-xl-1{padding-right:.25rem!important}.pb-xl-1,.py-xl-1{padding-bottom:.25rem!important}.pl-xl-1,.px-xl-1{padding-left:.25rem!important}.p-xl-2{padding:.5rem!important}.pt-xl-2,.py-xl-2{padding-top:.5rem!important}.pr-xl-2,.px-xl-2{padding-right:.5rem!important}.pb-xl-2,.py-xl-2{padding-bottom:.5rem!important}.pl-xl-2,.px-xl-2{padding-left:.5rem!important}.p-xl-3{padding:1rem!important}.pt-xl-3,.py-xl-3{padding-top:1rem!important}.pr-xl-3,.px-xl-3{padding-right:1rem!important}.pb-xl-3,.py-xl-3{padding-bottom:1rem!important}.pl-xl-3,.px-xl-3{padding-left:1rem!important}.p-xl-4{p
adding:1.5rem!important}.pt-xl-4,.py-xl-4{padding-top:1.5rem!important}.pr-xl-4,.px-xl-4{padding-right:1.5rem!important}.pb-xl-4,.py-xl-4{padding-bottom:1.5rem!important}.pl-xl-4,.px-xl-4{padding-left:1.5rem!important}.p-xl-5{padding:3rem!important}.pt-xl-5,.py-xl-5{padding-top:3rem!important}.pr-xl-5,.px-xl-5{padding-right:3rem!important}.pb-xl-5,.py-xl-5{padding-bottom:3rem!important}.pl-xl-5,.px-xl-5{padding-left:3rem!important}.m-xl-n1{margin:-.25rem!important}.mt-xl-n1,.my-xl-n1{margin-top:-.25rem!important}.mr-xl-n1,.mx-xl-n1{margin-right:-.25rem!important}.mb-xl-n1,.my-xl-n1{margin-bottom:-.25rem!important}.ml-xl-n1,.mx-xl-n1{margin-left:-.25rem!important}.m-xl-n2{margin:-.5rem!important}.mt-xl-n2,.my-xl-n2{margin-top:-.5rem!important}.mr-xl-n2,.mx-xl-n2{margin-right:-.5rem!important}.mb-xl-n2,.my-xl-n2{margin-bottom:-.5rem!important}.ml-xl-n2,.mx-xl-n2{margin-left:-.5rem!important}.m-xl-n3{margin:-1rem!important}.mt-xl-n3,.my-xl-n3{margin-top:-1rem!important}.mr-xl-n3,.mx-xl-n3{margin-right:-1rem!important}.mb-xl-n3,.my-xl-n3{margin-bottom:-1rem!important}.ml-xl-n3,.mx-xl-n3{margin-left:-1rem!important}.m-xl-n4{margin:-1.5rem!important}.mt-xl-n4,.my-xl-n4{margin-top:-1.5rem!important}.mr-xl-n4,.mx-xl-n4{margin-right:-1.5rem!important}.mb-xl-n4,.my-xl-n4{margin-bottom:-1.5rem!important}.ml-xl-n4,.mx-xl-n4{margin-left:-1.5rem!important}.m-xl-n5{margin:-3rem!important}.mt-xl-n5,.my-xl-n5{margin-top:-3rem!important}.mr-xl-n5,.mx-xl-n5{margin-right:-3rem!important}.mb-xl-n5,.my-xl-n5{margin-bottom:-3rem!important}.ml-xl-n5,.mx-xl-n5{margin-left:-3rem!important}.m-xl-auto{margin:auto!important}.mt-xl-auto,.my-xl-auto{margin-top:auto!important}.mr-xl-auto,.mx-xl-auto{margin-right:auto!important}.mb-xl-auto,.my-xl-auto{margin-bottom:auto!important}.ml-xl-auto,.mx-xl-auto{margin-left:auto!important}}.stretched-link::after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;pointer-events:auto;content:"";background-color:rgba(0,0,0,0)}.text-monospace{font-family:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace!important}.text-justify{text-align:justify!important}.text-wrap{white-space:normal!important}.text-nowrap{white-space:nowrap!important}.text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.text-left{text-align:left!important}.text-right{text-align:right!important}.text-center{text-align:center!important}@media (min-width:576px){.text-sm-left{text-align:left!important}.text-sm-right{text-align:right!important}.text-sm-center{text-align:center!important}}@media (min-width:768px){.text-md-left{text-align:left!important}.text-md-right{text-align:right!important}.text-md-center{text-align:center!important}}@media (min-width:992px){.text-lg-left{text-align:left!important}.text-lg-right{text-align:right!important}.text-lg-center{text-align:center!important}}@media 
(min-width:1200px){.text-xl-left{text-align:left!important}.text-xl-right{text-align:right!important}.text-xl-center{text-align:center!important}}.text-lowercase{text-transform:lowercase!important}.text-uppercase{text-transform:uppercase!important}.text-capitalize{text-transform:capitalize!important}.font-weight-light{font-weight:300!important}.font-weight-lighter{font-weight:lighter!important}.font-weight-normal{font-weight:400!important}.font-weight-bold{font-weight:700!important}.font-weight-bolder{font-weight:bolder!important}.font-italic{font-style:italic!important}.text-white{color:#fff!important}.text-primary{color:#007bff!important}a.text-primary:focus,a.text-primary:hover{color:#0056b3!important}.text-secondary{color:#6c757d!important}a.text-secondary:focus,a.text-secondary:hover{color:#494f54!important}.text-success{color:#28a745!important}a.text-success:focus,a.text-success:hover{color:#19692c!important}.text-info{color:#17a2b8!important}a.text-info:focus,a.text-info:hover{color:#0f6674!important}.text-warning{color:#ffc107!important}a.text-warning:focus,a.text-warning:hover{color:#ba8b00!important}.text-danger{color:#dc3545!important}a.text-danger:focus,a.text-danger:hover{color:#a71d2a!important}.text-light{color:#f8f9fa!important}a.text-light:focus,a.text-light:hover{color:#cbd3da!important}.text-dark{color:#343a40!important}a.text-dark:focus,a.text-dark:hover{color:#121416!important}.text-body{color:#212529!important}.text-muted{color:#6c757d!important}.text-black-50{color:rgba(0,0,0,.5)!important}.text-white-50{color:rgba(255,255,255,.5)!important}.text-hide{font:0/0 a;color:transparent;text-shadow:none;background-color:transparent;border:0}.text-decoration-none{text-decoration:none!important}.text-break{word-wrap:break-word!important}.text-reset{color:inherit!important}.visible{visibility:visible!important}.invisible{visibility:hidden!important}@media print{*,::after,::before{text-shadow:none!important;box-shadow:none!important}a:not(.btn){text-decoration:underline}abbr[title]::after{content:" (" attr(title) ")"}pre{white-space:pre-wrap!important}blockquote,pre{border:1px solid #adb5bd;page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}h2,h3,p{orphans:3;widows:3}h2,h3{page-break-after:avoid}@page{size:a3}body{min-width:992px!important}.container{min-width:992px!important}.navbar{display:none}.badge{border:1px solid #000}.table{border-collapse:collapse!important}.table td,.table th{background-color:#fff!important}.table-bordered td,.table-bordered th{border:1px solid #dee2e6!important}.table-dark{color:inherit}.table-dark tbody+tbody,.table-dark td,.table-dark th,.table-dark thead th{border-color:#dee2e6}.table .thead-dark th{color:inherit;border-color:#dee2e6}} +/*# sourceMappingURL=bootstrap.min.css.map */ \ No newline at end of file diff --git a/stylesheets/bootstrap.min.css b/stylesheets/bootstrap.min.css new file mode 100644 index 00000000..7d2a868f --- /dev/null +++ b/stylesheets/bootstrap.min.css @@ -0,0 +1,7 @@ +/*! + * Bootstrap v4.5.0 (https://getbootstrap.com/) + * Copyright 2011-2020 The Bootstrap Authors + * Copyright 2011-2020 Twitter, Inc. 
+ * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) + */:root{--blue:#007bff;--indigo:#6610f2;--purple:#6f42c1;--pink:#e83e8c;--red:#dc3545;--orange:#fd7e14;--yellow:#ffc107;--green:#28a745;--teal:#20c997;--cyan:#17a2b8;--white:#fff;--gray:#6c757d;--gray-dark:#343a40;--primary:#007bff;--secondary:#6c757d;--success:#28a745;--info:#17a2b8;--warning:#ffc107;--danger:#dc3545;--light:#f8f9fa;--dark:#343a40;--breakpoint-xs:0;--breakpoint-sm:576px;--breakpoint-md:768px;--breakpoint-lg:992px;--breakpoint-xl:1200px;--font-family-sans-serif:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";--font-family-monospace:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace}*,::after,::before{box-sizing:border-box}html{font-family:sans-serif;line-height:1.15;-webkit-text-size-adjust:100%;-webkit-tap-highlight-color:transparent}article,aside,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}body{margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-size:1rem;font-weight:400;line-height:1.5;color:#212529;text-align:left;background-color:#fff}[tabindex="-1"]:focus:not(:focus-visible){outline:0!important}hr{box-sizing:content-box;height:0;overflow:visible}h1,h2,h3,h4,h5,h6{margin-top:0;margin-bottom:.5rem}p{margin-top:0;margin-bottom:1rem}abbr[data-original-title],abbr[title]{text-decoration:underline;-webkit-text-decoration:underline dotted;text-decoration:underline dotted;cursor:help;border-bottom:0;-webkit-text-decoration-skip-ink:none;text-decoration-skip-ink:none}address{margin-bottom:1rem;font-style:normal;line-height:inherit}dl,ol,ul{margin-top:0;margin-bottom:1rem}ol ol,ol ul,ul ol,ul ul{margin-bottom:0}dt{font-weight:700}dd{margin-bottom:.5rem;margin-left:0}blockquote{margin:0 0 1rem}b,strong{font-weight:bolder}small{font-size:80%}sub,sup{position:relative;font-size:75%;line-height:0;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}a{color:#007bff;text-decoration:none;background-color:transparent}a:hover{color:#0056b3;text-decoration:underline}a:not([href]){color:inherit;text-decoration:none}a:not([href]):hover{color:inherit;text-decoration:none}code,kbd,pre,samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;font-size:1em}pre{margin-top:0;margin-bottom:1rem;overflow:auto;-ms-overflow-style:scrollbar}figure{margin:0 0 1rem}img{vertical-align:middle;border-style:none}svg{overflow:hidden;vertical-align:middle}table{border-collapse:collapse}caption{padding-top:.75rem;padding-bottom:.75rem;color:#6c757d;text-align:left;caption-side:bottom}th{text-align:inherit}label{display:inline-block;margin-bottom:.5rem}button{border-radius:0}button:focus{outline:1px dotted;outline:5px auto 
-webkit-focus-ring-color}button,input,optgroup,select,textarea{margin:0;font-family:inherit;font-size:inherit;line-height:inherit}button,input{overflow:visible}button,select{text-transform:none}[role=button]{cursor:pointer}select{word-wrap:normal}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button}[type=button]:not(:disabled),[type=reset]:not(:disabled),[type=submit]:not(:disabled),button:not(:disabled){cursor:pointer}[type=button]::-moz-focus-inner,[type=reset]::-moz-focus-inner,[type=submit]::-moz-focus-inner,button::-moz-focus-inner{padding:0;border-style:none}input[type=checkbox],input[type=radio]{box-sizing:border-box;padding:0}textarea{overflow:auto;resize:vertical}fieldset{min-width:0;padding:0;margin:0;border:0}legend{display:block;width:100%;max-width:100%;padding:0;margin-bottom:.5rem;font-size:1.5rem;line-height:inherit;color:inherit;white-space:normal}progress{vertical-align:baseline}[type=number]::-webkit-inner-spin-button,[type=number]::-webkit-outer-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:none}[type=search]::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}output{display:inline-block}summary{display:list-item;cursor:pointer}template{display:none}[hidden]{display:none!important}.h1,.h2,.h3,.h4,.h5,.h6,h1,h2,h3,h4,h5,h6{margin-bottom:.5rem;font-weight:500;line-height:1.2}.h1,h1{font-size:2.5rem}.h2,h2{font-size:2rem}.h3,h3{font-size:1.75rem}.h4,h4{font-size:1.5rem}.h5,h5{font-size:1.25rem}.h6,h6{font-size:1rem}.lead{font-size:1.25rem;font-weight:300}.display-1{font-size:6rem;font-weight:300;line-height:1.2}.display-2{font-size:5.5rem;font-weight:300;line-height:1.2}.display-3{font-size:4.5rem;font-weight:300;line-height:1.2}.display-4{font-size:3.5rem;font-weight:300;line-height:1.2}hr{margin-top:1rem;margin-bottom:1rem;border:0;border-top:1px solid rgba(0,0,0,.1)}.small,small{font-size:80%;font-weight:400}.mark,mark{padding:.2em;background-color:#fcf8e3}.list-unstyled{padding-left:0;list-style:none}.list-inline{padding-left:0;list-style:none}.list-inline-item{display:inline-block}.list-inline-item:not(:last-child){margin-right:.5rem}.initialism{font-size:90%;text-transform:uppercase}.blockquote{margin-bottom:1rem;font-size:1.25rem}.blockquote-footer{display:block;font-size:80%;color:#6c757d}.blockquote-footer::before{content:"\2014\00A0"}.img-fluid{max-width:100%;height:auto}.img-thumbnail{padding:.25rem;background-color:#fff;border:1px solid #dee2e6;border-radius:.25rem;max-width:100%;height:auto}.figure{display:inline-block}.figure-img{margin-bottom:.5rem;line-height:1}.figure-caption{font-size:90%;color:#6c757d}code{font-size:87.5%;color:#e83e8c;word-wrap:break-word}a>code{color:inherit}kbd{padding:.2rem .4rem;font-size:87.5%;color:#fff;background-color:#212529;border-radius:.2rem}kbd kbd{padding:0;font-size:100%;font-weight:700}pre{display:block;font-size:87.5%;color:#212529}pre code{font-size:inherit;color:inherit;word-break:normal}.pre-scrollable{max-height:340px;overflow-y:scroll}.container{width:100%;padding-right:15px;padding-left:15px;margin-right:auto;margin-left:auto}@media (min-width:576px){.container{max-width:540px}}@media (min-width:768px){.container{max-width:720px}}@media (min-width:992px){.container{max-width:960px}}@media (min-width:1200px){.container{max-width:1140px}}.container-fluid,.container-lg,.container-md,.container-sm,.container-xl{width:100%;padding-right:15px;padding-left:15px;margin-right:auto;margin-left:auto}@media 
.navbar-nav .nav-link{color:rgba(0,0,0,.5)}.navbar-light .navbar-nav .nav-link:focus,.navbar-light .navbar-nav .nav-link:hover{color:rgba(0,0,0,.7)}.navbar-light .navbar-nav .nav-link.disabled{color:rgba(0,0,0,.3)}.navbar-light .navbar-nav .active>.nav-link,.navbar-light .navbar-nav .nav-link.active,.navbar-light .navbar-nav .nav-link.show,.navbar-light .navbar-nav .show>.nav-link{color:rgba(0,0,0,.9)}.navbar-light .navbar-toggler{color:rgba(0,0,0,.5);border-color:rgba(0,0,0,.1)}.navbar-light .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%280, 0, 0, 0.5%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-light .navbar-text{color:rgba(0,0,0,.5)}.navbar-light .navbar-text a{color:rgba(0,0,0,.9)}.navbar-light .navbar-text a:focus,.navbar-light .navbar-text a:hover{color:rgba(0,0,0,.9)}.navbar-dark .navbar-brand{color:#fff}.navbar-dark .navbar-brand:focus,.navbar-dark .navbar-brand:hover{color:#fff}.navbar-dark .navbar-nav .nav-link{color:rgba(255,255,255,.5)}.navbar-dark .navbar-nav .nav-link:focus,.navbar-dark .navbar-nav .nav-link:hover{color:rgba(255,255,255,.75)}.navbar-dark .navbar-nav .nav-link.disabled{color:rgba(255,255,255,.25)}.navbar-dark .navbar-nav .active>.nav-link,.navbar-dark .navbar-nav .nav-link.active,.navbar-dark .navbar-nav .nav-link.show,.navbar-dark .navbar-nav .show>.nav-link{color:#fff}.navbar-dark .navbar-toggler{color:rgba(255,255,255,.5);border-color:rgba(255,255,255,.1)}.navbar-dark .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%28255, 255, 255, 0.5%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-dark .navbar-text{color:rgba(255,255,255,.5)}.navbar-dark .navbar-text a{color:#fff}.navbar-dark .navbar-text a:focus,.navbar-dark .navbar-text a:hover{color:#fff}.card{position:relative;display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;min-width:0;word-wrap:break-word;background-color:#fff;background-clip:border-box;border:1px solid rgba(0,0,0,.125);border-radius:.25rem}.card>hr{margin-right:0;margin-left:0}.card>.list-group{border-top:inherit;border-bottom:inherit}.card>.list-group:first-child{border-top-width:0;border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card>.list-group:last-child{border-bottom-width:0;border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.card-body{-ms-flex:1 1 auto;flex:1 1 auto;min-height:1px;padding:1.25rem}.card-title{margin-bottom:.75rem}.card-subtitle{margin-top:-.375rem;margin-bottom:0}.card-text:last-child{margin-bottom:0}.card-link:hover{text-decoration:none}.card-link+.card-link{margin-left:1.25rem}.card-header{padding:.75rem 1.25rem;margin-bottom:0;background-color:rgba(0,0,0,.03);border-bottom:1px solid rgba(0,0,0,.125)}.card-header:first-child{border-radius:calc(.25rem - 1px) calc(.25rem - 1px) 0 0}.card-header+.list-group .list-group-item:first-child{border-top:0}.card-footer{padding:.75rem 1.25rem;background-color:rgba(0,0,0,.03);border-top:1px solid rgba(0,0,0,.125)}.card-footer:last-child{border-radius:0 0 calc(.25rem - 1px) calc(.25rem - 
1px)}.card-header-tabs{margin-right:-.625rem;margin-bottom:-.75rem;margin-left:-.625rem;border-bottom:0}.card-header-pills{margin-right:-.625rem;margin-left:-.625rem}.card-img-overlay{position:absolute;top:0;right:0;bottom:0;left:0;padding:1.25rem}.card-img,.card-img-bottom,.card-img-top{-ms-flex-negative:0;flex-shrink:0;width:100%}.card-img,.card-img-top{border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card-img,.card-img-bottom{border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.card-deck .card{margin-bottom:15px}@media (min-width:576px){.card-deck{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap;margin-right:-15px;margin-left:-15px}.card-deck .card{-ms-flex:1 0 0%;flex:1 0 0%;margin-right:15px;margin-bottom:0;margin-left:15px}}.card-group>.card{margin-bottom:15px}@media (min-width:576px){.card-group{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap}.card-group>.card{-ms-flex:1 0 0%;flex:1 0 0%;margin-bottom:0}.card-group>.card+.card{margin-left:0;border-left:0}.card-group>.card:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.card-group>.card:not(:last-child) .card-header,.card-group>.card:not(:last-child) .card-img-top{border-top-right-radius:0}.card-group>.card:not(:last-child) .card-footer,.card-group>.card:not(:last-child) .card-img-bottom{border-bottom-right-radius:0}.card-group>.card:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.card-group>.card:not(:first-child) .card-header,.card-group>.card:not(:first-child) .card-img-top{border-top-left-radius:0}.card-group>.card:not(:first-child) .card-footer,.card-group>.card:not(:first-child) .card-img-bottom{border-bottom-left-radius:0}}.card-columns .card{margin-bottom:.75rem}@media (min-width:576px){.card-columns{-webkit-column-count:3;-moz-column-count:3;column-count:3;-webkit-column-gap:1.25rem;-moz-column-gap:1.25rem;column-gap:1.25rem;orphans:1;widows:1}.card-columns .card{display:inline-block;width:100%}}.accordion>.card{overflow:hidden}.accordion>.card:not(:last-of-type){border-bottom:0;border-bottom-right-radius:0;border-bottom-left-radius:0}.accordion>.card:not(:first-of-type){border-top-left-radius:0;border-top-right-radius:0}.accordion>.card>.card-header{border-radius:0;margin-bottom:-1px}.breadcrumb{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;padding:.75rem 1rem;margin-bottom:1rem;list-style:none;background-color:#e9ecef;border-radius:.25rem}.breadcrumb-item{display:-ms-flexbox;display:flex}.breadcrumb-item+.breadcrumb-item{padding-left:.5rem}.breadcrumb-item+.breadcrumb-item::before{display:inline-block;padding-right:.5rem;color:#6c757d;content:"/"}.breadcrumb-item+.breadcrumb-item:hover::before{text-decoration:underline}.breadcrumb-item+.breadcrumb-item:hover::before{text-decoration:none}.breadcrumb-item.active{color:#6c757d}.pagination{display:-ms-flexbox;display:flex;padding-left:0;list-style:none;border-radius:.25rem}.page-link{position:relative;display:block;padding:.5rem .75rem;margin-left:-1px;line-height:1.25;color:#007bff;background-color:#fff;border:1px solid #dee2e6}.page-link:hover{z-index:2;color:#0056b3;text-decoration:none;background-color:#e9ecef;border-color:#dee2e6}.page-link:focus{z-index:3;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.page-item:first-child .page-link{margin-left:0;border-top-left-radius:.25rem;border-bottom-left-radius:.25rem}.page-item:last-child 
.page-link{border-top-right-radius:.25rem;border-bottom-right-radius:.25rem}.page-item.active .page-link{z-index:3;color:#fff;background-color:#007bff;border-color:#007bff}.page-item.disabled .page-link{color:#6c757d;pointer-events:none;cursor:auto;background-color:#fff;border-color:#dee2e6}.pagination-lg .page-link{padding:.75rem 1.5rem;font-size:1.25rem;line-height:1.5}.pagination-lg .page-item:first-child .page-link{border-top-left-radius:.3rem;border-bottom-left-radius:.3rem}.pagination-lg .page-item:last-child .page-link{border-top-right-radius:.3rem;border-bottom-right-radius:.3rem}.pagination-sm .page-link{padding:.25rem .5rem;font-size:.875rem;line-height:1.5}.pagination-sm .page-item:first-child .page-link{border-top-left-radius:.2rem;border-bottom-left-radius:.2rem}.pagination-sm .page-item:last-child .page-link{border-top-right-radius:.2rem;border-bottom-right-radius:.2rem}.badge{display:inline-block;padding:.25em .4em;font-size:75%;font-weight:700;line-height:1;text-align:center;white-space:nowrap;vertical-align:baseline;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.badge{transition:none}}a.badge:focus,a.badge:hover{text-decoration:none}.badge:empty{display:none}.btn .badge{position:relative;top:-1px}.badge-pill{padding-right:.6em;padding-left:.6em;border-radius:10rem}.badge-primary{color:#fff;background-color:#007bff}a.badge-primary:focus,a.badge-primary:hover{color:#fff;background-color:#0062cc}a.badge-primary.focus,a.badge-primary:focus{outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.5)}.badge-secondary{color:#fff;background-color:#6c757d}a.badge-secondary:focus,a.badge-secondary:hover{color:#fff;background-color:#545b62}a.badge-secondary.focus,a.badge-secondary:focus{outline:0;box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.badge-success{color:#fff;background-color:#28a745}a.badge-success:focus,a.badge-success:hover{color:#fff;background-color:#1e7e34}a.badge-success.focus,a.badge-success:focus{outline:0;box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.badge-info{color:#fff;background-color:#17a2b8}a.badge-info:focus,a.badge-info:hover{color:#fff;background-color:#117a8b}a.badge-info.focus,a.badge-info:focus{outline:0;box-shadow:0 0 0 .2rem rgba(23,162,184,.5)}.badge-warning{color:#212529;background-color:#ffc107}a.badge-warning:focus,a.badge-warning:hover{color:#212529;background-color:#d39e00}a.badge-warning.focus,a.badge-warning:focus{outline:0;box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.badge-danger{color:#fff;background-color:#dc3545}a.badge-danger:focus,a.badge-danger:hover{color:#fff;background-color:#bd2130}a.badge-danger.focus,a.badge-danger:focus{outline:0;box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.badge-light{color:#212529;background-color:#f8f9fa}a.badge-light:focus,a.badge-light:hover{color:#212529;background-color:#dae0e5}a.badge-light.focus,a.badge-light:focus{outline:0;box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.badge-dark{color:#fff;background-color:#343a40}a.badge-dark:focus,a.badge-dark:hover{color:#fff;background-color:#1d2124}a.badge-dark.focus,a.badge-dark:focus{outline:0;box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.jumbotron{padding:2rem 1rem;margin-bottom:2rem;background-color:#e9ecef;border-radius:.3rem}@media (min-width:576px){.jumbotron{padding:4rem 2rem}}.jumbotron-fluid{padding-right:0;padding-left:0;border-radius:0}.alert{position:relative;padding:.75rem 1.25rem;margin-bottom:1rem;border:1px solid 
transparent;border-radius:.25rem}.alert-heading{color:inherit}.alert-link{font-weight:700}.alert-dismissible{padding-right:4rem}.alert-dismissible .close{position:absolute;top:0;right:0;padding:.75rem 1.25rem;color:inherit}.alert-primary{color:#004085;background-color:#cce5ff;border-color:#b8daff}.alert-primary hr{border-top-color:#9fcdff}.alert-primary .alert-link{color:#002752}.alert-secondary{color:#383d41;background-color:#e2e3e5;border-color:#d6d8db}.alert-secondary hr{border-top-color:#c8cbcf}.alert-secondary .alert-link{color:#202326}.alert-success{color:#155724;background-color:#d4edda;border-color:#c3e6cb}.alert-success hr{border-top-color:#b1dfbb}.alert-success .alert-link{color:#0b2e13}.alert-info{color:#0c5460;background-color:#d1ecf1;border-color:#bee5eb}.alert-info hr{border-top-color:#abdde5}.alert-info .alert-link{color:#062c33}.alert-warning{color:#856404;background-color:#fff3cd;border-color:#ffeeba}.alert-warning hr{border-top-color:#ffe8a1}.alert-warning .alert-link{color:#533f03}.alert-danger{color:#721c24;background-color:#f8d7da;border-color:#f5c6cb}.alert-danger hr{border-top-color:#f1b0b7}.alert-danger .alert-link{color:#491217}.alert-light{color:#818182;background-color:#fefefe;border-color:#fdfdfe}.alert-light hr{border-top-color:#ececf6}.alert-light .alert-link{color:#686868}.alert-dark{color:#1b1e21;background-color:#d6d8d9;border-color:#c6c8ca}.alert-dark hr{border-top-color:#b9bbbe}.alert-dark .alert-link{color:#040505}@-webkit-keyframes progress-bar-stripes{from{background-position:1rem 0}to{background-position:0 0}}@keyframes progress-bar-stripes{from{background-position:1rem 0}to{background-position:0 0}}.progress{display:-ms-flexbox;display:flex;height:1rem;overflow:hidden;line-height:0;font-size:.75rem;background-color:#e9ecef;border-radius:.25rem}.progress-bar{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;-ms-flex-pack:center;justify-content:center;overflow:hidden;color:#fff;text-align:center;white-space:nowrap;background-color:#007bff;transition:width .6s ease}@media (prefers-reduced-motion:reduce){.progress-bar{transition:none}}.progress-bar-striped{background-image:linear-gradient(45deg,rgba(255,255,255,.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,.15) 50%,rgba(255,255,255,.15) 75%,transparent 75%,transparent);background-size:1rem 1rem}.progress-bar-animated{-webkit-animation:progress-bar-stripes 1s linear infinite;animation:progress-bar-stripes 1s linear infinite}@media (prefers-reduced-motion:reduce){.progress-bar-animated{-webkit-animation:none;animation:none}}.media{display:-ms-flexbox;display:flex;-ms-flex-align:start;align-items:flex-start}.media-body{-ms-flex:1;flex:1}.list-group{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;padding-left:0;margin-bottom:0;border-radius:.25rem}.list-group-item-action{width:100%;color:#495057;text-align:inherit}.list-group-item-action:focus,.list-group-item-action:hover{z-index:1;color:#495057;text-decoration:none;background-color:#f8f9fa}.list-group-item-action:active{color:#212529;background-color:#e9ecef}.list-group-item{position:relative;display:block;padding:.75rem 1.25rem;background-color:#fff;border:1px solid 
rgba(0,0,0,.125)}.list-group-item:first-child{border-top-left-radius:inherit;border-top-right-radius:inherit}.list-group-item:last-child{border-bottom-right-radius:inherit;border-bottom-left-radius:inherit}.list-group-item.disabled,.list-group-item:disabled{color:#6c757d;pointer-events:none;background-color:#fff}.list-group-item.active{z-index:2;color:#fff;background-color:#007bff;border-color:#007bff}.list-group-item+.list-group-item{border-top-width:0}.list-group-item+.list-group-item.active{margin-top:-1px;border-top-width:1px}.list-group-horizontal{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal>.list-group-item.active{margin-top:0}.list-group-horizontal>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}@media (min-width:576px){.list-group-horizontal-sm{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-sm>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-sm>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-sm>.list-group-item.active{margin-top:0}.list-group-horizontal-sm>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-sm>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:768px){.list-group-horizontal-md{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-md>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-md>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-md>.list-group-item.active{margin-top:0}.list-group-horizontal-md>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-md>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:992px){.list-group-horizontal-lg{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-lg>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-lg>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-lg>.list-group-item.active{margin-top:0}.list-group-horizontal-lg>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-lg>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:1200px){.list-group-horizontal-xl{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-xl>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-xl>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-xl>.list-group-item.active{margin-top:0}.list-group-horizontal-xl>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-xl>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}.list-group-flush{border-radius:0}.list-group-flush>.list-group-item{border-width:0 0 
1px}.list-group-flush>.list-group-item:last-child{border-bottom-width:0}.list-group-item-primary{color:#004085;background-color:#b8daff}.list-group-item-primary.list-group-item-action:focus,.list-group-item-primary.list-group-item-action:hover{color:#004085;background-color:#9fcdff}.list-group-item-primary.list-group-item-action.active{color:#fff;background-color:#004085;border-color:#004085}.list-group-item-secondary{color:#383d41;background-color:#d6d8db}.list-group-item-secondary.list-group-item-action:focus,.list-group-item-secondary.list-group-item-action:hover{color:#383d41;background-color:#c8cbcf}.list-group-item-secondary.list-group-item-action.active{color:#fff;background-color:#383d41;border-color:#383d41}.list-group-item-success{color:#155724;background-color:#c3e6cb}.list-group-item-success.list-group-item-action:focus,.list-group-item-success.list-group-item-action:hover{color:#155724;background-color:#b1dfbb}.list-group-item-success.list-group-item-action.active{color:#fff;background-color:#155724;border-color:#155724}.list-group-item-info{color:#0c5460;background-color:#bee5eb}.list-group-item-info.list-group-item-action:focus,.list-group-item-info.list-group-item-action:hover{color:#0c5460;background-color:#abdde5}.list-group-item-info.list-group-item-action.active{color:#fff;background-color:#0c5460;border-color:#0c5460}.list-group-item-warning{color:#856404;background-color:#ffeeba}.list-group-item-warning.list-group-item-action:focus,.list-group-item-warning.list-group-item-action:hover{color:#856404;background-color:#ffe8a1}.list-group-item-warning.list-group-item-action.active{color:#fff;background-color:#856404;border-color:#856404}.list-group-item-danger{color:#721c24;background-color:#f5c6cb}.list-group-item-danger.list-group-item-action:focus,.list-group-item-danger.list-group-item-action:hover{color:#721c24;background-color:#f1b0b7}.list-group-item-danger.list-group-item-action.active{color:#fff;background-color:#721c24;border-color:#721c24}.list-group-item-light{color:#818182;background-color:#fdfdfe}.list-group-item-light.list-group-item-action:focus,.list-group-item-light.list-group-item-action:hover{color:#818182;background-color:#ececf6}.list-group-item-light.list-group-item-action.active{color:#fff;background-color:#818182;border-color:#818182}.list-group-item-dark{color:#1b1e21;background-color:#c6c8ca}.list-group-item-dark.list-group-item-action:focus,.list-group-item-dark.list-group-item-action:hover{color:#1b1e21;background-color:#b9bbbe}.list-group-item-dark.list-group-item-action.active{color:#fff;background-color:#1b1e21;border-color:#1b1e21}.close{float:right;font-size:1.5rem;font-weight:700;line-height:1;color:#000;text-shadow:0 1px 0 #fff;opacity:.5}.close:hover{color:#000;text-decoration:none}.close:not(:disabled):not(.disabled):focus,.close:not(:disabled):not(.disabled):hover{opacity:.75}button.close{padding:0;background-color:transparent;border:0}a.close.disabled{pointer-events:none}.toast{max-width:350px;overflow:hidden;font-size:.875rem;background-color:rgba(255,255,255,.85);background-clip:padding-box;border:1px solid rgba(0,0,0,.1);box-shadow:0 .25rem .75rem rgba(0,0,0,.1);-webkit-backdrop-filter:blur(10px);backdrop-filter:blur(10px);opacity:0;border-radius:.25rem}.toast:not(:last-child){margin-bottom:.75rem}.toast.showing{opacity:1}.toast.show{display:block;opacity:1}.toast.hide{display:none}.toast-header{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;padding:.25rem 
.75rem;color:#6c757d;background-color:rgba(255,255,255,.85);background-clip:padding-box;border-bottom:1px solid rgba(0,0,0,.05)}.toast-body{padding:.75rem}.modal-open{overflow:hidden}.modal-open .modal{overflow-x:hidden;overflow-y:auto}.modal{position:fixed;top:0;left:0;z-index:1050;display:none;width:100%;height:100%;overflow:hidden;outline:0}.modal-dialog{position:relative;width:auto;margin:.5rem;pointer-events:none}.modal.fade .modal-dialog{transition:-webkit-transform .3s ease-out;transition:transform .3s ease-out;transition:transform .3s ease-out,-webkit-transform .3s ease-out;-webkit-transform:translate(0,-50px);transform:translate(0,-50px)}@media (prefers-reduced-motion:reduce){.modal.fade .modal-dialog{transition:none}}.modal.show .modal-dialog{-webkit-transform:none;transform:none}.modal.modal-static .modal-dialog{-webkit-transform:scale(1.02);transform:scale(1.02)}.modal-dialog-scrollable{display:-ms-flexbox;display:flex;max-height:calc(100% - 1rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 1rem);overflow:hidden}.modal-dialog-scrollable .modal-footer,.modal-dialog-scrollable .modal-header{-ms-flex-negative:0;flex-shrink:0}.modal-dialog-scrollable .modal-body{overflow-y:auto}.modal-dialog-centered{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;min-height:calc(100% - 1rem)}.modal-dialog-centered::before{display:block;height:calc(100vh - 1rem);height:-webkit-min-content;height:-moz-min-content;height:min-content;content:""}.modal-dialog-centered.modal-dialog-scrollable{-ms-flex-direction:column;flex-direction:column;-ms-flex-pack:center;justify-content:center;height:100%}.modal-dialog-centered.modal-dialog-scrollable .modal-content{max-height:none}.modal-dialog-centered.modal-dialog-scrollable::before{content:none}.modal-content{position:relative;display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;width:100%;pointer-events:auto;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem;outline:0}.modal-backdrop{position:fixed;top:0;left:0;z-index:1040;width:100vw;height:100vh;background-color:#000}.modal-backdrop.fade{opacity:0}.modal-backdrop.show{opacity:.5}.modal-header{display:-ms-flexbox;display:flex;-ms-flex-align:start;align-items:flex-start;-ms-flex-pack:justify;justify-content:space-between;padding:1rem 1rem;border-bottom:1px solid #dee2e6;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.modal-header .close{padding:1rem 1rem;margin:-1rem -1rem -1rem auto}.modal-title{margin-bottom:0;line-height:1.5}.modal-body{position:relative;-ms-flex:1 1 auto;flex:1 1 auto;padding:1rem}.modal-footer{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:end;justify-content:flex-end;padding:.75rem;border-top:1px solid #dee2e6;border-bottom-right-radius:calc(.3rem - 1px);border-bottom-left-radius:calc(.3rem - 1px)}.modal-footer>*{margin:.25rem}.modal-scrollbar-measure{position:absolute;top:-9999px;width:50px;height:50px;overflow:scroll}@media (min-width:576px){.modal-dialog{max-width:500px;margin:1.75rem auto}.modal-dialog-scrollable{max-height:calc(100% - 3.5rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 3.5rem)}.modal-dialog-centered{min-height:calc(100% - 3.5rem)}.modal-dialog-centered::before{height:calc(100vh - 3.5rem);height:-webkit-min-content;height:-moz-min-content;height:min-content}.modal-sm{max-width:300px}}@media 
(min-width:992px){.modal-lg,.modal-xl{max-width:800px}}@media (min-width:1200px){.modal-xl{max-width:1140px}}.tooltip{position:absolute;z-index:1070;display:block;margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;opacity:0}.tooltip.show{opacity:.9}.tooltip .arrow{position:absolute;display:block;width:.8rem;height:.4rem}.tooltip .arrow::before{position:absolute;content:"";border-color:transparent;border-style:solid}.bs-tooltip-auto[x-placement^=top],.bs-tooltip-top{padding:.4rem 0}.bs-tooltip-auto[x-placement^=top] .arrow,.bs-tooltip-top .arrow{bottom:0}.bs-tooltip-auto[x-placement^=top] .arrow::before,.bs-tooltip-top .arrow::before{top:0;border-width:.4rem .4rem 0;border-top-color:#000}.bs-tooltip-auto[x-placement^=right],.bs-tooltip-right{padding:0 .4rem}.bs-tooltip-auto[x-placement^=right] .arrow,.bs-tooltip-right .arrow{left:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=right] .arrow::before,.bs-tooltip-right .arrow::before{right:0;border-width:.4rem .4rem .4rem 0;border-right-color:#000}.bs-tooltip-auto[x-placement^=bottom],.bs-tooltip-bottom{padding:.4rem 0}.bs-tooltip-auto[x-placement^=bottom] .arrow,.bs-tooltip-bottom .arrow{top:0}.bs-tooltip-auto[x-placement^=bottom] .arrow::before,.bs-tooltip-bottom .arrow::before{bottom:0;border-width:0 .4rem .4rem;border-bottom-color:#000}.bs-tooltip-auto[x-placement^=left],.bs-tooltip-left{padding:0 .4rem}.bs-tooltip-auto[x-placement^=left] .arrow,.bs-tooltip-left .arrow{right:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=left] .arrow::before,.bs-tooltip-left .arrow::before{left:0;border-width:.4rem 0 .4rem .4rem;border-left-color:#000}.tooltip-inner{max-width:200px;padding:.25rem .5rem;color:#fff;text-align:center;background-color:#000;border-radius:.25rem}.popover{position:absolute;top:0;left:0;z-index:1060;display:block;max-width:276px;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem}.popover .arrow{position:absolute;display:block;width:1rem;height:.5rem;margin:0 .3rem}.popover .arrow::after,.popover .arrow::before{position:absolute;display:block;content:"";border-color:transparent;border-style:solid}.bs-popover-auto[x-placement^=top],.bs-popover-top{margin-bottom:.5rem}.bs-popover-auto[x-placement^=top]>.arrow,.bs-popover-top>.arrow{bottom:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=top]>.arrow::before,.bs-popover-top>.arrow::before{bottom:0;border-width:.5rem .5rem 0;border-top-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=top]>.arrow::after,.bs-popover-top>.arrow::after{bottom:1px;border-width:.5rem .5rem 
0;border-top-color:#fff}.bs-popover-auto[x-placement^=right],.bs-popover-right{margin-left:.5rem}.bs-popover-auto[x-placement^=right]>.arrow,.bs-popover-right>.arrow{left:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=right]>.arrow::before,.bs-popover-right>.arrow::before{left:0;border-width:.5rem .5rem .5rem 0;border-right-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=right]>.arrow::after,.bs-popover-right>.arrow::after{left:1px;border-width:.5rem .5rem .5rem 0;border-right-color:#fff}.bs-popover-auto[x-placement^=bottom],.bs-popover-bottom{margin-top:.5rem}.bs-popover-auto[x-placement^=bottom]>.arrow,.bs-popover-bottom>.arrow{top:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=bottom]>.arrow::before,.bs-popover-bottom>.arrow::before{top:0;border-width:0 .5rem .5rem .5rem;border-bottom-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=bottom]>.arrow::after,.bs-popover-bottom>.arrow::after{top:1px;border-width:0 .5rem .5rem .5rem;border-bottom-color:#fff}.bs-popover-auto[x-placement^=bottom] .popover-header::before,.bs-popover-bottom .popover-header::before{position:absolute;top:0;left:50%;display:block;width:1rem;margin-left:-.5rem;content:"";border-bottom:1px solid #f7f7f7}.bs-popover-auto[x-placement^=left],.bs-popover-left{margin-right:.5rem}.bs-popover-auto[x-placement^=left]>.arrow,.bs-popover-left>.arrow{right:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=left]>.arrow::before,.bs-popover-left>.arrow::before{right:0;border-width:.5rem 0 .5rem .5rem;border-left-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=left]>.arrow::after,.bs-popover-left>.arrow::after{right:1px;border-width:.5rem 0 .5rem .5rem;border-left-color:#fff}.popover-header{padding:.5rem .75rem;margin-bottom:0;font-size:1rem;background-color:#f7f7f7;border-bottom:1px solid #ebebeb;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.popover-header:empty{display:none}.popover-body{padding:.5rem .75rem;color:#212529}.carousel{position:relative}.carousel.pointer-event{-ms-touch-action:pan-y;touch-action:pan-y}.carousel-inner{position:relative;width:100%;overflow:hidden}.carousel-inner::after{display:block;clear:both;content:""}.carousel-item{position:relative;display:none;float:left;width:100%;margin-right:-100%;-webkit-backface-visibility:hidden;backface-visibility:hidden;transition:-webkit-transform .6s ease-in-out;transition:transform .6s ease-in-out;transition:transform .6s ease-in-out,-webkit-transform .6s ease-in-out}@media (prefers-reduced-motion:reduce){.carousel-item{transition:none}}.carousel-item-next,.carousel-item-prev,.carousel-item.active{display:block}.active.carousel-item-right,.carousel-item-next:not(.carousel-item-left){-webkit-transform:translateX(100%);transform:translateX(100%)}.active.carousel-item-left,.carousel-item-prev:not(.carousel-item-right){-webkit-transform:translateX(-100%);transform:translateX(-100%)}.carousel-fade .carousel-item{opacity:0;transition-property:opacity;-webkit-transform:none;transform:none}.carousel-fade .carousel-item-next.carousel-item-left,.carousel-fade .carousel-item-prev.carousel-item-right,.carousel-fade .carousel-item.active{z-index:1;opacity:1}.carousel-fade .active.carousel-item-left,.carousel-fade .active.carousel-item-right{z-index:0;opacity:0;transition:opacity 0s .6s}@media (prefers-reduced-motion:reduce){.carousel-fade .active.carousel-item-left,.carousel-fade 
.active.carousel-item-right{transition:none}}.carousel-control-next,.carousel-control-prev{position:absolute;top:0;bottom:0;z-index:1;display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;width:15%;color:#fff;text-align:center;opacity:.5;transition:opacity .15s ease}@media (prefers-reduced-motion:reduce){.carousel-control-next,.carousel-control-prev{transition:none}}.carousel-control-next:focus,.carousel-control-next:hover,.carousel-control-prev:focus,.carousel-control-prev:hover{color:#fff;text-decoration:none;outline:0;opacity:.9}.carousel-control-prev{left:0}.carousel-control-next{right:0}.carousel-control-next-icon,.carousel-control-prev-icon{display:inline-block;width:20px;height:20px;background:no-repeat 50%/100% 100%}.carousel-control-prev-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath d='M5.25 0l-4 4 4 4 1.5-1.5L4.25 4l2.5-2.5L5.25 0z'/%3e%3c/svg%3e")}.carousel-control-next-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath d='M2.75 0l-1.5 1.5L3.75 4l-2.5 2.5L2.75 8l4-4-4-4z'/%3e%3c/svg%3e")}.carousel-indicators{position:absolute;right:0;bottom:0;left:0;z-index:15;display:-ms-flexbox;display:flex;-ms-flex-pack:center;justify-content:center;padding-left:0;margin-right:15%;margin-left:15%;list-style:none}.carousel-indicators li{box-sizing:content-box;-ms-flex:0 1 auto;flex:0 1 auto;width:30px;height:3px;margin-right:3px;margin-left:3px;text-indent:-999px;cursor:pointer;background-color:#fff;background-clip:padding-box;border-top:10px solid transparent;border-bottom:10px solid transparent;opacity:.5;transition:opacity .6s ease}@media (prefers-reduced-motion:reduce){.carousel-indicators li{transition:none}}.carousel-indicators .active{opacity:1}.carousel-caption{position:absolute;right:15%;bottom:20px;left:15%;z-index:10;padding-top:20px;padding-bottom:20px;color:#fff;text-align:center}@-webkit-keyframes spinner-border{to{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}@keyframes spinner-border{to{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}.spinner-border{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;border:.25em solid currentColor;border-right-color:transparent;border-radius:50%;-webkit-animation:spinner-border .75s linear infinite;animation:spinner-border .75s linear infinite}.spinner-border-sm{width:1rem;height:1rem;border-width:.2em}@-webkit-keyframes spinner-grow{0%{-webkit-transform:scale(0);transform:scale(0)}50%{opacity:1;-webkit-transform:none;transform:none}}@keyframes spinner-grow{0%{-webkit-transform:scale(0);transform:scale(0)}50%{opacity:1;-webkit-transform:none;transform:none}}.spinner-grow{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;background-color:currentColor;border-radius:50%;opacity:0;-webkit-animation:spinner-grow .75s linear infinite;animation:spinner-grow .75s linear 
infinite}.spinner-grow-sm{width:1rem;height:1rem}.align-baseline{vertical-align:baseline!important}.align-top{vertical-align:top!important}.align-middle{vertical-align:middle!important}.align-bottom{vertical-align:bottom!important}.align-text-bottom{vertical-align:text-bottom!important}.align-text-top{vertical-align:text-top!important}.bg-primary{background-color:#007bff!important}a.bg-primary:focus,a.bg-primary:hover,button.bg-primary:focus,button.bg-primary:hover{background-color:#0062cc!important}.bg-secondary{background-color:#6c757d!important}a.bg-secondary:focus,a.bg-secondary:hover,button.bg-secondary:focus,button.bg-secondary:hover{background-color:#545b62!important}.bg-success{background-color:#28a745!important}a.bg-success:focus,a.bg-success:hover,button.bg-success:focus,button.bg-success:hover{background-color:#1e7e34!important}.bg-info{background-color:#17a2b8!important}a.bg-info:focus,a.bg-info:hover,button.bg-info:focus,button.bg-info:hover{background-color:#117a8b!important}.bg-warning{background-color:#ffc107!important}a.bg-warning:focus,a.bg-warning:hover,button.bg-warning:focus,button.bg-warning:hover{background-color:#d39e00!important}.bg-danger{background-color:#dc3545!important}a.bg-danger:focus,a.bg-danger:hover,button.bg-danger:focus,button.bg-danger:hover{background-color:#bd2130!important}.bg-light{background-color:#f8f9fa!important}a.bg-light:focus,a.bg-light:hover,button.bg-light:focus,button.bg-light:hover{background-color:#dae0e5!important}.bg-dark{background-color:#343a40!important}a.bg-dark:focus,a.bg-dark:hover,button.bg-dark:focus,button.bg-dark:hover{background-color:#1d2124!important}.bg-white{background-color:#fff!important}.bg-transparent{background-color:transparent!important}.border{border:1px solid #dee2e6!important}.border-top{border-top:1px solid #dee2e6!important}.border-right{border-right:1px solid #dee2e6!important}.border-bottom{border-bottom:1px solid #dee2e6!important}.border-left{border-left:1px solid 
#dee2e6!important}.border-0{border:0!important}.border-top-0{border-top:0!important}.border-right-0{border-right:0!important}.border-bottom-0{border-bottom:0!important}.border-left-0{border-left:0!important}.border-primary{border-color:#007bff!important}.border-secondary{border-color:#6c757d!important}.border-success{border-color:#28a745!important}.border-info{border-color:#17a2b8!important}.border-warning{border-color:#ffc107!important}.border-danger{border-color:#dc3545!important}.border-light{border-color:#f8f9fa!important}.border-dark{border-color:#343a40!important}.border-white{border-color:#fff!important}.rounded-sm{border-radius:.2rem!important}.rounded{border-radius:.25rem!important}.rounded-top{border-top-left-radius:.25rem!important;border-top-right-radius:.25rem!important}.rounded-right{border-top-right-radius:.25rem!important;border-bottom-right-radius:.25rem!important}.rounded-bottom{border-bottom-right-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-left{border-top-left-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-lg{border-radius:.3rem!important}.rounded-circle{border-radius:50%!important}.rounded-pill{border-radius:50rem!important}.rounded-0{border-radius:0!important}.clearfix::after{display:block;clear:both;content:""}.d-none{display:none!important}.d-inline{display:inline!important}.d-inline-block{display:inline-block!important}.d-block{display:block!important}.d-table{display:table!important}.d-table-row{display:table-row!important}.d-table-cell{display:table-cell!important}.d-flex{display:-ms-flexbox!important;display:flex!important}.d-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}@media (min-width:576px){.d-sm-none{display:none!important}.d-sm-inline{display:inline!important}.d-sm-inline-block{display:inline-block!important}.d-sm-block{display:block!important}.d-sm-table{display:table!important}.d-sm-table-row{display:table-row!important}.d-sm-table-cell{display:table-cell!important}.d-sm-flex{display:-ms-flexbox!important;display:flex!important}.d-sm-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media (min-width:768px){.d-md-none{display:none!important}.d-md-inline{display:inline!important}.d-md-inline-block{display:inline-block!important}.d-md-block{display:block!important}.d-md-table{display:table!important}.d-md-table-row{display:table-row!important}.d-md-table-cell{display:table-cell!important}.d-md-flex{display:-ms-flexbox!important;display:flex!important}.d-md-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media (min-width:992px){.d-lg-none{display:none!important}.d-lg-inline{display:inline!important}.d-lg-inline-block{display:inline-block!important}.d-lg-block{display:block!important}.d-lg-table{display:table!important}.d-lg-table-row{display:table-row!important}.d-lg-table-cell{display:table-cell!important}.d-lg-flex{display:-ms-flexbox!important;display:flex!important}.d-lg-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media 
(min-width:1200px){.d-xl-none{display:none!important}.d-xl-inline{display:inline!important}.d-xl-inline-block{display:inline-block!important}.d-xl-block{display:block!important}.d-xl-table{display:table!important}.d-xl-table-row{display:table-row!important}.d-xl-table-cell{display:table-cell!important}.d-xl-flex{display:-ms-flexbox!important;display:flex!important}.d-xl-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media print{.d-print-none{display:none!important}.d-print-inline{display:inline!important}.d-print-inline-block{display:inline-block!important}.d-print-block{display:block!important}.d-print-table{display:table!important}.d-print-table-row{display:table-row!important}.d-print-table-cell{display:table-cell!important}.d-print-flex{display:-ms-flexbox!important;display:flex!important}.d-print-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}.embed-responsive{position:relative;display:block;width:100%;padding:0;overflow:hidden}.embed-responsive::before{display:block;content:""}.embed-responsive .embed-responsive-item,.embed-responsive embed,.embed-responsive iframe,.embed-responsive object,.embed-responsive video{position:absolute;top:0;bottom:0;left:0;width:100%;height:100%;border:0}.embed-responsive-21by9::before{padding-top:42.857143%}.embed-responsive-16by9::before{padding-top:56.25%}.embed-responsive-4by3::before{padding-top:75%}.embed-responsive-1by1::before{padding-top:100%}.flex-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-center{-ms-flex-align:center!important;align-items:center!important}.align-items-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}@media (min-width:576px){.flex-sm-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-sm-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-sm-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-sm-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-sm-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-sm-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-sm-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-sm-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-sm-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-sm-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-sm-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-sm-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-sm-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-sm-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-sm-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-sm-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-sm-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-sm-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-sm-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-sm-center{-ms-flex-align:center!important;align-items:center!important}.align-items-sm-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-sm-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-sm-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-sm-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-sm-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-sm-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-sm-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-sm-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-sm-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-sm-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-sm-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-sm-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-sm-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-sm-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:768px){.flex-md-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-md-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-md-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-md-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-md-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-md-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-md-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-md-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-md-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-md-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-md-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-md-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-md-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-md-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-md-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-md-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-md-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-md-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-md-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-md-center{-ms-flex-align:center!important;align-items:center!important}.align-items-md-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-md-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-md-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-md-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-md-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-md-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-md-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-md-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-md-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-md-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-md-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-md-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-md-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-md-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:992px){.flex-lg-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-lg-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-lg-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-lg-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-lg-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-lg-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-lg-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-lg-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-lg-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-lg-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-lg-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-lg-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-lg-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-lg-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-lg-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-lg-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-lg-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-lg-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-lg-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-lg-center{-ms-flex-align:center!important;align-items:center!important}.align-items-lg-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-lg-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-lg-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-lg-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-lg-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-lg-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-lg-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-lg-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-lg-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-lg-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-lg-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-lg-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-lg-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-lg-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:1200px){.flex-xl-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-xl-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-xl-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-xl-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-xl-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-xl-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-xl-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-xl-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-xl-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-xl-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-xl-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-xl-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-xl-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-xl-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-xl-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-xl-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-xl-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-xl-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-xl-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-xl-center{-ms-flex-align:center!important;align-items:center!important}.align-items-xl-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-xl-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-xl-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-xl-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-xl-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-xl-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-xl-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-xl-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-xl-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-xl-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-xl-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-xl-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-xl-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-xl-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}.float-left{float:left!important}.float-right{float:right!important}.float-none{float:none!important}@media (min-width:576px){.float-sm-left{float:left!important}.float-sm-right{float:right!important}.float-sm-none{float:none!important}}@media (min-width:768px){.float-md-left{float:left!important}.float-md-right{float:right!important}.float-md-none{float:none!important}}@media (min-width:992px){.float-lg-left{float:left!important}.float-lg-right{float:right!important}.float-lg-none{float:none!important}}@media 
(min-width:1200px){.float-xl-left{float:left!important}.float-xl-right{float:right!important}.float-xl-none{float:none!important}}.user-select-all{-webkit-user-select:all!important;-moz-user-select:all!important;-ms-user-select:all!important;user-select:all!important}.user-select-auto{-webkit-user-select:auto!important;-moz-user-select:auto!important;-ms-user-select:auto!important;user-select:auto!important}.user-select-none{-webkit-user-select:none!important;-moz-user-select:none!important;-ms-user-select:none!important;user-select:none!important}.overflow-auto{overflow:auto!important}.overflow-hidden{overflow:hidden!important}.position-static{position:static!important}.position-relative{position:relative!important}.position-absolute{position:absolute!important}.position-fixed{position:fixed!important}.position-sticky{position:-webkit-sticky!important;position:sticky!important}.fixed-top{position:fixed;top:0;right:0;left:0;z-index:1030}.fixed-bottom{position:fixed;right:0;bottom:0;left:0;z-index:1030}@supports ((position:-webkit-sticky) or (position:sticky)){.sticky-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);white-space:nowrap;border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;overflow:visible;clip:auto;white-space:normal}.shadow-sm{box-shadow:0 .125rem .25rem rgba(0,0,0,.075)!important}.shadow{box-shadow:0 .5rem 1rem rgba(0,0,0,.15)!important}.shadow-lg{box-shadow:0 1rem 3rem rgba(0,0,0,.175)!important}.shadow-none{box-shadow:none!important}.w-25{width:25%!important}.w-50{width:50%!important}.w-75{width:75%!important}.w-100{width:100%!important}.w-auto{width:auto!important}.h-25{height:25%!important}.h-50{height:50%!important}.h-75{height:75%!important}.h-100{height:100%!important}.h-auto{height:auto!important}.mw-100{max-width:100%!important}.mh-100{max-height:100%!important}.min-vw-100{min-width:100vw!important}.min-vh-100{min-height:100vh!important}.vw-100{width:100vw!important}.vh-100{height:100vh!important}.m-0{margin:0!important}.mt-0,.my-0{margin-top:0!important}.mr-0,.mx-0{margin-right:0!important}.mb-0,.my-0{margin-bottom:0!important}.ml-0,.mx-0{margin-left:0!important}.m-1{margin:.25rem!important}.mt-1,.my-1{margin-top:.25rem!important}.mr-1,.mx-1{margin-right:.25rem!important}.mb-1,.my-1{margin-bottom:.25rem!important}.ml-1,.mx-1{margin-left:.25rem!important}.m-2{margin:.5rem!important}.mt-2,.my-2{margin-top:.5rem!important}.mr-2,.mx-2{margin-right:.5rem!important}.mb-2,.my-2{margin-bottom:.5rem!important}.ml-2,.mx-2{margin-left:.5rem!important}.m-3{margin:1rem!important}.mt-3,.my-3{margin-top:1rem!important}.mr-3,.mx-3{margin-right:1rem!important}.mb-3,.my-3{margin-bottom:1rem!important}.ml-3,.mx-3{margin-left:1rem!important}.m-4{margin:1.5rem!important}.mt-4,.my-4{margin-top:1.5rem!important}.mr-4,.mx-4{margin-right:1.5rem!important}.mb-4,.my-4{margin-bottom:1.5rem!important}.ml-4,.mx-4{margin-left:1.5rem!important}.m-5{margin:3rem!important}.mt-5,.my-5{margin-top:3rem!important}.mr-5,.mx-5{margin-right:3rem!important}.mb-5,.my-5{margin-bottom:3rem!important}.ml-5,.mx-5{margin-left:3rem!important}.p-0{padding:0!important}.pt-0,.py-0{padding-top:0!important}.pr-0,.px-0{padding-right:0!important}.pb-0,.py-0{padding-bottom:0!important}.pl-0,.px-0{padding-left:0!important}.p-1{padding:.25rem!important}.pt-1,.py-1{padding-top:.25rem!important}.pr-1,.px-1{padding-right:.25rem!important}.pb-1,.py-1{padding-botto
m:.25rem!important}.pl-1,.px-1{padding-left:.25rem!important}.p-2{padding:.5rem!important}.pt-2,.py-2{padding-top:.5rem!important}.pr-2,.px-2{padding-right:.5rem!important}.pb-2,.py-2{padding-bottom:.5rem!important}.pl-2,.px-2{padding-left:.5rem!important}.p-3{padding:1rem!important}.pt-3,.py-3{padding-top:1rem!important}.pr-3,.px-3{padding-right:1rem!important}.pb-3,.py-3{padding-bottom:1rem!important}.pl-3,.px-3{padding-left:1rem!important}.p-4{padding:1.5rem!important}.pt-4,.py-4{padding-top:1.5rem!important}.pr-4,.px-4{padding-right:1.5rem!important}.pb-4,.py-4{padding-bottom:1.5rem!important}.pl-4,.px-4{padding-left:1.5rem!important}.p-5{padding:3rem!important}.pt-5,.py-5{padding-top:3rem!important}.pr-5,.px-5{padding-right:3rem!important}.pb-5,.py-5{padding-bottom:3rem!important}.pl-5,.px-5{padding-left:3rem!important}.m-n1{margin:-.25rem!important}.mt-n1,.my-n1{margin-top:-.25rem!important}.mr-n1,.mx-n1{margin-right:-.25rem!important}.mb-n1,.my-n1{margin-bottom:-.25rem!important}.ml-n1,.mx-n1{margin-left:-.25rem!important}.m-n2{margin:-.5rem!important}.mt-n2,.my-n2{margin-top:-.5rem!important}.mr-n2,.mx-n2{margin-right:-.5rem!important}.mb-n2,.my-n2{margin-bottom:-.5rem!important}.ml-n2,.mx-n2{margin-left:-.5rem!important}.m-n3{margin:-1rem!important}.mt-n3,.my-n3{margin-top:-1rem!important}.mr-n3,.mx-n3{margin-right:-1rem!important}.mb-n3,.my-n3{margin-bottom:-1rem!important}.ml-n3,.mx-n3{margin-left:-1rem!important}.m-n4{margin:-1.5rem!important}.mt-n4,.my-n4{margin-top:-1.5rem!important}.mr-n4,.mx-n4{margin-right:-1.5rem!important}.mb-n4,.my-n4{margin-bottom:-1.5rem!important}.ml-n4,.mx-n4{margin-left:-1.5rem!important}.m-n5{margin:-3rem!important}.mt-n5,.my-n5{margin-top:-3rem!important}.mr-n5,.mx-n5{margin-right:-3rem!important}.mb-n5,.my-n5{margin-bottom:-3rem!important}.ml-n5,.mx-n5{margin-left:-3rem!important}.m-auto{margin:auto!important}.mt-auto,.my-auto{margin-top:auto!important}.mr-auto,.mx-auto{margin-right:auto!important}.mb-auto,.my-auto{margin-bottom:auto!important}.ml-auto,.mx-auto{margin-left:auto!important}@media 
(min-width:576px){.m-sm-0{margin:0!important}.mt-sm-0,.my-sm-0{margin-top:0!important}.mr-sm-0,.mx-sm-0{margin-right:0!important}.mb-sm-0,.my-sm-0{margin-bottom:0!important}.ml-sm-0,.mx-sm-0{margin-left:0!important}.m-sm-1{margin:.25rem!important}.mt-sm-1,.my-sm-1{margin-top:.25rem!important}.mr-sm-1,.mx-sm-1{margin-right:.25rem!important}.mb-sm-1,.my-sm-1{margin-bottom:.25rem!important}.ml-sm-1,.mx-sm-1{margin-left:.25rem!important}.m-sm-2{margin:.5rem!important}.mt-sm-2,.my-sm-2{margin-top:.5rem!important}.mr-sm-2,.mx-sm-2{margin-right:.5rem!important}.mb-sm-2,.my-sm-2{margin-bottom:.5rem!important}.ml-sm-2,.mx-sm-2{margin-left:.5rem!important}.m-sm-3{margin:1rem!important}.mt-sm-3,.my-sm-3{margin-top:1rem!important}.mr-sm-3,.mx-sm-3{margin-right:1rem!important}.mb-sm-3,.my-sm-3{margin-bottom:1rem!important}.ml-sm-3,.mx-sm-3{margin-left:1rem!important}.m-sm-4{margin:1.5rem!important}.mt-sm-4,.my-sm-4{margin-top:1.5rem!important}.mr-sm-4,.mx-sm-4{margin-right:1.5rem!important}.mb-sm-4,.my-sm-4{margin-bottom:1.5rem!important}.ml-sm-4,.mx-sm-4{margin-left:1.5rem!important}.m-sm-5{margin:3rem!important}.mt-sm-5,.my-sm-5{margin-top:3rem!important}.mr-sm-5,.mx-sm-5{margin-right:3rem!important}.mb-sm-5,.my-sm-5{margin-bottom:3rem!important}.ml-sm-5,.mx-sm-5{margin-left:3rem!important}.p-sm-0{padding:0!important}.pt-sm-0,.py-sm-0{padding-top:0!important}.pr-sm-0,.px-sm-0{padding-right:0!important}.pb-sm-0,.py-sm-0{padding-bottom:0!important}.pl-sm-0,.px-sm-0{padding-left:0!important}.p-sm-1{padding:.25rem!important}.pt-sm-1,.py-sm-1{padding-top:.25rem!important}.pr-sm-1,.px-sm-1{padding-right:.25rem!important}.pb-sm-1,.py-sm-1{padding-bottom:.25rem!important}.pl-sm-1,.px-sm-1{padding-left:.25rem!important}.p-sm-2{padding:.5rem!important}.pt-sm-2,.py-sm-2{padding-top:.5rem!important}.pr-sm-2,.px-sm-2{padding-right:.5rem!important}.pb-sm-2,.py-sm-2{padding-bottom:.5rem!important}.pl-sm-2,.px-sm-2{padding-left:.5rem!important}.p-sm-3{padding:1rem!important}.pt-sm-3,.py-sm-3{padding-top:1rem!important}.pr-sm-3,.px-sm-3{padding-right:1rem!important}.pb-sm-3,.py-sm-3{padding-bottom:1rem!important}.pl-sm-3,.px-sm-3{padding-left:1rem!important}.p-sm-4{padding:1.5rem!important}.pt-sm-4,.py-sm-4{padding-top:1.5rem!important}.pr-sm-4,.px-sm-4{padding-right:1.5rem!important}.pb-sm-4,.py-sm-4{padding-bottom:1.5rem!important}.pl-sm-4,.px-sm-4{padding-left:1.5rem!important}.p-sm-5{padding:3rem!important}.pt-sm-5,.py-sm-5{padding-top:3rem!important}.pr-sm-5,.px-sm-5{padding-right:3rem!important}.pb-sm-5,.py-sm-5{padding-bottom:3rem!important}.pl-sm-5,.px-sm-5{padding-left:3rem!important}.m-sm-n1{margin:-.25rem!important}.mt-sm-n1,.my-sm-n1{margin-top:-.25rem!important}.mr-sm-n1,.mx-sm-n1{margin-right:-.25rem!important}.mb-sm-n1,.my-sm-n1{margin-bottom:-.25rem!important}.ml-sm-n1,.mx-sm-n1{margin-left:-.25rem!important}.m-sm-n2{margin:-.5rem!important}.mt-sm-n2,.my-sm-n2{margin-top:-.5rem!important}.mr-sm-n2,.mx-sm-n2{margin-right:-.5rem!important}.mb-sm-n2,.my-sm-n2{margin-bottom:-.5rem!important}.ml-sm-n2,.mx-sm-n2{margin-left:-.5rem!important}.m-sm-n3{margin:-1rem!important}.mt-sm-n3,.my-sm-n3{margin-top:-1rem!important}.mr-sm-n3,.mx-sm-n3{margin-right:-1rem!important}.mb-sm-n3,.my-sm-n3{margin-bottom:-1rem!important}.ml-sm-n3,.mx-sm-n3{margin-left:-1rem!important}.m-sm-n4{margin:-1.5rem!important}.mt-sm-n4,.my-sm-n4{margin-top:-1.5rem!important}.mr-sm-n4,.mx-sm-n4{margin-right:-1.5rem!important}.mb-sm-n4,.my-sm-n4{margin-bottom:-1.5rem!important}.ml-sm-n4,.mx-sm-n4{margin-left:-1.5rem!important}.m-sm-n5{margi
n:-3rem!important}.mt-sm-n5,.my-sm-n5{margin-top:-3rem!important}.mr-sm-n5,.mx-sm-n5{margin-right:-3rem!important}.mb-sm-n5,.my-sm-n5{margin-bottom:-3rem!important}.ml-sm-n5,.mx-sm-n5{margin-left:-3rem!important}.m-sm-auto{margin:auto!important}.mt-sm-auto,.my-sm-auto{margin-top:auto!important}.mr-sm-auto,.mx-sm-auto{margin-right:auto!important}.mb-sm-auto,.my-sm-auto{margin-bottom:auto!important}.ml-sm-auto,.mx-sm-auto{margin-left:auto!important}}@media (min-width:768px){.m-md-0{margin:0!important}.mt-md-0,.my-md-0{margin-top:0!important}.mr-md-0,.mx-md-0{margin-right:0!important}.mb-md-0,.my-md-0{margin-bottom:0!important}.ml-md-0,.mx-md-0{margin-left:0!important}.m-md-1{margin:.25rem!important}.mt-md-1,.my-md-1{margin-top:.25rem!important}.mr-md-1,.mx-md-1{margin-right:.25rem!important}.mb-md-1,.my-md-1{margin-bottom:.25rem!important}.ml-md-1,.mx-md-1{margin-left:.25rem!important}.m-md-2{margin:.5rem!important}.mt-md-2,.my-md-2{margin-top:.5rem!important}.mr-md-2,.mx-md-2{margin-right:.5rem!important}.mb-md-2,.my-md-2{margin-bottom:.5rem!important}.ml-md-2,.mx-md-2{margin-left:.5rem!important}.m-md-3{margin:1rem!important}.mt-md-3,.my-md-3{margin-top:1rem!important}.mr-md-3,.mx-md-3{margin-right:1rem!important}.mb-md-3,.my-md-3{margin-bottom:1rem!important}.ml-md-3,.mx-md-3{margin-left:1rem!important}.m-md-4{margin:1.5rem!important}.mt-md-4,.my-md-4{margin-top:1.5rem!important}.mr-md-4,.mx-md-4{margin-right:1.5rem!important}.mb-md-4,.my-md-4{margin-bottom:1.5rem!important}.ml-md-4,.mx-md-4{margin-left:1.5rem!important}.m-md-5{margin:3rem!important}.mt-md-5,.my-md-5{margin-top:3rem!important}.mr-md-5,.mx-md-5{margin-right:3rem!important}.mb-md-5,.my-md-5{margin-bottom:3rem!important}.ml-md-5,.mx-md-5{margin-left:3rem!important}.p-md-0{padding:0!important}.pt-md-0,.py-md-0{padding-top:0!important}.pr-md-0,.px-md-0{padding-right:0!important}.pb-md-0,.py-md-0{padding-bottom:0!important}.pl-md-0,.px-md-0{padding-left:0!important}.p-md-1{padding:.25rem!important}.pt-md-1,.py-md-1{padding-top:.25rem!important}.pr-md-1,.px-md-1{padding-right:.25rem!important}.pb-md-1,.py-md-1{padding-bottom:.25rem!important}.pl-md-1,.px-md-1{padding-left:.25rem!important}.p-md-2{padding:.5rem!important}.pt-md-2,.py-md-2{padding-top:.5rem!important}.pr-md-2,.px-md-2{padding-right:.5rem!important}.pb-md-2,.py-md-2{padding-bottom:.5rem!important}.pl-md-2,.px-md-2{padding-left:.5rem!important}.p-md-3{padding:1rem!important}.pt-md-3,.py-md-3{padding-top:1rem!important}.pr-md-3,.px-md-3{padding-right:1rem!important}.pb-md-3,.py-md-3{padding-bottom:1rem!important}.pl-md-3,.px-md-3{padding-left:1rem!important}.p-md-4{padding:1.5rem!important}.pt-md-4,.py-md-4{padding-top:1.5rem!important}.pr-md-4,.px-md-4{padding-right:1.5rem!important}.pb-md-4,.py-md-4{padding-bottom:1.5rem!important}.pl-md-4,.px-md-4{padding-left:1.5rem!important}.p-md-5{padding:3rem!important}.pt-md-5,.py-md-5{padding-top:3rem!important}.pr-md-5,.px-md-5{padding-right:3rem!important}.pb-md-5,.py-md-5{padding-bottom:3rem!important}.pl-md-5,.px-md-5{padding-left:3rem!important}.m-md-n1{margin:-.25rem!important}.mt-md-n1,.my-md-n1{margin-top:-.25rem!important}.mr-md-n1,.mx-md-n1{margin-right:-.25rem!important}.mb-md-n1,.my-md-n1{margin-bottom:-.25rem!important}.ml-md-n1,.mx-md-n1{margin-left:-.25rem!important}.m-md-n2{margin:-.5rem!important}.mt-md-n2,.my-md-n2{margin-top:-.5rem!important}.mr-md-n2,.mx-md-n2{margin-right:-.5rem!important}.mb-md-n2,.my-md-n2{margin-bottom:-.5rem!important}.ml-md-n2,.mx-md-n2{margin-left:-.5rem!important}.m-md-n3{margin:-
1rem!important}.mt-md-n3,.my-md-n3{margin-top:-1rem!important}.mr-md-n3,.mx-md-n3{margin-right:-1rem!important}.mb-md-n3,.my-md-n3{margin-bottom:-1rem!important}.ml-md-n3,.mx-md-n3{margin-left:-1rem!important}.m-md-n4{margin:-1.5rem!important}.mt-md-n4,.my-md-n4{margin-top:-1.5rem!important}.mr-md-n4,.mx-md-n4{margin-right:-1.5rem!important}.mb-md-n4,.my-md-n4{margin-bottom:-1.5rem!important}.ml-md-n4,.mx-md-n4{margin-left:-1.5rem!important}.m-md-n5{margin:-3rem!important}.mt-md-n5,.my-md-n5{margin-top:-3rem!important}.mr-md-n5,.mx-md-n5{margin-right:-3rem!important}.mb-md-n5,.my-md-n5{margin-bottom:-3rem!important}.ml-md-n5,.mx-md-n5{margin-left:-3rem!important}.m-md-auto{margin:auto!important}.mt-md-auto,.my-md-auto{margin-top:auto!important}.mr-md-auto,.mx-md-auto{margin-right:auto!important}.mb-md-auto,.my-md-auto{margin-bottom:auto!important}.ml-md-auto,.mx-md-auto{margin-left:auto!important}}@media (min-width:992px){.m-lg-0{margin:0!important}.mt-lg-0,.my-lg-0{margin-top:0!important}.mr-lg-0,.mx-lg-0{margin-right:0!important}.mb-lg-0,.my-lg-0{margin-bottom:0!important}.ml-lg-0,.mx-lg-0{margin-left:0!important}.m-lg-1{margin:.25rem!important}.mt-lg-1,.my-lg-1{margin-top:.25rem!important}.mr-lg-1,.mx-lg-1{margin-right:.25rem!important}.mb-lg-1,.my-lg-1{margin-bottom:.25rem!important}.ml-lg-1,.mx-lg-1{margin-left:.25rem!important}.m-lg-2{margin:.5rem!important}.mt-lg-2,.my-lg-2{margin-top:.5rem!important}.mr-lg-2,.mx-lg-2{margin-right:.5rem!important}.mb-lg-2,.my-lg-2{margin-bottom:.5rem!important}.ml-lg-2,.mx-lg-2{margin-left:.5rem!important}.m-lg-3{margin:1rem!important}.mt-lg-3,.my-lg-3{margin-top:1rem!important}.mr-lg-3,.mx-lg-3{margin-right:1rem!important}.mb-lg-3,.my-lg-3{margin-bottom:1rem!important}.ml-lg-3,.mx-lg-3{margin-left:1rem!important}.m-lg-4{margin:1.5rem!important}.mt-lg-4,.my-lg-4{margin-top:1.5rem!important}.mr-lg-4,.mx-lg-4{margin-right:1.5rem!important}.mb-lg-4,.my-lg-4{margin-bottom:1.5rem!important}.ml-lg-4,.mx-lg-4{margin-left:1.5rem!important}.m-lg-5{margin:3rem!important}.mt-lg-5,.my-lg-5{margin-top:3rem!important}.mr-lg-5,.mx-lg-5{margin-right:3rem!important}.mb-lg-5,.my-lg-5{margin-bottom:3rem!important}.ml-lg-5,.mx-lg-5{margin-left:3rem!important}.p-lg-0{padding:0!important}.pt-lg-0,.py-lg-0{padding-top:0!important}.pr-lg-0,.px-lg-0{padding-right:0!important}.pb-lg-0,.py-lg-0{padding-bottom:0!important}.pl-lg-0,.px-lg-0{padding-left:0!important}.p-lg-1{padding:.25rem!important}.pt-lg-1,.py-lg-1{padding-top:.25rem!important}.pr-lg-1,.px-lg-1{padding-right:.25rem!important}.pb-lg-1,.py-lg-1{padding-bottom:.25rem!important}.pl-lg-1,.px-lg-1{padding-left:.25rem!important}.p-lg-2{padding:.5rem!important}.pt-lg-2,.py-lg-2{padding-top:.5rem!important}.pr-lg-2,.px-lg-2{padding-right:.5rem!important}.pb-lg-2,.py-lg-2{padding-bottom:.5rem!important}.pl-lg-2,.px-lg-2{padding-left:.5rem!important}.p-lg-3{padding:1rem!important}.pt-lg-3,.py-lg-3{padding-top:1rem!important}.pr-lg-3,.px-lg-3{padding-right:1rem!important}.pb-lg-3,.py-lg-3{padding-bottom:1rem!important}.pl-lg-3,.px-lg-3{padding-left:1rem!important}.p-lg-4{padding:1.5rem!important}.pt-lg-4,.py-lg-4{padding-top:1.5rem!important}.pr-lg-4,.px-lg-4{padding-right:1.5rem!important}.pb-lg-4,.py-lg-4{padding-bottom:1.5rem!important}.pl-lg-4,.px-lg-4{padding-left:1.5rem!important}.p-lg-5{padding:3rem!important}.pt-lg-5,.py-lg-5{padding-top:3rem!important}.pr-lg-5,.px-lg-5{padding-right:3rem!important}.pb-lg-5,.py-lg-5{padding-bottom:3rem!important}.pl-lg-5,.px-lg-5{padding-left:3rem!important}.m-lg-n1{margin:-.25rem!i
mportant}.mt-lg-n1,.my-lg-n1{margin-top:-.25rem!important}.mr-lg-n1,.mx-lg-n1{margin-right:-.25rem!important}.mb-lg-n1,.my-lg-n1{margin-bottom:-.25rem!important}.ml-lg-n1,.mx-lg-n1{margin-left:-.25rem!important}.m-lg-n2{margin:-.5rem!important}.mt-lg-n2,.my-lg-n2{margin-top:-.5rem!important}.mr-lg-n2,.mx-lg-n2{margin-right:-.5rem!important}.mb-lg-n2,.my-lg-n2{margin-bottom:-.5rem!important}.ml-lg-n2,.mx-lg-n2{margin-left:-.5rem!important}.m-lg-n3{margin:-1rem!important}.mt-lg-n3,.my-lg-n3{margin-top:-1rem!important}.mr-lg-n3,.mx-lg-n3{margin-right:-1rem!important}.mb-lg-n3,.my-lg-n3{margin-bottom:-1rem!important}.ml-lg-n3,.mx-lg-n3{margin-left:-1rem!important}.m-lg-n4{margin:-1.5rem!important}.mt-lg-n4,.my-lg-n4{margin-top:-1.5rem!important}.mr-lg-n4,.mx-lg-n4{margin-right:-1.5rem!important}.mb-lg-n4,.my-lg-n4{margin-bottom:-1.5rem!important}.ml-lg-n4,.mx-lg-n4{margin-left:-1.5rem!important}.m-lg-n5{margin:-3rem!important}.mt-lg-n5,.my-lg-n5{margin-top:-3rem!important}.mr-lg-n5,.mx-lg-n5{margin-right:-3rem!important}.mb-lg-n5,.my-lg-n5{margin-bottom:-3rem!important}.ml-lg-n5,.mx-lg-n5{margin-left:-3rem!important}.m-lg-auto{margin:auto!important}.mt-lg-auto,.my-lg-auto{margin-top:auto!important}.mr-lg-auto,.mx-lg-auto{margin-right:auto!important}.mb-lg-auto,.my-lg-auto{margin-bottom:auto!important}.ml-lg-auto,.mx-lg-auto{margin-left:auto!important}}@media (min-width:1200px){.m-xl-0{margin:0!important}.mt-xl-0,.my-xl-0{margin-top:0!important}.mr-xl-0,.mx-xl-0{margin-right:0!important}.mb-xl-0,.my-xl-0{margin-bottom:0!important}.ml-xl-0,.mx-xl-0{margin-left:0!important}.m-xl-1{margin:.25rem!important}.mt-xl-1,.my-xl-1{margin-top:.25rem!important}.mr-xl-1,.mx-xl-1{margin-right:.25rem!important}.mb-xl-1,.my-xl-1{margin-bottom:.25rem!important}.ml-xl-1,.mx-xl-1{margin-left:.25rem!important}.m-xl-2{margin:.5rem!important}.mt-xl-2,.my-xl-2{margin-top:.5rem!important}.mr-xl-2,.mx-xl-2{margin-right:.5rem!important}.mb-xl-2,.my-xl-2{margin-bottom:.5rem!important}.ml-xl-2,.mx-xl-2{margin-left:.5rem!important}.m-xl-3{margin:1rem!important}.mt-xl-3,.my-xl-3{margin-top:1rem!important}.mr-xl-3,.mx-xl-3{margin-right:1rem!important}.mb-xl-3,.my-xl-3{margin-bottom:1rem!important}.ml-xl-3,.mx-xl-3{margin-left:1rem!important}.m-xl-4{margin:1.5rem!important}.mt-xl-4,.my-xl-4{margin-top:1.5rem!important}.mr-xl-4,.mx-xl-4{margin-right:1.5rem!important}.mb-xl-4,.my-xl-4{margin-bottom:1.5rem!important}.ml-xl-4,.mx-xl-4{margin-left:1.5rem!important}.m-xl-5{margin:3rem!important}.mt-xl-5,.my-xl-5{margin-top:3rem!important}.mr-xl-5,.mx-xl-5{margin-right:3rem!important}.mb-xl-5,.my-xl-5{margin-bottom:3rem!important}.ml-xl-5,.mx-xl-5{margin-left:3rem!important}.p-xl-0{padding:0!important}.pt-xl-0,.py-xl-0{padding-top:0!important}.pr-xl-0,.px-xl-0{padding-right:0!important}.pb-xl-0,.py-xl-0{padding-bottom:0!important}.pl-xl-0,.px-xl-0{padding-left:0!important}.p-xl-1{padding:.25rem!important}.pt-xl-1,.py-xl-1{padding-top:.25rem!important}.pr-xl-1,.px-xl-1{padding-right:.25rem!important}.pb-xl-1,.py-xl-1{padding-bottom:.25rem!important}.pl-xl-1,.px-xl-1{padding-left:.25rem!important}.p-xl-2{padding:.5rem!important}.pt-xl-2,.py-xl-2{padding-top:.5rem!important}.pr-xl-2,.px-xl-2{padding-right:.5rem!important}.pb-xl-2,.py-xl-2{padding-bottom:.5rem!important}.pl-xl-2,.px-xl-2{padding-left:.5rem!important}.p-xl-3{padding:1rem!important}.pt-xl-3,.py-xl-3{padding-top:1rem!important}.pr-xl-3,.px-xl-3{padding-right:1rem!important}.pb-xl-3,.py-xl-3{padding-bottom:1rem!important}.pl-xl-3,.px-xl-3{padding-left:1rem!important}.p-xl-4{p
adding:1.5rem!important}.pt-xl-4,.py-xl-4{padding-top:1.5rem!important}.pr-xl-4,.px-xl-4{padding-right:1.5rem!important}.pb-xl-4,.py-xl-4{padding-bottom:1.5rem!important}.pl-xl-4,.px-xl-4{padding-left:1.5rem!important}.p-xl-5{padding:3rem!important}.pt-xl-5,.py-xl-5{padding-top:3rem!important}.pr-xl-5,.px-xl-5{padding-right:3rem!important}.pb-xl-5,.py-xl-5{padding-bottom:3rem!important}.pl-xl-5,.px-xl-5{padding-left:3rem!important}.m-xl-n1{margin:-.25rem!important}.mt-xl-n1,.my-xl-n1{margin-top:-.25rem!important}.mr-xl-n1,.mx-xl-n1{margin-right:-.25rem!important}.mb-xl-n1,.my-xl-n1{margin-bottom:-.25rem!important}.ml-xl-n1,.mx-xl-n1{margin-left:-.25rem!important}.m-xl-n2{margin:-.5rem!important}.mt-xl-n2,.my-xl-n2{margin-top:-.5rem!important}.mr-xl-n2,.mx-xl-n2{margin-right:-.5rem!important}.mb-xl-n2,.my-xl-n2{margin-bottom:-.5rem!important}.ml-xl-n2,.mx-xl-n2{margin-left:-.5rem!important}.m-xl-n3{margin:-1rem!important}.mt-xl-n3,.my-xl-n3{margin-top:-1rem!important}.mr-xl-n3,.mx-xl-n3{margin-right:-1rem!important}.mb-xl-n3,.my-xl-n3{margin-bottom:-1rem!important}.ml-xl-n3,.mx-xl-n3{margin-left:-1rem!important}.m-xl-n4{margin:-1.5rem!important}.mt-xl-n4,.my-xl-n4{margin-top:-1.5rem!important}.mr-xl-n4,.mx-xl-n4{margin-right:-1.5rem!important}.mb-xl-n4,.my-xl-n4{margin-bottom:-1.5rem!important}.ml-xl-n4,.mx-xl-n4{margin-left:-1.5rem!important}.m-xl-n5{margin:-3rem!important}.mt-xl-n5,.my-xl-n5{margin-top:-3rem!important}.mr-xl-n5,.mx-xl-n5{margin-right:-3rem!important}.mb-xl-n5,.my-xl-n5{margin-bottom:-3rem!important}.ml-xl-n5,.mx-xl-n5{margin-left:-3rem!important}.m-xl-auto{margin:auto!important}.mt-xl-auto,.my-xl-auto{margin-top:auto!important}.mr-xl-auto,.mx-xl-auto{margin-right:auto!important}.mb-xl-auto,.my-xl-auto{margin-bottom:auto!important}.ml-xl-auto,.mx-xl-auto{margin-left:auto!important}}.stretched-link::after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;pointer-events:auto;content:"";background-color:rgba(0,0,0,0)}.text-monospace{font-family:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace!important}.text-justify{text-align:justify!important}.text-wrap{white-space:normal!important}.text-nowrap{white-space:nowrap!important}.text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.text-left{text-align:left!important}.text-right{text-align:right!important}.text-center{text-align:center!important}@media (min-width:576px){.text-sm-left{text-align:left!important}.text-sm-right{text-align:right!important}.text-sm-center{text-align:center!important}}@media (min-width:768px){.text-md-left{text-align:left!important}.text-md-right{text-align:right!important}.text-md-center{text-align:center!important}}@media (min-width:992px){.text-lg-left{text-align:left!important}.text-lg-right{text-align:right!important}.text-lg-center{text-align:center!important}}@media 
(min-width:1200px){.text-xl-left{text-align:left!important}.text-xl-right{text-align:right!important}.text-xl-center{text-align:center!important}}.text-lowercase{text-transform:lowercase!important}.text-uppercase{text-transform:uppercase!important}.text-capitalize{text-transform:capitalize!important}.font-weight-light{font-weight:300!important}.font-weight-lighter{font-weight:lighter!important}.font-weight-normal{font-weight:400!important}.font-weight-bold{font-weight:700!important}.font-weight-bolder{font-weight:bolder!important}.font-italic{font-style:italic!important}.text-white{color:#fff!important}.text-primary{color:#007bff!important}a.text-primary:focus,a.text-primary:hover{color:#0056b3!important}.text-secondary{color:#6c757d!important}a.text-secondary:focus,a.text-secondary:hover{color:#494f54!important}.text-success{color:#28a745!important}a.text-success:focus,a.text-success:hover{color:#19692c!important}.text-info{color:#17a2b8!important}a.text-info:focus,a.text-info:hover{color:#0f6674!important}.text-warning{color:#ffc107!important}a.text-warning:focus,a.text-warning:hover{color:#ba8b00!important}.text-danger{color:#dc3545!important}a.text-danger:focus,a.text-danger:hover{color:#a71d2a!important}.text-light{color:#f8f9fa!important}a.text-light:focus,a.text-light:hover{color:#cbd3da!important}.text-dark{color:#343a40!important}a.text-dark:focus,a.text-dark:hover{color:#121416!important}.text-body{color:#212529!important}.text-muted{color:#6c757d!important}.text-black-50{color:rgba(0,0,0,.5)!important}.text-white-50{color:rgba(255,255,255,.5)!important}.text-hide{font:0/0 a;color:transparent;text-shadow:none;background-color:transparent;border:0}.text-decoration-none{text-decoration:none!important}.text-break{word-wrap:break-word!important}.text-reset{color:inherit!important}.visible{visibility:visible!important}.invisible{visibility:hidden!important}@media print{*,::after,::before{text-shadow:none!important;box-shadow:none!important}a:not(.btn){text-decoration:underline}abbr[title]::after{content:" (" attr(title) ")"}pre{white-space:pre-wrap!important}blockquote,pre{border:1px solid #adb5bd;page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}h2,h3,p{orphans:3;widows:3}h2,h3{page-break-after:avoid}@page{size:a3}body{min-width:992px!important}.container{min-width:992px!important}.navbar{display:none}.badge{border:1px solid #000}.table{border-collapse:collapse!important}.table td,.table th{background-color:#fff!important}.table-bordered td,.table-bordered th{border:1px solid #dee2e6!important}.table-dark{color:inherit}.table-dark tbody+tbody,.table-dark td,.table-dark th,.table-dark thead th{border-color:#dee2e6}.table .thead-dark th{color:inherit;border-color:#dee2e6}} +/*# sourceMappingURL=bootstrap.min.css.map */ \ No newline at end of file diff --git a/stylesheets/extra.css b/stylesheets/extra.css new file mode 100644 index 00000000..329510a5 --- /dev/null +++ b/stylesheets/extra.css @@ -0,0 +1,11 @@ +/* + Time-stamp: + Extra CSS configuration of the ULHPC Technical Documentation website + */ + +.md-typeset ul li li{ + list-style: circle; +} +.md-typeset ul li li li{ + list-style: square; +} diff --git a/support/index.html b/support/index.html new file mode 100644 index 00000000..d8dc1a4c --- /dev/null +++ b/support/index.html @@ -0,0 +1,2992 @@ + + + + + + + + + + + + + + + + + + + + + + + + Overview - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Support

    +

ULHPC strives to support your [super]computing needs in a user-friendly way. +Note however that we are not here to do your PhD for you ;)

    +

    Service Now HPC Support Portal

    +

    FAQ/Troubleshooting

    + +

    Read the Friendly Manual

    +

We have always maintained extensive documentation and tutorials available online, which aim to be as up-to-date and comprehensive as possible.

    +

So please, read the documentation first if you have a question or problem -- we probably provide detailed instructions here.

    +

    Help Desk

    +

The online Help Desk service is the preferred +method for contacting the ULHPC team.

    +
    +

    Tips

    +

Before reporting a problem or an issue, kindly remember that:

    +
      +
1. Your issue is probably documented here in the ULHPC Technical Documentation
    2. +
    3. An event may be on-going: check the ULHPC Live status page
        +
• Planned maintenance is announced at least 2 weeks in advance -- see Maintenance and Downtime Policy
      • +
      • The proper SSH banner is displayed during planned downtime
      • +
      +
    4. +
5. Check the state of your nodes and jobs +
    6. +
    +
    +

    Service Now HPC Support Portal

    +

You can make code snippets, shell outputs, etc. in your ticket much more readable by inserting a line with: +

    [code]<pre>
    +
    +before the snippet, and another line with: +
    </pre>[/code]
    +
    +after it. For a full list of formatting options, see this ServiceNow article.
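For example, a purely illustrative sketch of a ticket body sharing the output of module list (the actual output is omitted here and should be pasted literally):

[code]<pre>
$ module list
... paste the literal output of the command here ...
</pre>[/code]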

    +
    +

    Be as precise and complete as possible

    +

The ULHPC team handles thousands of support requests per year. +In order to ensure an efficient and timely resolution of issues, make sure that:

    +
      +
    1. you select the appropriate category (left menu)
    2. +
3. you include as much of the following as possible when making a request (a minimal example is sketched after this list):
        +
• Who? - Name and user id (login), and, if applicable, the project name
      • +
      • When? - When did the problem occur?
      • +
• Where? - Which cluster? Which node? Which job?
          +
        • Really include Job IDs
        • +
        • Location of relevant files
            +
          • input/output, job launcher scripts, source code, executables etc.
          • +
          +
        • +
        +
      • +
• What? - What happened? What exactly were you doing or trying to do?
          +
        • include Error messages - kindly report system or software messages literally and exactly.
        • +
        • output of module list
        • +
        • any steps you have tried
        • +
        • Steps to reproduce
        • +
        +
      • +
      • Any part of this technical documentation you checked before opening the ticket
      • +
      +
    4. +
    +
    +
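As a purely illustrative sketch (all names, IDs, dates and paths below are placeholders), the core of such a request could look like:

Who:   jdoe (login), project myproject
When:  2024-11-13, around 14:00
Where: Aion cluster, job 123456 on node aion-0001
What:  job failed at startup; error message copied literally, launcher script and output of module list attached; steps already tried: resubmission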

Access to the online help system requires logging in with your Uni.lu username, password, and, if enabled, your one-time password. +If you are an existing user unable to log in, you can send us an email.

    +
    +

    Availability and Response Time

    +

HPC support is provided on a volunteer basis by UL HPC staff and associated UL experts during normal business hours. We offer no guarantee on response time except with paid support contracts.

    +
    +

    Email support

    +

You can contact us by email at the ULHPC Team address (ONLY if you cannot log in to or access the HPC Support helpdesk portal): hpc-team@uni.lu

    +

You may also ask for the help of other ULHPC users through the (moderated) HPC User community mailing list: hpc-users@uni.lu

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/systems/aion/BullSequanaXH2000_Features_Atos_supercomputers.pdf b/systems/aion/BullSequanaXH2000_Features_Atos_supercomputers.pdf new file mode 100644 index 00000000..f24fef58 Binary files /dev/null and b/systems/aion/BullSequanaXH2000_Features_Atos_supercomputers.pdf differ diff --git a/systems/aion/compute/index.html b/systems/aion/compute/index.html new file mode 100644 index 00000000..619524ab --- /dev/null +++ b/systems/aion/compute/index.html @@ -0,0 +1,3006 @@ + + + + + + + + + + + + + + + + + + + + + + + + Compute Nodes - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Aion Compute Nodes

    +

    Aion is a cluster of x86-64 AMD-based compute nodes. +More precisely, Aion consists of 354 "regular" computational nodes named aion-[0001-0354] as follows:

    + + + + + + + + + + + + + + + + + + + + + +
| Hostname (#Nodes) | #cores | type | Processors | RAM | R_\text{peak} [TFlops] |
| aion-[0001-0354] (354) | 45312 | Regular Epyc | 2 AMD Epyc ROME 7H12 @ 2.6 GHz [64c/280W] | 256 GB | 5.32 TF |
    +

Aion compute nodes MUST be seen as 8 (virtual) processors of 16 cores each, even if physically each node hosts 2 sockets of AMD Epyc ROME 7H12 processors with 64 cores each (total: 128 cores per node).

    +
      +
• As will be highlighted in the slurm resource allocation documentation, this means that targeting full node utilization assumes that you pass the following attributes to your jobs (see also the example launcher sketched after this list): {sbatch|srun|si|salloc} [-N <N>] --ntasks-per-node <8n> --ntasks-per-socket <n> -c <thread> where
        +
      • you want to ensure that <n>\times<thread>= 16 on aion
      • +
      • this will bring a total of <N>\times 8\times<n> tasks, each on <thread> threads
      • +
      • Ex: -N 2 --ntasks-per-node 32 --ntasks-per-socket 4 -c 4 (Total: 64 tasks)
      • +
      +
    • +
    +
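For instance, a minimal launcher sketch (the partition name and application are placeholders, not an official ULHPC template) that fully exploits 2 Aion nodes with 16 tasks of 16 threads each (i.e. <n>=1, <thread>=16):

#!/bin/bash -l
#SBATCH -N 2                     # 2 Aion nodes, i.e. 2 x 128 cores
#SBATCH --ntasks-per-node 8      # 8n = 8 tasks per node (n = 1)
#SBATCH --ntasks-per-socket 1    # n = 1 task per (virtual) 16-core socket
#SBATCH -c 16                    # <thread> = 16, so that n x <thread> = 16
#SBATCH --time=01:00:00
#SBATCH -p batch                 # partition name: assumption for illustration
srun ./my_hybrid_app             # hypothetical MPI+OpenMP application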

    Processor Performance

    +

Each Aion node relies on the AMD Epyc Rome processor architecture, which is binary-compatible with the x86-64 architecture. +Each processor has the following performance:

    + + + + + + + + + + + + + + + + + + + + + +
| Processor Model | #cores | TDP(*) | CPU Freq. | R_\text{peak} [TFlops] | R_\text{max} [TFlops] |
| AMD Epyc ROME 7H12 | 64 | 280W | 2.6 GHz | 2.66 TF | 2.13 TF |
    +

(*) The Thermal Design Power (TDP) represents the average power, in watts, that the processor dissipates when operating at base frequency with all cores active under a vendor-defined, high-complexity workload.

    +
    Theoretical R_\text{peak} vs. Maximum R_\text{max} Performance for AMD Epyc

The AMD Epyc processors can perform 16 Double Precision (DP) operations per cycle. +Thus the reported R_\text{peak} performance is computed as follows: +R_\text{peak} = ops/cycle \times Freq. \times \#Cores

    +

Regarding the estimation of the Maximum Performance R_\text{max}, an efficiency factor of 80% is applied. +It is estimated from the expected performance of the HPL benchmark workload.
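For illustration, plugging the values of the AMD Epyc ROME 7H12 into these formulas gives:

R_\text{peak} = 16 \times 2.6\,\text{GHz} \times 64 = 2662.4\,\text{GFlops} \simeq 2.66\,\text{TF} per processor, i.e. 2 \times 2.66 \simeq 5.32\,\text{TF} per dual-socket node
R_\text{max} \simeq 0.8 \times R_\text{peak} \simeq 2.13\,\text{TF} per processor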

    +
    +

    Regular Dual-CPU Nodes

    +

    These nodes are packaged within BullSequana X2410 AMD compute blades.

    +

    +

Each blade contains 3 dual-socket AMD Rome nodes side-by-side, connected to the BullSequana XH2000 local interconnect network through HDR100 ports provided by a mezzanine board. +The BullSequana AMD blade is built upon a cold plate which cools all components by direct contact, except the DIMMs, for which custom heat spreaders evacuate the heat to the cold plate -- see the exploded view. +The characteristics of each blade and of the associated compute nodes are summarized in the table below.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| BullSequana X2410 AMD blade | |
| Form Factor | 1U blade comprising 3 compute nodes side-by-side |
| #Nodes per blade | 3 |
| Processors per node | 2x AMD Epyc ROME 7H12 @ 2.6 GHz [64c/280W] |
| Architecture | AMD SP3 Platform: 3x1 motherboard |
| Memory per node | 256 GB DDR4 3200MT/s (8x16 GB DIMMs per socket, 16 DIMMs per node) |
| Network (per node) | InfiniBand HDR100 single port mezzanine |
| Storage (per node) | 1x 480 GB SSD |
| Power supply | PSU shelves on top of XH2000 cabinet |
| Cooling | Cooling by direct contact |
| Physical specs. (HxWxD) | 44.45 x 600 x 540 mm |
    +

The four compute racks of Aion (one XH2000 cell) hold a total of 118 blades, i.e., 354 AMD Epyc compute nodes, totalling 45312 computing cores -- see Aion configuration.

    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/systems/aion/images/aion-timeline-part-I.png b/systems/aion/images/aion-timeline-part-I.png new file mode 100644 index 00000000..b3812eeb Binary files /dev/null and b/systems/aion/images/aion-timeline-part-I.png differ diff --git a/systems/aion/images/aion-timeline-part-II.png b/systems/aion/images/aion-timeline-part-II.png new file mode 100644 index 00000000..81e41aa0 Binary files /dev/null and b/systems/aion/images/aion-timeline-part-II.png differ diff --git a/systems/aion/images/aion_DLC_blade_splitted_view.png b/systems/aion/images/aion_DLC_blade_splitted_view.png new file mode 100644 index 00000000..2a396ecc Binary files /dev/null and b/systems/aion/images/aion_DLC_blade_splitted_view.png differ diff --git a/systems/aion/images/aion_WH40_DLC_quantum-switch__splitted_view.png b/systems/aion/images/aion_WH40_DLC_quantum-switch__splitted_view.png new file mode 100644 index 00000000..b0faa8b3 Binary files /dev/null and b/systems/aion/images/aion_WH40_DLC_quantum-switch__splitted_view.png differ diff --git a/systems/aion/images/aion_XH2000_cell_IB_topology.png b/systems/aion/images/aion_XH2000_cell_IB_topology.png new file mode 100644 index 00000000..2527f40b Binary files /dev/null and b/systems/aion/images/aion_XH2000_cell_IB_topology.png differ diff --git a/systems/aion/images/aion_compute_racks.jpg b/systems/aion/images/aion_compute_racks.jpg new file mode 100644 index 00000000..cc01ebfb Binary files /dev/null and b/systems/aion/images/aion_compute_racks.jpg differ diff --git a/systems/aion/images/aion_compute_racks.png b/systems/aion/images/aion_compute_racks.png new file mode 100644 index 00000000..3b5a2122 Binary files /dev/null and b/systems/aion/images/aion_compute_racks.png differ diff --git a/systems/aion/images/aion_compute_racks_original.jpg b/systems/aion/images/aion_compute_racks_original.jpg new file mode 100644 index 00000000..ee7fb053 Binary files /dev/null and b/systems/aion/images/aion_compute_racks_original.jpg differ diff --git a/systems/aion/images/aion_rear_rack_opened.jpg b/systems/aion/images/aion_rear_rack_opened.jpg new file mode 100644 index 00000000..9e32af55 Binary files /dev/null and b/systems/aion/images/aion_rear_rack_opened.jpg differ diff --git a/systems/aion/images/aion_side_panel.jpg b/systems/aion/images/aion_side_panel.jpg new file mode 100644 index 00000000..bddd880e Binary files /dev/null and b/systems/aion/images/aion_side_panel.jpg differ diff --git a/systems/aion/images/aion_x2410_AMD_blade.png b/systems/aion/images/aion_x2410_AMD_blade.png new file mode 100644 index 00000000..4cc5455c Binary files /dev/null and b/systems/aion/images/aion_x2410_AMD_blade.png differ diff --git a/systems/aion/index.html b/systems/aion/index.html new file mode 100644 index 00000000..3c05743a --- /dev/null +++ b/systems/aion/index.html @@ -0,0 +1,3107 @@ + + + + + + + + + + + + + + + + + + + + + + + + Aion System - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Aion Overview

    +

    +

Aion is an Atos/Bull/AMD supercomputer which consists of 354 compute nodes, totaling 45312 compute cores and 90624 GB RAM, +with a peak performance of about 1.88 PetaFLOP/s.

    +

    All nodes are interconnected through a Fast InfiniBand (IB) HDR100 network1, configured over a Fat-Tree Topology (blocking factor 1:2). +Aion nodes are equipped with AMD Epyc ROME 7H12 processors.

    +

    Two global high-performance clustered file systems are available on all ULHPC computational systems: one based on GPFS/SpectrumScale, one on Lustre.

    +

    Aion Compute Aion Interconnect Global Storage

    +

The cluster runs a Red Hat Linux operating system. +The ULHPC Team supplies a large variety of HPC utilities, scientific applications and programming libraries to its user community on all clusters. +The user software environment is generated using Easybuild (EB) and is made available as environment modules from the compute nodes only.

    +
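A minimal sketch of a typical workflow (the module name is a placeholder for illustration):

si                                  # request an interactive job on a compute node (or use salloc / srun)
module avail                        # browse the EasyBuild-generated software stack
module load <category>/<software>   # load a module (placeholder name)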

Slurm is the Resource and Job Management System (RJMS) which provides computing resource allocation and job execution. +For more information, see the ULHPC slurm docs.

    +

    Cluster Organization

    +

    Data Center Configuration

    +

The Aion cluster is based on a cell made of 4 adjacent BullSequana XH2000 racks installed in the CDC (Centre de Calcul) data center of the University, within one of the DLC-enabled server rooms (CDC S-02-004), adjacent to the room hosting the Iris cluster and the global storage.

    +

Each rack has the following dimensions: HxWxD (mm) = 2030x750x1270 (depth is 1350mm with aesthetic doors). +The full 4-rack solution (total dimensions: HxWxD (mm) = 2030x3000x1270) has the following characteristics:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
|  | Rack 1 | Rack 2 | Rack 3 | Rack 4 | TOTAL |
| Weight [kg] | 1872.4 | 1830.2 | 1830.2 | 1824.2 | 7357 kg |
| #X2410 Rome Blade | 30 | 29 | 29 | 30 | 118 |
| #Compute Nodes | 90 | 87 | 87 | 90 | 354 |
| #Compute Cores | 11520 | 11136 | 11136 | 11520 | 45312 |
| R_\text{peak} [TFlops] | 479.23 TF | 463.25 TF | 463.25 TF | 479.23 TF | 1884.96 TF |
    +
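These figures are mutually consistent: 118 blades \times 3 nodes/blade = 354 compute nodes, 354 \times 128 cores/node = 45312 cores, and R_\text{peak} = 45312 \times 16\,\text{DP ops/cycle} \times 2.6\,\text{GHz} \simeq 1885\,\text{TF}, in line with the 1884.96 TF total above and the \simeq 1.88 PetaFLOP/s quoted in the overview.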

    For more details: BullSequana XH2000 SpecSheet (PDF)

    +

    Cooling

    +

The BullSequana XH2000 features an innovative, fanless and ultra-energy-efficient cooling solution (targeting a PUE very close to 1) based on an enhanced version of the Bull Direct Liquid Cooling (DLC) technology. +A separate hot-water circuit minimizes the total energy consumption of the system. For more information: see [Direct] Liquid Cooling.

    +

    The illustration on the right shows an exploded view of a compute blade with the cold plate and heat spreaders. + +The DLC1 components in the rack are:

    +
      +
    • Compute nodes (CPU, Memory, Drives, GPU)
    • +
    • High Speed Interconnect: HDR
    • +
    • Management network: Ethernet management switches
    • +
    • Power Supply Unit: DLC shelves
    • +
    +

    The cooling area in the rack is composed of:

    +
      +
    • 3 Hydraulic chassis (HYCs) for 2+1 redundancy at the bottom of the cabinet, 10.5U height.
    • +
• Each HYC dissipates a maximum of 240 W into the air.
    • +
    • A primary manifold system connects the University hot-water loop to the HYCs primary water inlets
    • +
    • A secondary manifold system connects HYCs outlets to each blade in the compute cabinet
    • +
    +

    Login/Access servers

    +
      +
    • Aion has 2 access servers (256 GB of memory each, general access) access[1-2]
    • +
• Each login node has two sockets, each populated with an AMD EPYC 7452 processor (2.2 GHz, 32 cores)
    • +
    +
    +

    Access servers are not meant for compute!

    +
      +
    • The module command is not available on the access servers, only on the compute nodes
    • +
• You MUST NOT run any computing process on the access servers.
    • +
    +
    +

    Rack Cabinets

    +

The Aion cluster (management, compute and interconnect equipment) is installed across two adjacent server rooms on the premises of the Centre de Calcul (CDC): CDC-S02-005 and CDC-S02-004.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Server Room | Rack ID | Purpose | Type | Description |
| CDC-S02-005 | D02 | Network | | Interconnect equipment |
| CDC-S02-005 | A04 | Management | | Management servers, Interconnect |
| CDC-S02-004 | A01 | Compute | regular | aion-[0001-0084,0319-0324], interconnect |
| CDC-S02-004 | A02 | Compute | regular | aion-[0085-0162,0325-0333], interconnect |
| CDC-S02-004 | A03 | Compute | regular | aion-[0163-0240,0334-0342], interconnect |
| CDC-S02-004 | A04 | Compute | regular | aion-[0241-0318,0343-0354], interconnect |
    +

    In addition, the global storage equipment (GPFS/SpectrumScale and Lustre, common to both Iris and Aion clusters) is installed in another row of cabinets of the same server room.

    +
    +
    +
      +
    1. +

      All DLC components are built on a cold plate which cools all components by direct contact, except DIMMS for which custom heat spreaders evacuate the heat to the cold plate. 

      +
    2. +
    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/systems/aion/interconnect/index.html b/systems/aion/interconnect/index.html new file mode 100644 index 00000000..6602214b --- /dev/null +++ b/systems/aion/interconnect/index.html @@ -0,0 +1,2932 @@ + + + + + + + + + + + + + + + + + + + + + + + + Fast Local Interconnect - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Fast Local Interconnect Network

    +

The fast local interconnect network implemented within Aion relies on the Mellanox InfiniBand (IB) HDR1 technology. +For more details, see Introduction to +High-Speed InfiniBand Interconnect.

    +

    IB Network Topology

    +

    One of the most significant differentiators between HPC systems and lesser performing systems is, apart from the interconnect technology deployed, the supporting topology. There are several topologies commonly used in large-scale HPC deployments (Fat-Tree, 3D-Torus, Dragonfly+ etc.).

    +

+Aion (like Iris) is part of an island which employs a "Fat-Tree" topology2, which remains the most widely used topology in HPC clusters due to its versatility, high bisection bandwidth and well-understood routing.

    +

The Aion IB HDR switched fabric relies on Mellanox WH40 DLC Quantum switches located at the back of each BullSequana XH2000 rack. +Each DLC-cooled HDR switch (see the exploded view on the right) has the following characteristics:

    +

    +
      +
    • 80 X HDR100 100Gb/s ports in a 1U switch (40 X HDR 200Gb/s ports if used in full HDR mode)
    • +
    • 16Tb/s aggregate switch throughput
    • +
    • Up to 15.8 billion messages-per-second
    • +
    • 90ns switch latency
    • +
    +

The Aion 2-level 1:2 Fat-Tree is composed of the following elements (a quick check of the blocking factor is given right after this list):

    +
      +
    • 12x Infiniband HDR1 switches (40 HDR ports / 80 HDR100 ports)
        +
      • 8x Leaf IB (LIB) switches (L1), each with 12 HDR L1-L2 interlinks (2 on each rack)
      • +
• 4x Spine IB (SIB) switches (L2), with up to 16 HDR100 uplinks each (12 used, total: 48 links) used for the interconnection with the Iris cluster
      • +
      +
    • +
• Up to 48 compute node HDR100 connections per L1 switch, using 24 HDR ports with Y-cables
        +
      • 4 available HDR connections for Service, Access or Gateway node per L1 switch
      • +
      +
    • +
    +
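As a quick sanity check of the announced 1:2 blocking factor (assuming all 48 node ports of a leaf switch are populated): each L1 switch offers 48 \times 100\,\text{Gb/s} = 4.8\,\text{Tb/s} of downlink bandwidth towards the compute nodes, but only 12 \times 200\,\text{Gb/s} = 2.4\,\text{Tb/s} of uplink bandwidth towards the spine level, i.e. half of the downlink capacity.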

The following illustration shows the HDR topology within the BullSequana XH2000 cell schematically:

    +

    +

    For more details: + ULHPC Fast IB Interconnect

    +

    Routing Algorithm

    +

The IB Subnet Managers in Aion are configured with the Up/Down InfiniBand Routing Algorithm. +Up-Down is a super-set of Fat-Tree with a tracker mode that allows each node to have a dedicated route. This is well adapted to IO traffic patterns, and is used within Aion for Gateway nodes, Lustre OSS, and GPFS/SpectrumScale NSD servers.

    +

    For more details: + Understanding Up/Down InfiniBand Routing Algorithm

    +
    +
    +
      +
    1. +

High Data Rate (HDR) – 200 Gb/s throughput with a very low latency, typically below 0.6\mus. The HDR100 technology allows one 200Gbps HDR port (aggregating 4x 50Gbps) to be divided into 2 HDR100 ports with 100Gbps (2x 50Gbps) bandwidth each, using an [optical] "splitter" cable

      +
    2. +
    3. +

      with blocking factor 1:2. 

      +
    4. +
    +
    + + + + +
    +
    + + + Last update: November 13, 2024 + + +
    + + + + + + + + +
    +
    +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/systems/aion/timeline/index.html b/systems/aion/timeline/index.html new file mode 100644 index 00000000..9ee0dc65 --- /dev/null +++ b/systems/aion/timeline/index.html @@ -0,0 +1,3347 @@ + + + + + + + + + + + + + + + + + + + + + + + + Timeline - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + +
    + +
    + +
    + + + + + + +
    +
    + + +
    +
    +
    + +
    +
    +
    + + +
    +
    +
    + + +
    +
    +
    + + +
    +
    + + + + + + + + + + +

    Aion Timeline

    +

    This page records a brief timeline of significant events and user environment changes on Aion.

    +

    +

    +

    Details are provided below.

    +

    2019

    +

    September 2019

    +
      +
• Official public release of the Aion cluster tenders on the TED European tender portal and the PMP Portal (Portail des Marchés Publics) on Sept 11, 2019
        +
      • RFP 190027: Tender for the acquisition of a Complementary High Performance Computing (HPC) cluster 'Aion' for the University of Luxembourg. +
      • +
      +
    • +
    +
    +

    The RFP is composed of the following Lots:

    +
      +
    • Lot 1: DLC Computing cluster aion.
        +
• includes cabinets, cooling units (together with any and all required piping and cooling fluids), compute node enclosures and compute node blades (dual-socket, relying on the x86_64 processor architecture), interconnect elements (Ethernet and InfiniBand) and management servers for the HPC services (including but not limited to operational lifecycle management, Operating System and services deployment, monitoring) tied to this new cluster
      • +
• implementation within one of the University Computer Centre (CDC) server rooms (CDC S-02-004, adjacent to the server room hosting the Iris cluster and its associated storage system), specialized for hosting compute equipment supporting direct liquid cooling through a separate high-temperature water circuit, thus guaranteeing unprecedented energy efficiency and equipment density. In the first phase of operation, the system will be connected to the existing cold-water circuit and must be able to operate under these conditions
      • +
      • [...]
      • +
      +
    • +
    • Lot 2: Adaptation and extension of the existing High-Performance Storage systems
        +
• includes the extension of the existing primary high-performance storage solution featuring a SpectrumScale/GPFS filesystem (based on a DDN GridScaler solution installed as per attribution of the RFP 160019) hosting the user home and project directories, to enable the utilisation of the GPFS filesystem from both the existing Iris and the new Aion clusters
      • +
• enabling access to the Lustre-based SCRATCH filesystem (based on a DDN ExaScaler solution installed as per attribution of the RFP 170035) from the new compute cluster is considered a plus. Enhancing and adapting the InfiniBand interconnection to guarantee the current performance characteristics while under load from all clients (existing and new compute clusters) is also considered a plus.
      • +
      • [...]
      • +
      +
    • +
    • Lot 3: Adaptation of the network (Ethernet and IB)
        +
• integration of the new cluster within the existing Ethernet-based data and management networks, which involves the extension and consolidation of the current Ethernet topology
      • +
      • adaptation and extension of the existing InfiniBand (IB) topology to allow for bridging the two networks (Iris "island" and Aion "island")
      • +
      • [...]
      • +
      +
    • +
    +
    +

    October-November 2019

    +
      +
    • Bids Opening for both RFPs on October 29, 2019.
        +
      • Starting offers analysis by the ULHPC team, together with the procurement and legal department of the University
      • +
      +
    • +
    +

    December 2019

    +
      +
    • Awarding notification to the vendors
        +
  • RFP 190027 attributed to Atos to provide:
      • +
  • Lot 1: the new DLC aion supercomputer, composed of 318 AMD compute nodes hosted within a compute cell made of 4 adjacent BullSequana XH2000 racks
          +
        • Fast Interconnect: HDR Infiniband Fabric in a Fat tree topology (2:1 blocking)
        • +
        • Associated servers and management stack
        • +
        +
      • +
      • Lot 2: Adaptation and extension of the existing High-Performance Storage systems.
          +
        • In particular, the usable storage capacity of the existing primary high-performance storage solution (SpectrumScale/GPFS filesystem) will be extended by 1720TB/1560TiB to reach a total of 4.41 PB
        • +
        +
      • +
  • Lot 3: Adaptation of the network (Ethernet and IB)
      • +
      +
    • +
    +

    See also Atos Press Release + Aion Supercomputer Overview

    +

    2020

    +

    January 2020

    +
      +
    • Kickoff meeting -- see UL Newsletter
        +
      • planning for a production release of the new cluster in May 2020
      • +
      +
    • +
    +

    February-March 2020: Start of global COVID-19 crisis

    +
      +
    • COVID-19 Impact on HPC Activities
        +
      • all operations tied to the preparation and installation of the new aion cluster are postponed
      • +
      • ULHPC systems remain operational, technical and non-technical staff are working remotely from home
      • +
      • Global worldwide delays on hardware equipment production and shipment
      • +
      +
    • +
    +

    July 2020

    +
      +
    • +

      Start Phase 3 of the deconfinement as per UL policy

      +
        +
      • Preparation work within the CDC server room by the UL external partners slowly restarted
          +
        • target finalization of the CDC-S02-004 server room by end of September
        • +
        +
      • +
      • Assembly and factory burn tests completed; Lot 1 (DLC) ready for shipment to the University
          +
        • Target date: Sept 14, 2020; in practice postponed until after Oct 19, 2020 to allow the CDC preparation work to be completed by the University and its partners.
        • +
        +
      • +
      • +

        ULHPC maintenance with physical intervention of external expert support team by DDN

        +
          +
        • preparation work for iris storage (HW upgrade, GPFS/SpectrumScale Metadata pool extension, Lustre upgrade)
        • +
        +
      • +
      • +

        Start and complete the first Statement of Work for DDN Lot 2 installation

        +
      • +
      +
    • +
    +

    Aug 2020

    +
      +
    • +

      Consolidated work by ULHPC team on Slurm configuration

      +
        +
      • Updated model for Fairshare, Account Hierarchy and limits
      • +
      +
    • +
    • +

      Pre-shipment of part of the Ethernet network equipment (Lot 3)

      +
    • +
    +

    Sept - Oct 2020

    +
      +
    • +

      Delivery Lot 1 (Aion DLC) and Lot 3 (Ethernet) equipment

      +
        +
      • Ethernet network installation done by ULHPC between Sept 3 - 24, 2020
      • +
      +
    • +
    • +

      CDC S02-004 preparation work (hydraulic part)

      +
        +
      • originally due by Sept 15, 2020, this work was delayed and finally completed on Oct 19, 2020
      • +
      +
    • +
    +

    Nov 2020

    +
      +
    • Partial Delivery of equipment (servers, core switches)
        +
      • Service servers and the remaining network equipment were racked by the ULHPC team
      • +
      +
    • +
    +

    Dec 2020

    +
      +
    • Confinement restrictions lifted in France, allowing a French team from Atos to come onsite
    • +
    • Delivery of remaining equipment (incl. Lot 1 sequana racks and compute nodes)
    • +
    • Compute rack (Lot 1 DLC) installation start
    • +
    +

    2021

    +

    Jan - Feb 2021

    +
      +
    • +

      The 4 DDN expansion enclosures shipped with the lifting tools and pressure tools

      +
        +
      • +

        Lot 1: Sequana racks and compute nodes finally positioned and internal Infiniband cabling done

        +
      • +
      • +

        Lot 2: DDN disk enclosure racked

        +
          +
        • the rack was adapted to be able to close the rear door
        • +
        +
      • +
      • +

        Lot 3: Ethernet and IB Network

        +
          +
        • ULHPC cables were used to cable service servers to make progress on the software configuration
        • +
        +
      • +
      +
    • +
    • +

      Service servers and compute nodes deployment start remotely

      +
    • +
    +

    Mar - Apr 2021

    +
      +
    • Start GS7990 and NAS server installation (Lot 2)
    • +
    • Start installation of Lot 3 (Ethernet side)
    • +
    +

    May - June 2021

    +
      +
    • IB EDR cables delivered and installed
    • +
    • Merge of the Iris and Aion Infiniband islands
    • +
    +

    Jul - Aug - Sept 2021

    +
      +
    • Slurm Federation between both clusters Iris and Aion
    • +
    • Benchmark performance results submitted (HPL, HPCG, Green500, Graph500, IOR, IO500)
    • +
    • Pre-Acceptance validated and release of the Aion supercomputer for beta testers
    • +
    +

    Oct - Nov 2021

    + +

    + + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/systems/index.html b/systems/index.html new file mode 100644 index 00000000..909b16ad --- /dev/null +++ b/systems/index.html @@ -0,0 +1,2965 @@ + + + + + + + + + + + + + + + + + + + + + + + + Overview - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    HPC @ Uni.lu

    +

    +

    +

    For more details, see the reference ULHPC Article:

    +
    +

    ACM Reference Format | ORBilu entry | ULHPC blog post | slides:
    +Sebastien Varrette, Hyacinthe Cartiaux, Sarah Peter, Emmanuel Kieffer, Teddy Valette, and Abatcha Olloh. 2022. Management of an Academic HPC & Research Computing Facility: The ULHPC Experience 2.0. In 6th High Performance Computing and Cluster Technologies Conference (HPCCT 2022), July 08-10, 2022, Fuzhou, China. ACM, New York, NY, USA, 14 pages. +https://doi.org/10.1145/3560442.3560445

    +
    +

    Chronological Evolution

    +

    With the advent of the technological revolution and the digital transformation that have made all scientific disciplines computational, High-Performance Computing (HPC) is increasingly identified as a strategic asset and enabler to accelerate the research performed in all areas requiring intensive computing and large-scale Big Data analytics capabilities.

    +

    The University of Luxembourg (UL) has operated a large academic HPC facility since 2007, which remained the reference HPC implementation within the country until 2021, offering a cutting-edge research infrastructure to Luxembourg public research while serving as edge access to the Euro-HPC Luxembourg supercomputer, which is operated by LuxProvide and more focused on serving the private sector. For the ULHPC facility, special focus was placed on developing large computing power combined with huge data storage capacity to accelerate the research performed in intensive computing and large-scale data analytics (Big Data). This was made possible through an ambitious funding strategy enabled from the early stage of the HPC developments, supported at the rectorate level to establish the HPC strategy as transversal to all research domains.

    +

    For more details: hpc.uni.lu

    +

    Capacity evolution

    +

    The first production system, installed in 2007, was Chaos, with a final theoretical peak performance of 14.5 TFlop/s. Gaia was then launched in 2011 as its replacement and reached a theoretical peak performance of 145.5 TFlop/s; it was the first of our computing clusters to introduce GPU accelerators to our users. Both systems were kept running until their decommissioning in 2019.

    +
    +

    Info

    +

    Currently, Iris (R_\text{peak} = 1071 TFlops) and Aion (R_\text{peak} = 1693 TFlops) are our production systems, sharing the same High Performance Storage solutions, with Aion serving as our flagship supercomputer until 2024.

    +
    +

    The figures below illustrate the evolution of the computing and storage capacity of the ULHPC facility over the past years.

    +

    +

    +

    Experimental systems

    +

    We maintain (or used to maintain) several experimental systems in parallel: nyx, a testing cluster; pyro, an OpenStack-based cluster; viridis, a low-power ARM-based cluster. As of now, only our experimental Grid'5000 clusters are still maintained.

    +

    Usage

    +

    The figure below outlines the cumulative usage (in CPU Years) of the production clusters within the ULHPC facility for the time period 2015-2019.

    +
      +
    • During their lifetime, Gaia and Chaos processed respectively 4.5 million and 1.7 million jobs, accumulating 13835 years of CPU time usage.
    • +
    +

    +

    Naming conventions

    +

    Our clusters and supercomputers are named after Greek primordial deities or figures from Greek mythology, while keeping the names as short as possible.

    +
      +
    • chaos was, according to Hesiod's Theogony, the first thing to exist and thus seemed appropriate. Hesiod's Chaos has notably been interpreted as "the gaping void above the Earth created when Earth and Sky are separated from their primordial unity"
    • +
    • gaia is the personification of the Earth and the ancestral mother of all life. It sounded pertinent for our first system installed in Belval, meant to serve the growing life-science community and the newly created LCSB interdisciplinary centre for systems biomedicine.
    • +
    • iris is the personification and goddess of the rainbow and messenger of the gods.
    • +
    • aion is a Hellenistic deity associated with time, the orb or circle encompassing the universe, and the zodiac.
    • +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/systems/iris/compute/index.html b/systems/iris/compute/index.html new file mode 100644 index 00000000..5bad3ac1 --- /dev/null +++ b/systems/iris/compute/index.html @@ -0,0 +1,3214 @@ + + + + + + + + + + + + + + + + + + + + + + + + Compute Nodes - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Iris Compute Nodes

    +

    Iris is a cluster of x86-64 Intel-based compute nodes. +More precisely, Iris consists of 196 computational nodes named iris-[001-196] and features 3 types of computing resources:

    +
      +
    • 168 "regular" nodes, Dual Intel Xeon Broadwell or Skylake CPU (28 cores), 128 GB of RAM
    • +
    • 24 "gpu" nodes, Dual Intel Xeon Skylake CPU (28 cores), 4 Nvidia Tesla V100 SXM2 GPU accelerators (16 or 32 GB), 768 GB RAM
    • +
    • 4 "bigmem" nodes: Quad-Intel Xeon Skylake CPU (112 cores), 3072 GB RAM
    • +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Hostname (#Nodes) | Node type | Processor | RAM
    iris-[001-108] (108) | Regular Broadwell | 2 Xeon E5-2680v4 @ 2.4GHz [14c/120W] | 128 GB
    iris-[109-168] (60) | Regular Skylake | 2 Xeon Gold 6132 @ 2.6GHz [14c/140W] | 128 GB
    iris-[169-186] (18) | Multi-GPU Skylake | 2 Xeon Gold 6132 @ 2.6GHz [14c/140W], 4x Tesla V100 SXM2 16G | 768 GB
    iris-[191-196] (6) | Multi-GPU Skylake | 2 Xeon Gold 6132 @ 2.6GHz [14c/140W], 4x Tesla V100 SXM2 32G | 768 GB
    iris-[187-190] (4) | Large Memory Skylake | 4 Xeon Platinum 8180M @ 2.5GHz [28c/205W] | 3072 GB
    +

    Processors Performance

    +

    Each Iris node relies on an Intel x86_64 processor architecture with the following performance:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Processor Model | #core | TDP(*) | CPU Freq. (AVX-512 T.Freq.) | R_\text{peak} [TFlops] | R_\text{max} [TFlops]
    Xeon E5-2680v4 (Broadwell) | 14 | 120W | 2.4GHz (n/a) | 0.538 TF | 0.46 TF
    Xeon Gold 6132 (Skylake) | 14 | 140W | 2.6GHz (2.3GHz) | 1.03 TF | 0.88 TF
    Xeon Platinum 8180M (Skylake) | 28 | 205W | 2.5GHz (2.3GHz) | 2.06 TF | 1.75 TF
    +

    (*) The Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload.

    +
    Theoretical R_\text{peak} vs. Maximum R_\text{max} Performance for Intel Broadwell/Skylake

    The reported R_\text{peak} performance is computed as follows for the above processors:

    +
      +
    • The Broadwell processors can perform 16 Double Precision (DP) ops/cycle and support AVX2/FMA3.
    • +
    • The selected Skylake Gold processors have two AVX-512 units and are thus capable of performing 32 DP ops/cycle, but only at the AVX-512 Turbo Frequency (i.e., the maximum all-core frequency in turbo mode) instead of the base non-AVX core frequency. The reported values are extracted from the reference Intel specification documentation.
    • +
    +

    Then R_\text{peak} = ops/cycle \times Freq. \times \#Cores with the appropriate frequency (2.3 GHz instead of 2.6 for our Skylake processors).

    +

    Regarding the estimation of the Maximum Performance R_\text{max}, an efficiency factor of 85% is applied. It is computed from the expected performance of runs of the HPL benchmark workload.

    +
    +
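    As a worked example using only the figures quoted above: one Xeon Gold 6132 (14 cores) yields R_\text{peak} = 32 \times 2.3\,\text{GHz} \times 14 = 1.03 TF per socket, i.e. about 2.06 TF for a dual-socket node, and the estimated R_\text{max} \approx 0.85 \times 1.03 \approx 0.88 TF per socket.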

    Accelerators Performance

    +

    Iris is equipped with 96 NVIDIA Tesla V100-SXM2 GPU Accelerators with 16 or 32 GB of GPU memory, interconnected within each node through NVLink which provides higher bandwidth and improved scalability for multi-GPU system configurations.

    +

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    NVidia GPU Model | #CUDA core | #Tensor core | Power | Interconnect Bandwidth | GPU Memory | R_\text{peak} [TFlops]
    V100-SXM2 | 5120 | 640 | 300W | 300 GB/s | 16GB | 7.8 TF
    V100-SXM2 | 5120 | 640 | 300W | 300 GB/s | 32GB | 7.8 TF
    +

    Regular Dual-CPU Nodes

    +

    These nodes are packaged within Dell PowerEdge C6300 chassis, each hosting 4 PowerEdge C6320 blade servers.

    +

    +

    Broadwell Compute Nodes

    +

    Iris comprises 108 Dell C6320 "regular" compute nodes iris-001-108 relying on Broadwell Xeon processor generation, totalling 3024 computing cores.

    +
      +
    • Each node is configured as follows:
        +
      • 2 Intel Xeon E5-2680v4 @ 2.4GHz [14c/120W]
      • +
      • RAM: 128 GB DDR4 2400MT/s (4x16 GB DIMMs per socket, 8 DIMMs per node)
      • +
      • SSD 120GB
      • +
      • InfiniBand (IB) EDR ConnectX-4 Single Port
      • +
      • Theoretical Peak Performance per Node: R_\text{peak} 1.075 TF (see processor performance)
      • +
      +
    • +
    +
    +

    Reserving a Broadwell node

    +

    If you want to specifically reserve a broadwell node (iris-[001-108]), you should use the feature -C broadwell on the batch partition: {sbatch|srun|salloc} -p batch -C broadwell [...]

    +
    +
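    For instance, a full-node interactive allocation on a Broadwell node could look as follows (a minimal sketch; the core count and walltime are illustrative and should be adapted to your workload):

    # Interactive allocation of one full Broadwell node (2x14 cores) for 1 hour
    salloc -p batch -C broadwell -N 1 --ntasks-per-node 1 -c 28 --time=01:00:00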

    Skylake Compute Nodes

    +

    Iris also features 60 Dell C6320 "regular" compute nodes iris-109-168 relying on Skylake Xeon processor generation, totalling 1680 computing cores.

    +
      +
    • Each node is configured as follows:
        +
      • 2 Intel Xeon Gold 6132 @ 2.6GHz [14c/140W]
      • +
      • RAM: 128 GB DDR4 2400MT/s (4x16 GB DIMMs per socket, 8 DIMMs per node)
      • +
      • SSD 120GB
      • +
      • InfiniBand (IB) EDR ConnectX-4 Single Port
      • +
      • Theoretical Peak Performance per Node: R_\text{peak} 2.061 TF (see processor performance)
      • +
      +
    • +
    +
    +

    Reserving a Regular Skylake node

    +

    If you want to specifically reserve a regular skylake node (iris-[109-168]), you should use the feature -C skylake on the batch partition: {sbatch|srun|salloc} -p batch -C skylake [...]

    +
    +
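    The same constraint can be set inside a batch script through #SBATCH directives (an illustrative sketch only; adjust the resources and walltime to your needs):

    #!/bin/bash -l
    #SBATCH -p batch
    #SBATCH -C skylake
    #SBATCH -N 1
    #SBATCH --ntasks-per-node 28
    #SBATCH --time=02:00:00

    srun hostname   # replace with your actual application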

    Multi-GPU Compute Nodes

    +

    Iris includes 24 Dell PowerEdge C4140 "gpu" compute nodes embedding on total 96 NVIDIA Tesla V100-SXM2 GPU Accelerators.

    +
      +
    • Each node is configured as follows:
        +
      • 2 Intel Xeon Gold 6132 @ 2.6GHz [14c/140W]
      • +
      • RAM: 768 GB DDR4 2666MT/s (12x 32 GB DIMMs per socket, 24 DIMMs per node)
      • +
      • 1 Dell NVMe 1.6TB
      • +
      • InfiniBand (IB) EDR ConnectX-4 Dual Port
      • +
      • 4x NVIDIA Tesla V100-SXM2 GPU Accelerators over NVLink
          +
        • iris-[169-186] feature 16G GPU memory - use -C volta as slurm feature
        • +
        • iris-[191-196] feature 32G GPU memory - use -C volta32 as slurm feature
        • +
        +
      • +
      • Theoretical Peak Performance per Node: R_\text{peak} 33.26 TF (see processor performance and accelerators performance)
      • +
      +
    • +
    +
    +

    Reserving a GPU node

    +

    Multi-GPU Compute Nodes can be reserved using the gpu partition. Use the -G [<type>:]<number> to specify the total number of GPUs required for the job

    +
    # Interactive job on 1 GPU nodes with 1 GPU
    +si-gpu -G 1
    +nvidia-smi      # Check allocated GPU
    +
    +# Interactive job with 4 GPUs on the same node, one task per gpu, 7 cores per task
    +si-gpu -N 1 -G 4 --ntasks-per-node 4 --ntasks-per-socket 2 -c 7
    +
    +# Job submission on 2 nodes, 4 GPUs/node and 4 tasks/node:
    +sbatch -p gpu -N 2 -G 4 --ntasks-per-node 4 --ntasks-per-socket 2 -c 7 launcher.sh
    +
    + +
    +
    +

    Do NOT reserve a GPU node if you don't need a GPU!

    +

    Multi-GPU nodes are scarce (and very expensive) resources and should be dedicated to GPU-enabled workflows.

    +
    +
    16 GB vs. 32 GB Onboard GPU Memory
      +
    • +

      Compute nodes with Nvidia V100-SXM2 16GB accelerators are registered with the -C volta feature.

      +
        +
      • it corresponds to the 18 Multi-GPU compute nodes iris-[169-186]
      • +
      +
    • +
    • +

      If you want to reserve GPUs with more memory (i.e. 32GB on-board HBM2), you should use -C volta32

      +
        +
      • you would then end on one of the 6 Multi-GPU compute nodes iris-[191-196]
      • +
      +
    • +
    +
    +
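    For example, to explicitly target the 32GB V100 nodes, the volta32 feature can be combined with a GPU request, following the same pattern as the examples above (a sketch; the number of tasks and cores per task are illustrative):

    # One node from iris-[191-196] with 4x V100 32GB, one task per GPU
    sbatch -p gpu -C volta32 -N 1 -G 4 --ntasks-per-node 4 -c 7 launcher.sh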

    Large-Memory Compute Nodes

    +

    Iris holds 4 Dell PowerEdge R840 Large-Memory ("bigmem") compute nodes iris-[187-190], totalling 448 computing cores.

    +
      +
    • Each node is configured as follows:
        +
      • 4 Xeon Platinum 8180M @ 2.5GHz [28c/205W]
      • +
      • RAM: 3072 GB DDR4 2666MT/s (12x64 GB DIMMs per socket, 48 DIMMs per node)
      • +
      • 1 Dell NVMe 1.6TB
      • +
      • InfiniBand (IB) EDR ConnectX-4 Dual Port
      • +
      • Theoretical Peak Performance per Node: R_\text{peak} 8.24 TF (see processor performance)
      • +
      +
    • +
    +
    +

    Reserving a Large-Memory node

    +

    These nodes can be reserved using the bigmem partition: +{sbatch|srun|salloc} -p bigmem [...]

    +
    +
    +

    DO NOT use bigmem nodes...

    +

    ... Unless you know what you are doing. We have too few large-memory compute nodes, so kindly keep them for workloads that truly need this kind of expensive resource.

    +
      +
    • In short: carefully check your workflow and memory usage before considering using these nodes!
        +
      • use seff <jobid> or sacct -j <jobid> [...] for instance (see the sketch right after this note)
      • +
      +
    • +
    +
    + + + + +
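    A minimal sketch of such a check on a completed job (the job ID and the selected sacct fields are illustrative):

    # Overall CPU and memory efficiency of a finished job
    seff 123456

    # Per-step maximum resident memory (MaxRSS), elapsed time and final state
    sacct -j 123456 --format=JobID,JobName,Partition,MaxRSS,Elapsed,State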
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/systems/iris/images/iris-compute_back.jpg b/systems/iris/images/iris-compute_back.jpg new file mode 100644 index 00000000..99e1cd60 Binary files /dev/null and b/systems/iris/images/iris-compute_back.jpg differ diff --git a/systems/iris/images/iris-compute_front.jpg b/systems/iris/images/iris-compute_front.jpg new file mode 100644 index 00000000..2b408dc2 Binary files /dev/null and b/systems/iris/images/iris-compute_front.jpg differ diff --git a/systems/iris/images/iris_cluster_overview.pdf b/systems/iris/images/iris_cluster_overview.pdf new file mode 100644 index 00000000..1a83136d Binary files /dev/null and b/systems/iris/images/iris_cluster_overview.pdf differ diff --git a/systems/iris/images/iris_cluster_overview.png b/systems/iris/images/iris_cluster_overview.png new file mode 100644 index 00000000..81cfa954 Binary files /dev/null and b/systems/iris/images/iris_cluster_overview.png differ diff --git a/systems/iris/index.html b/systems/iris/index.html new file mode 100644 index 00000000..bbd3df26 --- /dev/null +++ b/systems/iris/index.html @@ -0,0 +1,2999 @@ + + + + + + + + + + + + + + + + + + + + + + + + Iris System - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Iris Overview

    +

    Iris is a Dell/Intel supercomputer which consists of 196 compute nodes, totaling 5824 compute cores and 52224 GB RAM, with a peak performance of about 1.072 PetaFLOP/s.

    +

    All nodes are interconnected through a Fast InfiniBand (IB) EDR network1, configured over a Fat-Tree Topology (blocking factor 1:1.5). +Iris nodes are equipped with Intel Broadwell or Skylake processors. +Several nodes are equipped with 4 Nvidia Tesla V100 SXM2 GPU accelerators. +In total, Iris features 96 Nvidia V100 GPU-AI accelerators allowing for high speedup of GPU-enabled applications and AI/Deep Learning-oriented workflows. +Finally, a few large-memory (fat) computing nodes offer multiple high-core density CPUs and a large live memory capacity of 3 TB RAM/node, meant for in-memory processing of huge data sets.

    +

    Two global high-performance clustered file systems are available on all ULHPC computational systems: one based on GPFS/SpectrumScale, one on Lustre.

    +

    Iris Compute Iris Interconnect Global Storage

    +

    The cluster runs a Red Hat Linux Family operating system. On all clusters, the ULHPC Team supplies a large variety of HPC utilities, scientific applications and programming libraries to its user community. The user software environment is generated using Easybuild (EB) and is made available as environment modules from the compute nodes only.

    +

    Slurm is the Resource and Job Management System (RJMS) which provides computing resource allocation and job execution. For more information: see ULHPC slurm docs.

    +

    Cluster Organization

    +

    +

    Login/Access servers

    +
      +
    • Iris has 2 access servers (128 GB of memory each, general access) access[1-2]
    • +
    • Each login node has two sockets; each socket is populated with an Intel Xeon E5-2697A v4 processor (2.6 GHz, 16 cores)
    • +
    +
    +

    Access servers are not meant for compute!

    +
      +
    • The module command is not available on the access servers, only on the compute nodes
    • +
    • you MUST NOT run any computing process on the access servers.
    • +
    +
    +
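    In practice, you would request an allocation from the access server and only use the module system once on a compute node. A minimal sketch (partition, resources, walltime and the module name are illustrative):

    # From access1/access2: open an interactive shell on a compute node
    srun -p batch -N 1 -c 4 --time=00:30:00 --pty bash -i

    # On the compute node, the module command is now available
    module avail
    module load toolchain/foss   # module name purely illustrative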

    Rack Cabinets

    +

    The Iris cluster (management, compute and interconnect) is installed across 7 racks within a row of cabinets in the premises of the Centre de Calcul (CDC), in the CDC-S02-005 server room.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Server Room | Rack ID | Purpose | Type | Description
    CDC-S02-005 | D02 | Network | | Interconnect equipment
    CDC-S02-005 | D04 | Management | | Management servers, Interconnect
    CDC-S02-005 | D05 | Compute | regular | iris-[001-056], interconnect
    CDC-S02-005 | D07 | Compute | regular | iris-[057-112], interconnect
    CDC-S02-005 | D09 | Compute | regular | iris-[113-168], interconnect
    CDC-S02-005 | D11 | Compute | gpu, bigmem | iris-[169-177,191-193] (gpu), iris-[187-188] (bigmem)
    CDC-S02-005 | D12 | Compute | gpu, bigmem | iris-[178-186,194-196] (gpu), iris-[189-190] (bigmem)
    +

    In addition, the global storage equipment (GPFS/SpectrumScale and Lustre, common to both Iris and Aion clusters) is installed in another row of cabinets of the same server room.

    +
    +
    +
      +
    1. +

      Infiniband (IB) EDR networks offer a 100 Gb/s throughput with a very low latency (0.6 µs). 

      +
    2. +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/systems/iris/interconnect/index.html b/systems/iris/interconnect/index.html new file mode 100644 index 00000000..7d599093 --- /dev/null +++ b/systems/iris/interconnect/index.html @@ -0,0 +1,2850 @@ + + + + + + + + + + + + + + + + + + + + + + + + Fast Local Interconnect - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Fast Local Interconnect Network

    +

    The Fast local interconnect network implemented within Iris relies on the Mellanox Infiniband (IB) EDR1 technology. +For more details, see Introduction to +High-Speed InfiniBand Interconnect.

    +

    One of the most significant differentiators between HPC systems and lesser performing systems is, apart from the interconnect technology deployed, the supporting topology. There are several topologies commonly used in large-scale HPC deployments (Fat-Tree, 3D-Torus, Dragonfly+ etc.).

    +

    +Iris (like Aion) is part of an Island which employs a "Fat-Tree" Topology2 which remains the widely used topology in HPC clusters due to its versatility, high bisection bandwidth and well understood routing.

    +

    Iris 2-Level 1:1.5 Fat-Tree is composed of:

    +
      +
    • 18x Infiniband EDR1 Mellanox SB7800 switches (36 ports)
        +
      • 12x Leaf IB (LIB) switches (L1), each with 12 EDR L1-L2 interlinks
      • +
      • 6x Spine IB (SIB) switches (L2), with 8 EDR downlinks each (total: 48 links) used for the interconnection with the Aion Cluster
      • +
      +
    • +
    • Up to 24 Iris compute nodes and servers connected per L1 switch via EDR, using 24 EDR ports
    • +
    +

    For more details: + ULHPC Fast IB Interconnect

    +
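    To inspect what a given node actually reports for its IB link (device, state, rate), you can query the HCA from a compute node; a sketch assuming the standard InfiniBand diagnostic tools (infiniband-diags / rdma-core) are available in your session:

    # Local HCA state and link rate (EDR = 100 Gb/s)
    ibstat

    # Alternative view through the RDMA user-space library
    ibv_devinfo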

    Illustration of Iris network cabling (IB and Ethernet) within one of the racks hosting the compute nodes:

    +

    +
    +
    +
      +
    1. +

      Enhanced Data Rate (EDR) – 100 Gb/s throughput with a very low latency, typically below 0.6 µs. 

      +
    2. +
    3. +

      with blocking factor 1:1.5. 

      +
    4. +
    +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/systems/iris/timeline/index.html b/systems/iris/timeline/index.html new file mode 100644 index 00000000..413b6a80 --- /dev/null +++ b/systems/iris/timeline/index.html @@ -0,0 +1,3314 @@ + + + + + + + + + + + + + + + + + + + + + + + + Timeline - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Iris Timeline

    +

    This page records a brief timeline of significant events and user environment changes on Iris. The Iris cluster has been in operation since the beginning of 2017 and was the flagship HPC supercomputer of the University of Luxembourg until 2020 and the release of the Aion supercomputer.

    +

    2016

    +

    September 2016

    +
      +
    • Official Public release of Iris cluster tenders on TED European tender and Portail des Marchés Publiques (PMP)
        +
      • RFP 160019: High Performance Storage System for the High Performance Computing Facility of the University of Luxembourg.
      • +
      • RFP 160020: High Performance Computing Facility (incl. Interconnect) for the University of Luxembourg.
      • +
      +
    • +
    +

    October 2016

    +
      +
    • Bids Opening for both RFPs on October 12, 2016.
        +
      • Starting offers analysis by the ULHPC team, together with the procurement and legal departments of the University
      • +
      +
    • +
    +

    November 2016

    +
      +
    • Awarding notification to the vendors
        +
      • RFP 160019 attributed to the Telindus/HPE/DDN consortium to provide High Performance Storage solution of capacity 1.44 PB (raw) (over GPFS/SpectrumScale Filesystem), with a RW performance above 10GB/s
      • +
      • RFP 160020 attributed to the Post/DELL consortium to provide a High Performance Computing (HPC) cluster of effective capacity R_\text{max} = 94.08 TFlops (raw capacity R_\text{peak} = 107.52 TFlops)
      • +
      +
    • +
    +

    2017

    +

    March-April 2017

    +

    Delivery and installation of the iris cluster composed of:

    +
      +
    • iris-[1-100], Dell PowerEdge C6320, 100 nodes, 2800 cores, 12.8 TB RAM
    • +
    • 10/40GB Ethernet network, high-speed Infiniband EDR 100Gb/s interconnect
    • +
    • SpectrumScale (GPFS) core storage, 1.44 PB
    • +
    • Redundant / load-balanced services with:
        +
      • 2x adminfront servers (cluster management)
      • +
      • 2x access servers (user frontend)
      • +
      +
    • +
    +

    May-June 2017

    +
      +
    • End of cluster validation
    • +
    • 8 new regular nodes added
        +
      • iris-[101-108], Dell PowerEdge C6320, 8 nodes, 224 cores, 1.024 TB RAM
      • +
      +
    • +
    • Official release of the iris cluster for production on June 12, 2017, on the occasion of the UL HPC School 2017.
    • +
    +

    October 2017

    +
      +
    • Official Public release of Iris Lustre Storage acquisition tenders on TED European tender and Portail des Marchés Publiques (PMP)
        +
      • RFP 170035: Complementary Lustre High Performance Storage System for the High Performance Computing Facility of the University of Luxembourg.
      • +
      +
    • +
    +

    November 2017

    +
      +
    • Bids Opening for Lustre RFP on November 28, 2017.
        +
      • Starting offers analysis by the ULHPC team, together with the procurement and legal departments of the University
      • +
      +
    • +
    +

    December 2017

    +
      +
    • Awarding notification to the vendors
        +
      • Lustre RFP 170035 attributed to the Fujitsu/DDN consortium to provide High Performance Storage solution of capacity 1.28 PB (raw)
      • +
      +
    • +
    • 60 new regular nodes added, this time based on Skylake processors
        +
      • iris-[109-168], Dell PowerEdge C6420, 60 nodes, 1680 cores, 7.68 TB RAM
      • +
      +
    • +
    +

    2018

    +

    February 2018

    +
      +
    • iris cluster moved from CDC S-01 to CDC S-02
    • +
    +

    April 2018

    +
      +
    • SpectrumScale (GPFS) DDN GridScaler extension to reach 2284TB raw capacity
        +
      • new expansion unit and provisioning of enough complementary disks to feed the system.
      • +
      +
    • +
    • Delivery and installation of the complementary Lustre storage, with 1280 TB raw capacity
    • +
    +

    July 2018

    +
      +
    • Official Public release of tenders on TED European tender and Portail des Marchés Publiques (PMP)
        +
      • RFP 180027: Complementary Multi-GPU and Large-Memory Computer Nodes for the High Performance Computing Facility of the University of Luxembourg.
      • +
      +
    • +
    +

    September 2018

    +
      +
    • Bids Opening for Multi-GPU and Large-Memory nodes RFP on September 10, 2018.
        +
      • Starting offers analysis by the ULHPC team, together with the procurement and legal departments of the University
      • +
      +
    • +
    +

    October 2018

    +
      +
    • Awarding notification to the vendors
        +
      • RFP 180027 attributed to the Dimension Data/Dell consortium
      • +
      +
    • +
    +

    Dec 2018

    +
      +
    • New Multi-GPU and Bigmem compute nodes added
        +
      • iris-[169-186]: Dell C4140, 18 GPU nodes x 4 Nvidia V100 SXM2 16GB, part of the gpu partition
      • +
      • iris-[187-190]: Dell R840, 4 Bigmem nodes 4x28c i.e. 112 cores per node, part of the bigmem partition
      • +
      +
    • +
    +

    2019

    +

    May 2019

    +
      +
    • 6 new Multi-GPU nodes added
        +
      • iris-[191-196]: Dell C4140, 6 GPU nodes x 4 Nvidia V100 SXM2 32GB, part of the gpu partition
      • +
      +
    • +
    +

    October 2019

    +
      +
    • SpectrumScale (GPFS) extension to allow 1Bn files capacity
        +
      • replacement of 2 data pools (HDD-based) with new metadata pools (SSD-based)
      • +
      +
    • +
    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/teaching-with-the-ulhpc/index.html b/teaching-with-the-ulhpc/index.html new file mode 100644 index 00000000..854451f8 --- /dev/null +++ b/teaching-with-the-ulhpc/index.html @@ -0,0 +1,2891 @@ + + + + + + + + + + + + + + + + + + + + + + + + Teaching with the ULHPC - ULHPC Technical Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Teaching with the ULHPC

    +

    If you plan to use the ULHPC for teaching groups of students, we highly recommend that you contact us (the HPC team) for the following reasons:

    +
      +
    • When possible, we can plan our maintenance sessions outside of your planned teaching / training dates.
    • +
    • We can help with the reservation of HPC resources (e.g., GPU or big memory nodes) as some are highly booked and may not be available on-demand on the day of your teaching or training session.
    • +
    • We can provide temporary ULHPC accounts for your students / attendees.
    • +
    +

    Resource reservation

    +

    The ULHPC offers different types of computing nodes and their availability can vary greatly throughout the year. In particular, GPU and big memory nodes are rare and intensively used. If you plan to use them for a teaching session, please contact our team at hpc-team@uni.lu.

    +
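    For instance, if the HPC team creates a named Slurm reservation for your session, you and your students would typically submit jobs against it as follows (a sketch; the reservation name hpc-class is hypothetical):

    # List active reservations (name, node list, start/end time)
    scontrol show reservation

    # Submit a job within the hypothetical reservation 'hpc-class'
    sbatch --reservation=hpc-class -p batch my_job.sh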

    Temporary student accounts

    +

    For hands-on sessions involving students or trainees who don't necessarily have an ULHPC account, we can provide temporary access. As a teacher / trainer, your account will also have access to all the student / trainee accounts to simplify interactions and troubleshooting during your sessions.

    +

    Please contact our team at hpc-team@uni.lu to help you in the preparation of your teaching / training session.

    + + + + +
    + + + + + + + + + + + + + + + + + + + \ No newline at end of file