RPKI Documentation


Welcome to the documentation for the Resource Public Key Infrastructure (RPKI). RPKI is a community-driven technology based on open standards that aims to make Internet routing more secure. If you are new to this documentation, we recommend reading the Introduction first for an overview of what it offers.

The table of contents below and the sidebar give you easy access to the documentation for the topics you are interested in. You can also use the search function in the top left corner.

Note

This documentation is an open source project maintained by the RPKI team at NLnet Labs, with contributions from the network operator community around the world. We always appreciate your feedback and improvements.

You can submit an issue or pull request in the GitHub repository, or post a message on the RPKI mailing list. If you are interested in translating this project, please refer to this guide to get started.

This documentation is organised as follows.

Introduction

Welcome to the documentation for the Resource Public Key Infrastructure (RPKI). Its purpose is to provide an overview of the RPKI technology and several of the tools being developed for it. Any software developer can add the documentation for their own tools to this project.

This page gives an overview of RPKI and explains how it helps secure Internet routing based on the Border Gateway Protocol (BGP). Throughout this documentation, you will learn how RPKI benefits the security of your own organisation on the Internet, as well as that of others.

About This Documentation

This documentation is continuously written, corrected and edited by the RPKI team at NLnet Labs. The initial version was written by Alex Band, Tim Bruijnzeels and Martin Hoffmann. Over time, the network operator community, researchers and other interested parties around the world have added information. The documentation is edited as text files in the reStructuredText markup language and compiled into a static website and offline documents using the open source tools Sphinx and ReadTheDocs.

Note

You can contribute to this documentation by creating issues or submitting patches via pull requests in the GitHub repository.

All content is under the Creative Commons Attribution 3.0 (CC-BY 3.0) licence and attributed to "The RPKI team at NLnet Labs and RPKI community".

About RPKI

RPKI allows holders of Internet number resources to make verifiable statements about how they intend to use their resources. To achieve this, it uses a public key infrastructure that creates a chain of resource certificates following the same structure as the way IP addresses and AS numbers are handed down.

RPKI is used to make Internet routing more secure. It is a community-driven system in which open source software developers, router vendors and all five Regional Internet Registries (ARIN, APNIC, AFRINIC, LACNIC and the RIPE NCC) participate.

Currently, RPKI is used to let the legitimate holder of a block of IP addresses make an authoritative statement about which ASes are authorised to originate their prefix in BGP. In turn, other network operators can download and validate these statements and make routing decisions based on them. This process is referred to as Route Origin Validation (ROV), and it is a stepping stone towards providing Path Validation in the future.

Document Structure

This documentation consists of three main sections.

  • The General section contains this introduction as well as information about licensing and authors, along with the FAQ and Quick Help.
  • The RPKI Technology section explains the RPKI technology and standards, giving you a good understanding of its requirements and how it works. This enables you to choose the solution best suited to your organisation for generating, publishing and using RPKI data.
  • The RPKI Tools section describes the various open source projects that are maintained to support RPKI.

FAQ

Note

This is a community-driven RPKI FAQ, originally written by Alex Band, Job Snijders, David Monosov and Melchior Aelmans. Network operators around the world have contributed to these questions and answers. The content is published on GitHub, where you can send a pull request with edits or additions, or fork the content for use elsewhere.

How RPKI Works

What is RPKI and why was it developed?

The global routing system of the Internet consists of a large number of functionally independent actors (Autonomous Systems), which exchange routing information using the Border Gateway Protocol (BGP). The system is very dynamic and flexible by design. Connectivity and routing topologies are subject to changes in routing information, which propagate around the world within minutes. The weakness of this system is that these changes cannot be validated against information that exists outside of the BGP protocol itself.

RPKI is a way to define data outside of the BGP protocol that can be used to validate the authenticity of the information exchanged within it. The RPKI standards were developed by the IETF (Internet Engineering Task Force) to embed part of the resource information on Internet routing and addressing in a cryptographic system. This information is public, and anyone can access it and cryptographically verify its integrity.

We already use the IRR to verify route origins. Why do we need RPKI now?

Anyone who has been involved in Default Free Zone Internet engineering for a while will be familiar with RPSL (Routing Policy Specification Language), the routing policy specification language originally defined in RFC 2280 in 1998. While RPSL created considerable early enthusiasm and saw some traction, the Internet was rapidly growing at the time, and the primary focus was on data availability rather than data trustworthiness. Everyone was busy hastily documenting just enough policy to "make things work" with other people's RPSL parsing scripts, in pursuit of that final successful ping.

Over time, this has created a large repository of stale data of uncertain validity, spread across dozens of routing registries around the world. In addition, RPSL and its supporting tools have proven to be too complex to consistently transpose policy into router configuration language. As a result, most published RPSL data is neither sufficiently accurate and up to date for filtering purposes, nor sufficiently comprehensive and precise to serve as the golden master for router configuration.

RPKI aims to complement and expand upon these efforts by focusing primarily on data trustworthiness, timeliness and accuracy. RPKI ROAs are hierarchically delegated by the RIRs based on strict criteria, and are cryptographically verifiable. This allows the Internet community to build an up-to-date and accurate view of IP address origination on the Internet.

Why invest in RPKI? Wouldn't it be easier to fix the Internet Routing Registry (IRR)?

The main weakness of the IRR is that it is not a globally deployed system and lacks the authorisation model needed to make it watertight. As a result, out of all the published information on routing intent, it is difficult to determine what is legitimate, authentic data and what isn't. RPKI solves these two problems, as it allows any legitimate holder of IP resources in the world to reliably make cryptographically verifiable statements.

Is it true that BGP4 has reached its limits?

Unfortunately, replacing BGP outright is practically impossible right now. However, work needs to be done to fix the broken parts and improve the situation.

RPKI relies on an X.509 public key infrastructure. Won't we see a repeat of the problems we have with untrustworthy SSL/TLS certificate authorities?

Instead of relying on a large number of certificate authorities that come pre-installed in browsers and operating systems and follow varying audit standards, RPKI relies on just five trust anchors, run by the Regional Internet Registries (RIRs).

These are well-established, openly governed organisations. Each operator that wishes to obtain an RPKI resource certificate already has a contractual relationship with one or more of the RIRs.

What is the value of RPKI-based origin validation without path validation?

While the deployment of Path Validation is desirable, the existing RPKI Origin Validation functionality addresses a large portion of the problem even without it.

Existing operational and economic incentives mean that the routes that matter most to each network are reached over the shortest possible AS path. As one example, network operators often set a higher Local Preference on routes learned at Internet exchanges (IXes) or over private peering. This reduces the risk that a bogus route wins the BGP Best Path Selection, even when it is forged to appear as though it was announced by the correct origin AS.

For transit providers, direct interconnection and short AS paths are key characteristics that put them in an ideal position to act on RPKI data and accept only legitimate routes for redistribution.

Furthermore, operational experience shows that the vast majority of route hijacks are not malicious but are caused by "fat-fingering", where an operator accidentally originates a prefix they are not the holder of. Origin Validation mitigates many of these problems.

While an attacker deliberately spoofing the origin AS could still take advantage of the lack of Path Validation, widespread deployment of RPKI Origin Validation makes such instances easier to pinpoint and address.

What results can I expect when comparing ROAs against the routes my routers receive?

A route will have one of three states: Valid, Invalid or NotFound (a.k.a. Unknown).

  • Valid: the route announcement is covered by at least one ROA.
  • Invalid: the prefix is announced from an unauthorised AS, or the prefix and AS match a ROA but the announcement has a prefix length exceeding the maximum length set in the ROA.
  • NotFound: the announced prefix is not covered (or is only partially covered) by any ROA.

For more details, see section 2 of RFC 6811.

What is the difference between a route leak and a route hijack?

A route leak is the propagation of routing announcements beyond their intended scope. That is, an announcement from an autonomous system (AS) of a BGP route learned from another AS violates the intended policies of the receiver, the sender, or one of the ASes along the AS path.

A route hijack is the unauthorised origination of a route.

Whether the cause is accidental or intentional, both route leaks and route hijacks can result in traffic taking a detour, being redirected, or being denied service. See RFC 7908 for more details.

If a ROA is cryptographically invalid, will ROV mark the corresponding route as Invalid?

A ROA that fails cryptographic verification is discarded, and the information it contains is not taken into account during the ROV process. A route marked Invalid on the basis of a valid ROA, on the other hand, means that the prefix is announced from an unauthorised AS, or that the prefix and AS match the ROA but the announcement has a prefix length exceeding the maximum length set in the ROA.

Operations and Impact

Does cryptographic validation affect my routers?

No. Routers do not need to perform any cryptographic operations for Route Origin Validation. The signatures are verified by external software, called the Relying Party software or RPKI Validator, which feeds the processed data to the router over a lightweight protocol. This architecture keeps the overhead on routers to a minimum.

Does running RPKI on my routers reduce BGP convergence speed?

No, filtering based on the RPKI validated cache has negligible impact on convergence speed. RPKI validation happens in parallel with route learning for new prefixes that are not yet present locally. As those prefixes become available, they are marked Valid, Invalid or NotFound, and the appropriate policy is applied.

Why do I need rsync to use a Validator?

In the original standards, rsync was defined as the main means of distributing RPKI data. While it served the system well in the early days, rsync has several drawbacks:

  • RPKI Relying Party software acting as an rsync client takes a dependency on rsync. Differences in rsync versions and supported options (such as --contimeout) can lead to unexpected results. Moreover, shelling out to rsync is inefficient: it adds an extra process to the RPKI pipeline, and its output can only be verified by scanning the disk.
  • As the global RPKI data set grows and more operators download and validate the data, the scaling problems become more serious. This applies not only to rsync itself but also to the server side, which has to compute the deltas.

To overcome these limitations, RRDP, a protocol that runs over HTTPS, was developed and standardised in RFC 8182. RRDP was specifically designed with scaling in mind, making it possible for CDNs to serve the RPKI data set at a global scale. In addition, HTTPS is well supported in programming languages, making it easier to develop more robust Relying Party software.

RRDP is currently implemented on the server side by ARIN, the RIPE NCC and APNIC. Most RPKI Validator implementations either already support RRDP or have it on their short-term roadmap.

All five RIRs offer a hosted RPKI system. Why would I want to run my own delegated RPKI system instead?

RPKI is designed as a distributed system in which every organisation can run its own CA and issue certificates and ROAs themselves. The hosted RPKI systems offered by the RIRs lower the barrier to entry and allow operators to gain operational experience before deciding whether to run their own CA.

For many operators, a hosted RPKI system will be sufficient in the long run. However, organisations that, for example, do not want to depend on a web interface for management, manage address space across multiple RIR regions, or have integrated ROA management into their BGP automation, may choose to run a CA on their own systems.

Should I run my own Validator when there are external data sources available on the Internet?

The value of resource holders signing statements about their holdings comes from being able to verify that the data is authentic and has not been tampered with in any way.

Delegating the verification of signatures to a third party sacrifices the accuracy and authenticity of the data. Conceptually, this is similar to DNSSEC validation, where it is best to validate on a local, trusted resolver.

Section 3 of RFC 7115 covers this topic exhaustively.

How often should I fetch new data from the RPKI repositories?

According to section 3 of RFC 7115, you should fetch new data every 4 to 6 hours. At the moment, fetching the ROAs from the largest repository takes around 10 to 15 minutes, so fetching every 15 to 30 minutes is reasonable without putting unnecessary load on the system.

If the RPKI system becomes unavailable or suffers a major failure, will the signed prefixes I announce become unreachable for others? And will traffic to the prefixes my routers learned over BGP be affected?

RPKI expresses routing intent in a positive way. Even if all RPKI Validators became unavailable, or all certificates and ROAs were revoked, the ROV state of routes would simply fall back to NotFound, just as if RPKI had never been used. According to section 5 of RFC 7115, routes in the NotFound state should be accepted. Note that, unfortunately, most routes currently have the NotFound state.

What happens to the prefixes my router learned over BGP if the Validator crashes and the router can no longer receive data?

All routers that support Route Origin Validation allow multiple Validators to be configured for redundancy. It is recommended to run several independent Validator instances, preferably on different subnets, so that you rely on multiple caches.

Even if all Validators became unavailable, all routes would simply fall back to the NotFound state, just as if Route Origin Validation had never been performed.

What if I want to apply special handling to specific routes without relying on the RPKI data?

You can apply your own policy to specific prefixes or announcements, overriding the RPKI data received from the repositories. The override mechanism is standardised in RFC 8416, "Simplified Local Internet Number Resource Management with the RPKI (SLURM)".
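Conceptually, SLURM combines two mechanisms: validation output filters, which drop validated payloads you distrust, and locally added assertions, which insert payloads of your own. The Python sketch below illustrates that idea only; the dict layout is a simplification, not the RFC 8416 JSON schema, and all prefixes and AS numbers are example values.

```python
# Illustrative sketch of SLURM-style local overrides applied to a set
# of validated ROA payloads (VRPs). Values are examples only.
from ipaddress import ip_network

vrps = [
    {"prefix": ip_network("192.0.2.0/24"), "maxlen": 24, "asn": 64496},
    {"prefix": ip_network("198.51.100.0/24"), "maxlen": 24, "asn": 64497},
]

slurm = {
    # Drop any VRP covering this prefix (e.g. a ROA we believe is stale).
    "prefix_filters": [ip_network("198.51.100.0/24")],
    # Locally trust this origin even though no ROA exists for it.
    "prefix_assertions": [
        {"prefix": ip_network("203.0.113.0/24"), "maxlen": 24, "asn": 64511},
    ],
}

def apply_slurm(vrps, slurm):
    # Keep only VRPs not covered by a filter, then add local assertions.
    kept = [v for v in vrps
            if not any(v["prefix"].subnet_of(f) for f in slurm["prefix_filters"])]
    return kept + slurm["prefix_assertions"]

effective = apply_slurm(vrps, slurm)
```

The effective VRP set then feeds route origin validation as usual; the router never needs to know which payloads were locally overridden.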

Is there any point in signing my routes and creating ROAs if I don't perform ROV myself?

Creating ROAs for your routes is worthwhile in its own right. Even if you don't perform ROV yourself, others will perform ROV based on the ROAs you created. And in the worst case, someone may try to hijack your prefixes one day; if you hadn't created ROAs by then...

Miscellaneous

What is the global adoption and data quality of RPKI like?

There are several initiatives that measure the adoption and data quality of RPKI.

I want to use the RPKI services from a specific RIR that I'm not currently a member of. Can I transfer my resources?

The RPKI services that each RIR offers differ in conditions, terms of service, availability and usability. Most RIRs have a transfer policy that allows their members to transfer resources from one RIR region to another. Organisations may wish to do this to bring all resources under one entity, simplifying management. Others may do it because they are looking for a specific set of terms with regard to the holdership of their resources. Please check with your RIR for the possibilities and conditions for resource transfers.

Will RPKI be used as a censorship mechanism allowing governments to make arbitrary prefixes unroutable on a whim?

Unlikely. In order to suppress a prefix, it would be necessary to both revoke the existing ROA (if one is present) and publish a conflicting ROA with a different origin.

These characteristics make RPKI a rather convoluted and uncertain mechanism for censorship, and one with broad visibility: the conflicting ROA, as well as the Regional Internet Registry under which it was issued, would be immediately accessible to everyone. A government would be much better off walking into the data center and confiscating your equipment.

What are the long-term plans for RPKI?

With RPKI Route Origin Validation being deployed in more and more places, there are several efforts to build upon this to offer out-of-band Path Validation. Autonomous System Provider Authorisation (ASPA) currently has the most traction in the IETF, defined in these drafts: draft-azimov-sidrops-aspa-profile and draft-azimov-sidrops-aspa-verification.

Quick Help

You may be reading this page because someone told you that your route is made Invalid by an RPKI ROA, and you are not sure what that means. The purpose of this page is to point you in the right direction and provide some further helpful information. It is not aimed at experts; it deliberately glosses over much of the jargon in order to provide answers that everyone can understand.

What are RPKI and ROAs?

RPKI stands for Resource Public Key Infrastructure, and ROA stands for Route Origin Authorisation.

What do RPKI and ROAs do?

RPKI and ROAs provide a means to assert which origin of a route is the legitimate one, and that all others are not.

How does it work?

All IP address space (v4 and v6) is allocated by IANA, which delegates it to one of the RIRs (ARIN, RIPE NCC, APNIC, LACNIC, AFRINIC). The RIRs in turn allocate it to other organisations. Each RIR has a portal where the holder of IP address space can assert the origin AS number. The portal generates a ROA corresponding to the combination of route and origin AS number, and the RIR publishes the ROA for everyone to see.

What does a ROA look like?

A ROA is a signed statement consisting of a prefix, a maximum prefix length and an origin AS number.

What happens next?

Any operator can fetch the list of ROAs from the RIRs and use it to act upon. Typically, each route announcement ends up in one of three states:

NotFound (also known as Unknown)
This is the default state, used when no ROA covering the route announcement has been created. All operators will install these routes in their routers.
Valid
This state means a ROA matches the route announcement. All operators will install these routes in their routers, and may even prefer them.
Invalid
This state means the route announcement conflicts with a ROA: either the origin AS number differs from the one set in the ROA, or the route is more specific than the maximum prefix length allows. Operators who apply RPKI strictly will not install these announcements in their routers.

What should I do if my route is in the Invalid state?

The only organisation that can make changes to a ROA is the holder of the IP address space as listed at the RIR. The holder most likely created the ROA in the RIR's hosted RPKI interface, which is part of the RIR's member portal.

One thing to keep in mind first: the existence of an RPKI Invalid route means that someone has already logged in to one of these portals and created a ROA for the IP address space in question. No one but the holder can create such a ROA, which means an account must already exist at the RIR linked to the holder of that IP address space.

Introduction

Resource Public Key Infrastructure (RPKI) revolves around the right to use Internet number resources, such as IP addresses and autonomous system (AS) numbers.

In this PKI, the legitimate holder of a block of IP addresses or AS numbers can obtain a resource certificate. Using the certificate, they can make authoritative, signed statements about the resources listed on it. To understand the structure of RPKI and its usage, we must first look at how Internet number resources are allocated globally.

Internet Number Resource Allocation

Before being formalised within an organisation, the allocation of Internet number resources, such as IP addresses and AS numbers, had been the responsibility of Jon Postel. At the time, he worked at the Information Sciences Institute (ISI) of the University of Southern California (USC). He performed the role of Internet Assigned Numbers Authority (IANA), which is presently a function of the Internet Corporation for Assigned Names and Numbers (ICANN).

Jon Postel in 1994, with a map of Internet top-level domains

Initially, the IANA function was performed globally, but as the work volume grew due to the expansion of the Internet, Regional Internet Registries (RIRs) were established over the years to take on this responsibility on a regional level. Until the available pool of IPv4 depleted in 2011, this meant that periodically, a large block of IPv4 address space was allocated from IANA to one of the RIRs. In turn, the RIRs would allocate smaller blocks to their member organisations, and so on. IPv6 address blocks and AS numbers are allocated in the same way.

Today, there are five RIRs responsible for the allocation and registration of Internet number resources within a particular region of the world:

The service regions of the five Regional Internet Registries

In the APNIC and LACNIC regions, Internet number resources are in some cases allocated to National Internet Registries (NIRs), such as NIC.br in Brazil and JPNIC in Japan. NIRs allocate address space to their members or constituents, which are generally organised at a national level. In the rest of the world, the RIRs allocate directly to their member organisations, typically referred to as Local Internet Registries (LIRs). Most LIRs are Internet service providers, enterprises, or academic institutions. LIRs either use the allocated IP address blocks themselves, or assign them to End User organisations.

Internet number resource allocation hierarchy

Mapping the Resource Allocation Hierarchy into the RPKI

As illustrated, the IANA has the authoritative registration of IPv4, IPv6 and AS number resources that are allocated to the five RIRs. Each RIR registers authoritative information on the allocations to NIRs and LIRs, and lastly, LIRs record to which End User organisation they assigned resources.

In RPKI, resource certificates attest to the allocation by the issuer of IP addresses or AS numbers to the subject. As a result, the certificate hierarchy in RPKI follows the same structure as the Internet number resource allocation hierarchy, with the exception of the IANA level. Instead, the five RIRs each run a root CA with a trust anchor from which a chain of trust for the resources they each manage is derived.

The chain of trust in RPKI, starting at the five RIRs

The IANA does not operate a single root certificate authority (CA). While this was originally a recommendation from the Internet Architecture Board (IAB) to eliminate the possibility of resource conflicts in the system, they reconsidered after operational experience in deployment had caused the RIRs to conclude that the RPKI system would be less brittle using multiple overlapping trust anchors.

X.509 PKI Considerations

The digital certificates used in RPKI are based on X.509, standardised in RFC 5280, along with extensions for IP addresses and AS identifiers described in RFC 3779. Because RPKI is used in the routing security context, a common misconception is that this is the Routing PKI. However, certificates in this PKI are called resource certificates and conform to the certificate profile described in RFC 6487.

Note

X.509 certificates are typically used for authenticating either an individual or, for example, a website. In RPKI, certificates do not include identity information, as their only purpose is to transfer the right to use Internet number resources.

In addition to RPKI not having any identity information, there is another important difference with commonly used X.509 PKIs, such as SSL/TLS. Instead of having to rely on a vast number of root certificate authorities which come pre-installed in a browser or an operating system, RPKI relies on just five trust anchors, run by the RIRs. These are well established, openly governed, not-for-profit organisations. Each organisation that wishes to get an RPKI resource certificate already has a contractual relationship with one or more of the RIRs.

In conclusion, RPKI provides a mechanism to make strong, testable attestations about Internet number resources. In the next sections, we will look at how this can be used to make Internet routing more secure.

Internet Routing

To understand how RPKI is used to make Internet routing more secure, we must first look at how routing works, what the weaknesses are and which elements RPKI can currently help protect against.

The global routing system of the Internet consists of a number of functionally independent actors called autonomous systems (AS), which use the Border Gateway Protocol (BGP) to exchange routing information.

An autonomous system is a set of Internet routable IP prefixes belonging to a network or a collection of networks that are all managed and supervised by a single entity or organisation. An AS utilises a common routing policy controlled by the entity and is identified by a globally unique 16 or 32-bit number. The AS number (ASN) is assigned by one of the five Regional Internet Registries (RIRs), just like IP address blocks.

The Border Gateway Protocol manages the routed peerings, prefix advertisement and routing of packets between different autonomous systems across the Internet. BGP uses the ASN to uniquely identify each system. In short, BGP is the routing protocol for AS paths across the Internet. The system is very dynamic and flexible by design. Connectivity and routing topologies are subject to change, which easily propagate globally within a few minutes.

Fundamentally, BGP is based on mutual trust between networks. When a network operator configures the routers in their AS, they specify which IP prefixes to originate and announce to their peers. There is no authentication or authorisation embedded within BGP. In principle, an operator can define any ASN as the origin and announce any prefix, also one they are not the holder of.

BGP Best Path Selection

BGP routing information includes the complete route to each destination. BGP uses the routing information to maintain a database of network reachability information, which it exchanges with other networks. For each prefix in the routing table, BGP continuously and dynamically makes decisions about the best path to reach a particular destination. After the best path is selected, the route is installed in the routing table.

Though there are many factors at play, two of them are the most important to keep in mind throughout the next sections: the preference for shortest path and most specific IP prefix.

Preference for Shortest Path

Out of all the possible routes that a router has in its Routing Information Base (RIB), BGP will always prefer the shortest path to its destination, minimising the amount of hops. When two matching prefixes are announced from two different networks on the Internet, BGP will route traffic to the destination that is topologically closest. This is an important feature of BGP, but when configuration errors occur, it can also be the cause of reachability problems.

When the announcement of a prefix is an exact match, the shortest path wins

Preference for Most Specific Prefix

Regardless of any local preference, path length or other attributes, when building the forwarding table, the router will always select the most specific IP prefix available. This behaviour is important, but it creates the possibility for almost any network to attract someone else's traffic by announcing an overlapping more specific prefix.

Regardless of the path length, the announcement of a more specific prefix always wins
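The forwarding behaviour described here is plain longest-prefix matching, which can be sketched as follows. All addresses, AS numbers and route descriptions are illustrative values, not real routing data.

```python
# Minimal sketch of longest-prefix matching in the forwarding table.
from ipaddress import ip_address, ip_network

# Candidate routes: a large aggregate learned via a short AS path and a
# more specific prefix learned via a longer path.
routes = {
    ip_network("10.0.0.0/16"): "via AS64496 (short path)",
    ip_network("10.0.42.0/24"): "via AS64511 (long path)",
}

def lookup(destination):
    # The forwarding table always selects the most specific matching
    # prefix, regardless of path length or other attributes.
    matches = [n for n in routes if ip_address(destination) in n]
    return routes[max(matches, key=lambda n: n.prefixlen)]

print(lookup("10.0.42.1"))  # the /24 wins despite the longer path
print(lookup("10.0.1.1"))   # only the /16 covers this destination
```

This is exactly why an overlapping more specific announcement attracts traffic: it wins the lookup no matter how distant its origin is.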

With this in mind, there are several problems that can arise as a result of this behaviour.

Routing Errors

Routing errors on the Internet can be classified as route leaks or route hijacks. RFC 7908 provides a working definition of a BGP route leak:

A route leak is the propagation of routing announcement(s) beyond their intended scope. That is, an announcement from an Autonomous System (AS) of a learned BGP route to another AS is in violation of the intended policies of the receiver, the sender, and/or one of the ASes along the preceding AS path. The intended scope is usually defined by a set of local redistribution/filtering policies distributed among the ASes involved. Often, these intended policies are defined in terms of the pair-wise peering business relationship between autonomous systems.

A route hijack, also called prefix hijack, or IP hijack, is the unauthorised origination of a route.

Note

Route leaks and hijacks can be accidental or malicious, but most often arise from accidental misconfigurations. The result can be redirection of traffic through an unintended path. This may enable eavesdropping or traffic analysis and may, in some cases, result in a denial of service or black hole.

Routing incidents occur every day. While several decades ago outages and redirections were often accidental, in recent years they have become more malicious in nature. Some notable events were the AS 7007 incident in 1997, Pakistan's attempt to block YouTube access within their country, which resulted in taking down YouTube entirely in 2008, and lastly, the almost 1,300 addresses for Amazon Route 53 that got rerouted for two hours in order to steal cryptocurrency, in 2018.

Mitigation of Routing Errors

One weakness of BGP is that routing errors cannot easily be deduced from information within the protocol itself. For this reason, network operators have to carefully gauge what the intended routing policy of their peers is. As a result, it is imperative that networks employ filters to only accept legitimate traffic and drop everything else.

There are several well known methods to achieve this. Certain backbone and private peers require a valid Letter of Agency (LOA) to be completed prior to allowing the announcement or re-announcement of IP address blocks. A more widely accepted method is the use of Internet Routing Registry (IRR) databases, where operators can publish their routing policy. Both methods allow other networks to set up filters accordingly.

The Internet Routing Registry

The Internet Routing Registry (IRR) is a distributed set of databases allowing network operators to describe and query for routing intent. The IRR is used as a verification mechanism of route origination and is widely, though not universally, deployed to prevent accidental or intentional routing disturbances.

The notation used in the IRR is the Routing Policy Specification Language (RPSL), which was originally defined in RFC 2280 in 1998. RPSL is a very expressive language, allowing for an extremely detailed description of routing policy. While IRR usage had created considerable early enthusiasm and has seen quite some traction, the Internet was rapidly growing at the time. This meant that the primary focus was on data availability rather than data trustworthiness.

In later years, it was considered a good practice to extensively document how incoming and outgoing traffic was treated by the network, but nowadays the most prevalent usage is to publish and query for route objects, describing from which ASN a prefix is intended to be originated:

route:          192.0.2.0/24
descr:          Examplenet announcement of 192.0.2.0/24
country:        NL
origin:         AS65536
mnt-by:         EXAMPLENET-MNT
mnt-routes:     EXAMPLENET-MNT
last-modified:  2018-08-30T07:50:19Z
source:         RIPE

As explained earlier, only the Regional Internet Registries have authoritative information on the legitimate holder of an Internet number resource. This means that the entries in their IRR databases are authenticated, but they are not in any of the other routing registries. Over time, this has created an expansive repository of obsolete data of uncertain validity, spread across dozens of routing registries around the world.

Additionally, the RPSL language and supporting tools have proven to be too complex to consistently transpose policy into router configuration language. This resulted in most published RPSL data being neither sufficiently accurate and up to date for filtering purposes, nor sufficiently comprehensive or precise for being the golden master in router configuration.

In conclusion, the main weakness of the IRR is that it is not a globally deployed system and it lacks the authorisation model to make the system water tight. The result is that out of all the information on routing intent that is published, it is difficult to determine what is legitimate, authentic data and what isn’t.

RPKI solves these problems, as you can be absolutely sure that an authoritative, cryptographically verifiable statement can be made by any legitimate IP resource holder in the world. In the next sections we will look at how this is achieved.

Securing BGP

Now that we've looked at how the RPKI structure is built and understand the basics of Internet routing, we can look at how RPKI can be used to make BGP more secure.

RPKI provides a set of building blocks allowing for various levels of protection of the routing system. The initial goal is to provide route origin validation, offering a stepping stone to providing path validation in the future. Both origin validation and path validation are documented IETF standards. In addition, there are drafts describing autonomous system provider authorisation, aimed at providing a more lightweight, incremental approach to path validation.

Route Origin Validation

With route origin validation (ROV), the RPKI system tries to closely mimic what route objects in the IRR intend to do, but then in a more trustworthy manner. It also adds a couple of useful features.

Origin validation is currently the only functionality that is operationally used. The five RIRs provide functionality for it, there is open source software available for creation and publication of data, and all major router vendors have implemented ROV in their platforms. Various router software implementations offer support for it, as well.

Route Origin Authorisations

Using the RPKI system, the legitimate holder of a block of IP addresses can use their resource certificate to make an authoritative, signed statement about which autonomous system is authorised to originate their prefix in BGP. These statements are called Route Origin Authorisations (ROAs).

Each CA can issue Route Origin Authorisations

The creation of a ROA is solely tied to the IP address space that is listed on the certificate and not to the AS numbers. This means the holder of the certificate can authorise any AS to originate their prefix, not just their own autonomous systems.

Maximum Prefix Length

In addition to the origin AS and the prefix, the ROA contains a maximum length (maxLength) value. This is an attribute that a route object in RPSL doesn't have. Described in RFC 6482, the maxLength specifies the maximum length of the IP address prefix that the AS is authorised to advertise. This gives the holder of the prefix control over the level of deaggregation an AS is allowed to do.

For example, if a ROA authorises a certain AS to originate 192.0.1.0/24 and the maxLength is set to /25, the AS can originate a single /24 or two adjacent /25 blocks. Any more specific announcement is unauthorised by the ROA. Using this example, the shorthand notation for prefix and maxLength you will often encounter is 192.0.1.0/24-25.
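The set of announcements a prefix/maxLength pair permits can be enumerated with a small helper. This is an illustrative sketch using Python's standard ipaddress module; the function name and values are examples, not part of any RPKI tool.

```python
# Enumerate the announcements permitted by a ROA's prefix and maxLength.
from ipaddress import ip_network

def authorised_prefixes(roa_prefix, max_length):
    """All prefixes a ROA with this prefix and maxLength permits."""
    base = ip_network(roa_prefix)
    result = []
    for length in range(base.prefixlen, max_length + 1):
        if length == base.prefixlen:
            result.append(base)          # the ROA prefix itself
        else:
            result.extend(base.subnets(new_prefix=length))
    return result

# 192.0.1.0/24-25 permits the /24 itself and its two /25 halves;
# anything more specific is unauthorised by the ROA.
for p in authorised_prefixes("192.0.1.0/24", 25):
    print(p)
```

Note how quickly the permitted set grows with a long maxLength, which is one reason the warning below advises conservative use of it.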

Warning

According to RFC 7115, operators should be conservative in use of maxLength in ROAs. For example, if a prefix will have only a few sub-prefixes announced, multiple ROAs for the specific announcements should be used as opposed to one ROA with a long maxLength.

Liberal usage of maxLength opens up the network to a forged origin attack. ROAs should be as precise as possible, meaning they should match prefixes as announced in BGP.

In a forged origin attack, a malicious actor spoofs the AS number of another network. With a minimal ROA length, the attack does not work for sub-prefixes that are not covered by an overly long maxLength. For example, if, instead of 10.0.0.0/16-24, one issues 10.0.0.0/16 and 10.0.42.0/24, a forged origin attack cannot succeed against 10.0.66.0/24. The attacker must instead target the whole /16, which is more likely to be noticed because of its size.

Route Announcement Validity

When a network operator creates a ROA for a certain combination of origin AS and prefix, this will have an effect on the RPKI validity of one or more route announcements. Once a ROA is validated, the resulting object contains an IP prefix, a maximum length, and an origin AS number. This object is referred to as validated ROA payload (VRP).

When comparing VRPs to route announcements seen in BGP, RFC 6811 describes their possible statuses, which are:

Valid
The route announcement is covered by at least one VRP. The term covered means that the prefix in the route announcement is equal to, or more specific than, the prefix in the VRP.
Invalid
The prefix is announced from an unauthorised AS, or the announcement is more specific than is allowed by the maxLength set in a VRP that matches the prefix and AS.
NotFound
The prefix in this announcement is not, or only partially covered by a VRP.

Anyone can download and validate the published certificates and ROAs and make routing decisions based on these three outcomes. In the Using RPKI Data section, we'll cover how this works in practice.
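The three outcomes can be sketched as a small validation routine over a list of VRPs. This is an illustrative sketch of the RFC 6811 procedure using Python's standard ipaddress module; the VRPs and announcements are example values, not real RPKI data.

```python
# Minimal sketch of RFC 6811 route origin validation.
from ipaddress import ip_network

# Validated ROA Payloads: (prefix, maxLength, origin ASN). Example data.
vrps = [
    (ip_network("192.0.2.0/24"), 24, 64496),
    (ip_network("198.51.100.0/22"), 24, 64497),
]

def rov_state(prefix_str, origin_asn):
    prefix = ip_network(prefix_str)
    covered = False
    for vrp_prefix, max_len, vrp_asn in vrps:
        # "Covered" means the announced prefix is equal to or more
        # specific than the VRP prefix.
        if prefix.subnet_of(vrp_prefix):
            covered = True
            # Valid needs a matching origin AS and a prefix length
            # within the VRP's maxLength.
            if origin_asn == vrp_asn and prefix.prefixlen <= max_len:
                return "Valid"
    return "Invalid" if covered else "NotFound"

print(rov_state("192.0.2.0/24", 64496))    # matches the first VRP
print(rov_state("198.51.100.0/23", 64500)) # covered, but wrong origin AS
print(rov_state("203.0.113.0/24", 64496))  # no covering VRP at all
```

A router applying ROV performs exactly this classification (with the crypto already done by the relying party software) and then applies local policy to each state.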

Path Validation

Currently, RPKI only provides origin validation. While BGPsec path validation is a desirable characteristic and standardised in RFC 8205, real-world deployment may prove limited for the foreseeable future. However, RPKI origin validation functionality addresses a large portion of the problem surface.

For many networks, the most important prefixes can be found one AS hop away (coming from a specific peer, for example), and this is the case for large portions of the Internet from the perspective of a transit provider - entities which are ideally situated to act on RPKI data and accept only valid routes for redistribution.

Furthermore, the vast majority of route hijacks are unintentional, and are caused by ‘fat-fingering’, where an operator accidentally originates a prefix they are not the holder of.

Origin validation would mitigate most of these problems, offering immediate value of the system. While a malicious party could still take advantage of the lack of path validation, widespread RPKI implementation would make such instances easier to pinpoint and address.

With origin validation being deployed in more and more places, there are several efforts to build upon this to offer out-of-band path validation. Autonomous system provider authorisation (ASPA) currently has the most traction in the IETF, and is described in these drafts: draft-azimov-sidrops-aspa-profile and draft-azimov-sidrops-aspa-verification.

Implementation Models

RPKI is designed to allow every resource holder to generate and publish cryptographic material on their own systems. This is commonly referred to as delegated RPKI. To offer a turn-key solution, each RIR also offers a hosted RPKI system in their member portals. Both models have their own advantages, based on the specific requirements of the organisation using the system.

No matter what implementation model you choose, it is always a good idea to publish ROAs for your BGP announcements. Even when you are still evaluating how to deploy RPKI within your organisation, the benefits are immediate. Others can already filter based on what you publish, offering protection for you and other Internet users. For example, in case someone inadvertently announces your address space from their AS, it will be flagged as Invalid and dropped by everyone who has deployed route origin validation.

Important

Once you start authorising announcements with RPKI, it is imperative that ROAs are created for all route origins from the prefixes you hold, including more specifics announced by other business units or customers. In addition, RPKI should become a standard part of operations, ensuring staff is trained and ROAs are continually monitored and maintained.

Hosted RPKI

In 2008, when the five RIRs committed to start offering RPKI services, it was clear that there would be an early-adopter phase for a considerable amount of time. Given past experiences with IPv6 and DNSSEC uptake, the RIRs decided to offer a hosted RPKI solution to lower the barrier to entry into the technology. This way, organisations could easily get operational experience with the technology without having to manage a certificate authority themselves.

Hosted RPKI offers a fair balance between ease of use, maintenance and flexibility. It allows users to log into their RIR member portal and request a resource certificate, which is securely hosted on the servers of the RIR. All cryptographic operations, such as key rollovers, are automated. The certificates and ROAs are published in repositories hosted by the RIR. In short, there is nothing that the user has to manage, apart from creating and maintaining ROAs.

Example of the Hosted RPKI interface of the RIPE NCC

The functionality and user interfaces of the hosted RPKI implementations vary greatly across the five RIRs. Despite these variations, if you are an organisation with a single ASN and a handful of statically announced IP address blocks that are not delegated to customers, hosted RPKI is sufficient for most use cases.

Functional differences across RIRs

This section provides an overview of the functionality each RIR provides to help users manage RPKI, which is summarised in the table below.

First, the table indicates whether the RPKI system supports setting up delegated RPKI, so users can run their own certificate authority if they want. An RIR may also offer a publication server for users running delegated RPKI. For the hosted RPKI system, the table shows whether multiple users can be authorised to manage ROAs, and whether they can authenticate using two-factor authentication.

To make management of ROAs easier, some systems provide a list of all announcements with certified address space that are seen by BGP route collectors, such as the RIPE Routing Information Service (RIS). ROAs have an explicit start and end validity date, but in some cases it is possible to automatically renew the ROAs, so that they are valid for as long as there is an entry in the web interface. In addition, it may be possible to synchronise the management of "route" objects in the IRR with the ROAs that are created. An application programming interface (API) may be provided to make batch processing easier.

To improve retrieval of published RPKI data by relying party software, the RPKI Repository Delta Protocol (RRDP, RFC 8182) was developed. Support for this standard is listed as well.

Lastly, nonrepudiation refers to the inability of a party to dispute or deny having performed an action.

                                         APNIC    AFRINIC  ARIN     LACNIC   RIPE NCC
Support for delegated RPKI               Yes      Yes [1]  Yes      Yes [2]  Yes
Publication service for delegated RPKI   Yes [3]  No       No [4]   No       No
Multi-user support                       Yes      Yes [5]  Yes      No       Yes
Two-factor authentication                Yes      No       Yes [6]  No       Yes
BGP route collector suggestions          Yes      No       No       Yes      Yes
Auto-renew ROAs                          Yes      No       No       Yes [7]  Yes
Match "route" objects with ROAs          Yes      No       No       No       No
API                                      No       No       Yes      No       Yes
Publication via RRDP                     Yes      Yes      Yes      No       Yes
Nonrepudiation                           No       No       Yes      No       No

[1] Available in the test environment only.
[2] Available upon request.
[3] Available upon request.
[4] On the roadmap.
[5] Requires a client X.509 certificate to use RPKI.
[6] Requires a ROA Request Key Pair.
[7] Explicit opt-in feature.

A final differentiator is the publication interval of each RIR repository. Please keep in mind that once a ROA is created by a user in one of the hosted systems, it can take between several minutes up to multiple hours before the object is published and available for download, depending on the RIR system you use.

Delegated RPKI

Operators who prefer more control and tighter integration with their own systems can run their own child CA. This model is usually referred to as delegated RPKI.

In this model, the certificate authority that manages object signing is functionally separated from the publication of cryptographic material. This means that an organisation can run a CA and either publish themselves, or delegate this responsibility to a third party, such as a hosting company or cloud provider.

There may be various reasons for organisations to choose this model. For example, it may be useful for organisations that need to be able to delegate RPKI to their customers or different business units, so that they can run a CA on their own systems and manage ROAs themselves.

Alternatively, enterprises who manage large amounts of address space across various RIRs may not want to manage ROAs in up to five different web interfaces. Instead, they might prefer to be operationally independent from the RIR and manage everything from within one package that is tightly integrated with IP address management and provisioning systems.

Lastly, in the LACNIC and APNIC regions there are several National Internet Registries who provide registration services on a national level to their members and constituents. They also need to be operationally independent and run a certificate authority as a child of their RIR.

Using RPKI Data

Validation is a key part of any public key infrastructure. Signing only has value if the data is validated, and validation should always be done by the party relying on the data. If validation is outsourced to a third party, you can never be certain whether the data is complete or has been tampered with in any way.

Operators who want to deploy route origin validation in their BGP decision making process have to fetch and validate all of the published RPKI data. As with any PKI, you have to start with one or more entities you are prepared to trust. In the case of RPKI, these are the five Regional Internet Registries.

Connecting to the Trust Anchor

When you want to retrieve all RPKI data, you connect to the trust anchor that each RIR provides. The root certificate contains pointers to its children, which contain pointers to their children, and so on. These certificates, and other cryptographic material such as ROAs, can be published in the repository that the RIR provides, or in a repository operated by an organisation that either runs delegated RPKI themselves or hosts a repository as a service. For a person who wants to fetch and validate the data, formally known as a relying party, it is not a concern where the data is published. By simply connecting to the trust anchor, the chain of trust is followed automatically.

The RIR trust anchor is found through a static trust anchor locator (TAL), which is a very simple file that contains a URL to retrieve the trust anchor and a public key to verify its authenticity. The TAL exists because the contents of the self-signed root certificate are likely to change over time, for example due to resource transfers between RIRs. By using a TAL, the data in the trust anchor can change without the TAL itself needing to be redistributed.
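A TAL is simple enough to parse by hand: one or more URIs, a blank line, then the base64-encoded public key. The sketch below uses a made-up TAL; real TALs are published by the RIRs:

```python
import base64

def parse_tal(text):
    """Split a TAL into its URIs and its public key. URIs come first,
    one per line; a blank line separates them from the base64-encoded
    SubjectPublicKeyInfo."""
    uri_part, _, key_part = text.strip().partition("\n\n")
    uris = [line for line in uri_part.splitlines() if line.strip()]
    key = base64.b64decode("".join(key_part.split()))
    return uris, key

# A made-up TAL for illustration; the base64 blob is placeholder
# bytes, not a real key.
example_tal = """\
https://tal.example.net/ta/ta.cer

dGVzdC1wdWJsaWMta2V5LWJ5dGVz
"""
uris, key = parse_tal(example_tal)
print(uris)      # ['https://tal.example.net/ta/ta.cer']
print(len(key))  # length of the decoded key in bytes
```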

Fetching and Verifying

Various open source relying party software packages, also known as RPKI validators, are available in order to download, verify and process RPKI data. Please note that most RPKI validators come preinstalled with TALs for all RIRs except the one for ARIN, as ARIN requires users to first review and agree to its Relying Party Agreement.

When the validator runs, it starts retrieval at each of the RIR trust anchors and follows the chain of trust to fetch all published certificates and ROAs. Fetching data was originally done via rsync, but RIRs and software developers are gradually migrating to the RPKI Repository Delta Protocol (RRDP) for retrieval, standardised in RFC 8182. This protocol uses HTTPS, which makes development and implementation easier, and opens up possibilities for Content Delivery Networks to participate in serving RPKI data. Work to deprecate rsync altogether is ongoing in the IETF.

Once the data has been downloaded, the validator will verify the signatures on all objects and output the valid route origins as a list. Each object in this list contains an IP prefix, a maximum length, and an origin AS number. This object is referred to as validated ROA payload (VRP). The collection of VRPs is known as the validated cache.
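Many validators can export the validated cache in a simple tabular form; Routinator, for example, offers a CSV format with columns ASN, IP Prefix, Max Length and Trust Anchor. A sketch of turning such an export into VRP tuples, assuming that column layout:

```python
import csv, io

# A made-up CSV export in the column layout used by, e.g., Routinator.
export = """\
ASN,IP Prefix,Max Length,Trust Anchor
AS64496,192.0.2.0/24,24,example
AS64496,2001:db8::/32,48,example
"""

def read_vrps(text):
    """Parse a CSV export of the validated cache into
    (prefix, max_length, asn) tuples."""
    rows = csv.DictReader(io.StringIO(text))
    return [(r["IP Prefix"], int(r["Max Length"]), int(r["ASN"].lstrip("AS")))
            for r in rows]

print(read_vrps(export))
# [('192.0.2.0/24', 24, 64496), ('2001:db8::/32', 48, 64496)]
```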

Note

Objects that do not pass cryptographic verification are discarded. Any statements made about route origins are not considered, as if a ROA was never published. As a result, they will not affect any route announcements.

Please note that objects that do not pass cryptographic verification are sometimes referred to as 'invalid ROAs', but we like to avoid this term because validity is used elsewhere in a different context.

Fetching and verification of data should be performed periodically in order to process updates. Though the standards recommend retrieval at least once every 24 hours, current operational practice is to process updates every 30 to 60 minutes.

Validating Routes

As explained in the Route Origin Validation section, comparing VRPs to the route announcements seen in BGP determines their RPKI validity state. They can be:

Valid
The route announcement is covered by at least one VRP. The term covered means that the prefix in the route announcement is equal to, or more specific than, the prefix in the VRP.
Invalid
The prefix is announced from an unauthorised AS, or the announcement is more specific than is allowed by the maxLength set in a VRP that matches the prefix and AS.
NotFound
The prefix in this announcement is not covered, or is only partially covered, by a VRP.

Please carefully note the use of the word validity. Because RPKI revolves around signing and verifying cryptographic objects, it's easy to confuse this term with the validity state of a BGP announcement. As mentioned, it can occur that a ROA doesn't pass cryptographic verification, for example because it expired. As a result, it is discarded and will not affect any BGP announcement. In turn, only a validated ROA payload—sometimes referred to as 'valid ROA'—can make a BGP announcement Valid or Invalid.

A route announcement may be covered by several VRPs. For example, there may be a VRP for the aggregate announcement, which overlaps with a customer announcement of a more specific prefix from a different AS. A route announcement will be Valid as long as there is one covering VRP that authorises it.

Based on the three validity outcomes, operators can make an informed decision what to do with the BGP route announcements they see. As a general guideline, announcements with Valid origins should be preferred over those with NotFound or Invalid origins. Announcements with NotFound origins should be preferred over those with Invalid origins.

As origin validation is deployed incrementally, the amount of IP address space covered by a ROA will gradually increase over time. Therefore, announcements with the NotFound state should be accepted for the foreseeable future.

Important

For route origin validation to succeed in its objective, operators should ultimately drop all BGP announcements that are marked as Invalid. Before taking this step, organisations should first analyse the effects of doing so, to avoid unintended results. Initially accepting Invalid announcements, giving them a lower preference and tagging them with a BGP community, is a good first step to measure this.

Local Overrides

Sometimes there is an operational need to accept Invalid announcements temporarily. Local overrides allow you to manage your own exceptions to the validated cache. This ensures that you remain in full control of the VRPs used by your routers. For example, if an Invalid origin is the result of a misconfigured ROA, you may accept it until the operator in question has resolved the issue. A format named SLURM is available for this, which is standardised in RFC 8416.

SLURM provides several ways to achieve exceptions. First, you can add a VRP specifically for the affected route by specifying the correct ASN, prefix and maximum length. Secondly, you can filter out an existing VRP, thereby moving the route back to NotFound state. In general, the former is the safer way, as it deals better with changing ROAs. Lastly, it is possible to allow all routes from a certain ASN or prefix. It is advised to use overrides with care, as liberal usage may have unintended consequences.
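To illustrate the format, the sketch below assembles a minimal SLURM file following the RFC 8416 structure, with one locally added VRP and one filter; the prefixes and AS numbers are made up:

```python
import json

# A minimal SLURM file per RFC 8416: one locally added VRP and one
# filter that moves an existing VRP's routes back to NotFound.
slurm = {
    "slurmVersion": 1,
    "validationOutputFilters": {
        "prefixFilters": [
            {"prefix": "198.51.100.0/24",
             "comment": "Ignore VRPs for this prefix"}
        ],
        "bgpsecFilters": []
    },
    "locallyAddedAssertions": {
        "prefixAssertions": [
            {"asn": 64496, "prefix": "192.0.2.0/24",
             "maxPrefixLength": 24,
             "comment": "Accept this route until the ROA is fixed"}
        ],
        "bgpsecAssertions": []
    }
}
print(json.dumps(slurm, indent=2))
```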

Feeding Routers

The validated cache can be fed directly into RPKI-capable routers via the RPKI to Router Protocol (RPKI-RTR), described in RFC 8210. Many routers, including Cisco, Juniper, Nokia, as well as BIRD and OpenBGPD support processing the validated cache. Alternatively, most validators can export the cache in various useful formats for processing outside of the router, in order to set up filters.
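To give an idea of what travels over RPKI-RTR, this sketch packs and unpacks an IPv4 Prefix PDU following the wire layout in RFC 8210 (the prefix and ASN are made up):

```python
import struct
from ipaddress import IPv4Address

def pack_ipv4_prefix_pdu(flags, prefix_len, max_len, prefix, asn, version=1):
    """Build an RFC 8210 IPv4 Prefix PDU (type 4): an 8-byte header
    followed by flags, prefix length, max length, a zero byte, the
    IPv4 prefix and the origin AS number."""
    body = struct.pack("!BBBB4sI", flags, prefix_len, max_len, 0,
                       IPv4Address(prefix).packed, asn)
    header = struct.pack("!BBHI", version, 4, 0, 8 + len(body))
    return header + body

pdu = pack_ipv4_prefix_pdu(flags=1, prefix_len=24, max_len=24,
                           prefix="192.0.2.0", asn=64496)
print(len(pdu))  # 20 bytes on the wire
version, pdu_type, _, length = struct.unpack("!BBHI", pdu[:8])
print(pdu_type, length)  # 4 20
```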

RPKI publication, data retrieval, validation and processing

Note that your router does not perform any of the cryptographic validation; this is all handled by the relying party software. In addition, using RPKI causes minimal overhead for routers and has a negligible influence on convergence speed. Validation happens in parallel with route learning for new prefixes which are not yet in the cache. Those prefixes will be marked as Valid, Invalid, or NotFound as the information becomes available, after which the correct policy is applied.

Please keep in mind that the RPKI validator software you run in your network fetches cryptographic material from the outside world. To do this, it needs at least ports 873 and 443 open for rsync and HTTPS, respectively. In most cases, the processed data is fed to a router via RPKI-RTR over a clear channel, as it's running in your local network. Currently, only Cisco IOS-XR provides a practical means to secure transports for RPKI-RTR, using either SSH or TLS.

It is recommended to run multiple validator instances as a failover measure. The router will use the union of the RPKI data from all validators it is connected to. This means that (temporary) differences in the validated caches produced by the validators, for example due to differing fetching intervals, do not pose a problem.

In the Router Support section we will look at which routers support route origin validation, and how to get started with each.

Router Support

Several router vendors participated in the development of the RPKI standards in the IETF, ensuring the technology offers an end-to-end solution for route origin validation. The RPKI to Router protocol (RPKI-RTR) is standardised in RFC 6810 (v0) and RFC 8210 (v1). It is specifically designed to deliver validated prefix origin data to routers. This, as well as origin validation functionality, is currently available on various hardware platforms and in software solutions.

Hardware Solutions

Important

The versions listed here are the earliest ones where RPKI support became available. However, a newer version may be required to get recommended improvements and bug fixes. Please check your vendor documentation and knowledge base.

Juniper — Documentation
Junos version 12.2 and newer. Please read PR1461602 and PR1309944 before deploying.
Cisco — Documentation
IOS release 15.2 and newer, as well as Cisco IOS/XR since release 4.3.2.
Nokia — Documentation
SR OS 12.0.R4 and newer, running on the 7210 SAS, 7250 IXR, 7750 SR, 7950 XRS and the VSR.
Arista — Blog post
EOS 4.24.0F and newer.
MikroTik — RouterOS v7 BETA Forum thread - RPKI Forum thread
7.0beta7 and newer.

Software Solutions

Various software solutions support origin validation.

In some solutions, such as OpenBGPD, RPKI-RTR is not available but the same result can be achieved through a static configuration. The router will periodically fetch the validated cache and allow operators to set up route maps based on the result. Relying party software such as Routinator and rpki-client can export validated data in a format that OpenBGPD can parse.

RTRlib is a C library that implements the client side of the RPKI-RTR protocol, as well as route origin validation. RTRlib powers RPKI in BGP software routers such as FRR. In a nutshell, it maintains data from RPKI relying party software and allows you to verify whether an autonomous system (AS) is the legitimate origin AS, based on the fetched valid ROA data. BGP-SRx by NIST is a prototype that can perform similar functions.

Resources

This page provides an overview of projects that support RPKI. It includes statistics, measurement projects and presentations about operational experiences. Finally, there is an overview of all work in the Internet Engineering Task Force relevant to RPKI.

The Software Projects page provides an overview of all available tools for using RPKI.

Books

BGP RPKI: Instructions for use, by Flavio Luciani & Tiziano Tofoni (PDF)

Juniper Day One: Deploying BGP Routing Security, by Melchior Aelmans & Niels Raijer (PDF)

Insights and Statistics

There are several initiatives that measure the adoption and data quality of RPKI.

Operational Experiences

Using RPKI with IXP Manager
Documentation to set up Routinator, OctoRPKI and the RIPE NCC Validator with BIRD 2.x
Use Routinator with Cisco IOS-XR
Blog post by Fabien Vincent
Wikimedia RPKI Validation Implementation
Documentation by Arzhel Younsi describing RPKI validator and router configuration
Dropping RPKI invalid routes in a service provider network
Lightning talk by Nimrod Levy - AT&T, NANOG 75, February 2019
RPKI and BGP: our path to securing Internet Routing
Blog post by Jérôme Fleury & Louis Poinsignon - Cloudflare, September 2018
RPKI For Managers
Presentation by Niels Raijer - Fusix Networks, NLNOG Day 2018, September 2018
RPKI at IXP Route Servers
Presentation by Nick Hilliard - INEX, RIPE 78, May 2019

Examples of BGP Hijacks

How Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Today
Cloudflare Blog, 24 June 2019
BGP / DNS Hijacks Target Payment Systems
Oracle Internet Intelligence, 3 August 2018
Shutting down the BGP Hijack Factory
Oracle Dyn, 10 July 2018
Suspicious event hijacks Amazon traffic for 2 hours, steals cryptocurrency
Ars Technica, 24 April 2018
Popular Destinations rerouted to Russia
BGPmon, 12 December 2017
Insecure routing redirects YouTube to Pakistan
Ars Technica, 25 February 2008

IETF Documents

Most of the original work on RPKI standardisation for both origin and path validation was done in the Secure Inter-Domain Routing (sidr) working group. After the work was completed, the working group was concluded.

Since then, the SIDR Operations (sidrops) working group was formed. This working group develops guidelines for the operation of SIDR-aware networks, and provides operational guidance on how to deploy and operate SIDR technologies in existing and new networks.

All relevant drafts and standards can be found in the archives of these two working groups, with a few exceptions, such as draft-ietf-grow-rpki-as-cones.

Software Projects

This section provides an overview of all well known open source projects that support RPKI. It includes Relying Party software for validating RPKI data, Certificate Authority software to run RPKI on your own infrastructure and supporting tools that help deployment and integration.

Relying Party Software

Dragon Research Labs Validating Cache
Software to fetch and validate RPKI certificates and serve them to routers by Dragon Research Labs, written in the Python programming language.
Fort Validator
MIT-licensed Relying Party software by NIC.mx, written in C.
OctoRPKI
Cloudflare's Relying Party software, written in the Go programming language.
RIPE NCC RPKI Validator
Full-featured RPKI relying party software, written by the RIPE NCC in the Java programming language.
Routinator
RPKI relying party software written by NLnet Labs in the Rust programming language, designed to have a small footprint and great portability.
rpki-client(8)
rpki-client is written in C as part of the OpenBSD project, and has been ported to various Linux distributions. Designed to be secure and simple to use.
RPSTIR
Relying Party Security Technology for Internet Routing (RPSTIR) software, initially written by Raytheon BBN Technologies in the C programming language, now maintained by ZDNS.

Certificate Authority Software

Dragon Research Labs Certificate Authority
RPKI Certificate Authority software by Dragon Research Labs, written in the Python programming language.
Krill
RPKI Certificate Authority software by NLnet Labs, written in the Rust programming language.

Supporting Tools

BGP-SRx
SRx is an open source reference implementation and research platform by the National Institute of Standards and Technology (NIST). It is intended for investigating emerging BGP security extensions and supporting protocols such as RPKI Origin Validation and BGPSec Path Validation.
GoRTR
An open-source implementation of RPKI to Router protocol (RFC 6810) using the Go programming language. This project is maintained by Louis Poinsignon at Cloudflare.
pmacct

pmacct is a small set of multi-purpose passive network monitoring tools. It can account, classify, aggregate, replicate and export forwarding-plane data, i.e. IPv4 and IPv6 traffic; collect and correlate control-plane data via BGP and BMP; collect and correlate RPKI data; collect infrastructure data via Streaming Telemetry.

The pmacct toolset can perform RPKI Origin Validation and present the outcome as a property in the flow aggregation process. Because it separates out the various kinds of (invalid) BGP announcements, operators can get a good grasp of how their connectivity to the rest of the Internet would look after deploying an "invalid == reject" policy.

rpki-ov-checker
rpki-ov-checker is an open source utility to quickly analyse BGP RIB dumps and the potential impact of deploying "invalid is reject" routing policies.
RTRlib

The RTRlib implements the client-side of the RPKI-RTR protocol (RFC 6810, RFC 8210) and BGP Prefix Origin Validation (RFC 6811). This also enables the maintenance of router keys, which are required to deploy BGPSec.

RTRlib was originally developed by researchers from the Computer Systems & Telematics group at Freie Universität Berlin and researchers from the INET research group at Hamburg University of Applied Sciences, under the supervision of Matthias Wählisch and Thomas Schmidt. It is now a community project.

Krill

Krill is a free, open source Resource Public Key Infrastructure (RPKI) daemon, featuring a Certificate Authority (CA) and publication server, written by NLnet Labs.

You are welcome to ask questions or post comments and ideas on our RPKI mailing list. If you find a bug in Krill, feel free to create an issue on GitHub. Krill is distributed under the Mozilla Public License 2.0.

Welcome to Krill

Krill is intended for:

  • Organisations who hold address space from multiple Regional Internet Registries (RIRs). Using Krill, ROAs can be managed seamlessly for all resources within one system.
  • Organisations that need to be able to delegate RPKI to their customers or different business units, so that they can run their own CA and manage ROAs themselves.
  • Organisations who do not wish to rely on the web interface of the hosted systems that the RIRs offer, but require RPKI management that is integrated with their own systems using a common UI or API.

Using Krill, you can run your own RPKI Certificate Authority as a child of one or more parent CAs, usually a Regional Internet Registry (RIR) or National Internet Registry (NIR). With Krill you can run under multiple parent CAs seamlessly and transparently. This is especially convenient if your organisation holds address space in several RIR regions, as it can all be managed as a single pool.

Krill can also act as a parent for child CAs. This means you can delegate resources down to children of your own, such as business units, departments, members or customers, who, in turn, manage ROAs themselves.

Lastly, Krill features a publication server so you can either publish your certificate and ROAs with a third party, such as your NIR or RIR, or publish them yourself. Krill can be managed with a web user interface, from the command line and through an API.


You can choose to run Krill as a standalone application or run it together with Krill Manager, a tool that brings together all of the puzzle pieces needed to administer and run Delegated RPKI with Krill as a highly available, scalable service.

Krill Manager includes Docker, Gluster, NGINX, Rsyncd, as well as Prometheus and Fluentd outputs for monitoring and log analysis. The integrated setup wizard allows for seamless TLS configuration, optionally using Let's Encrypt, as well as automated updating of the application itself and all included components.

Krill with Krill Manager is available for free as a 1-Click App on the AWS Marketplace and the DigitalOcean Marketplace.

Before You Start

RPKI is a very modular system and so is Krill. Which parts you need and how you fit them together depends on your situation. Before you begin with installing Krill, there are some basic concepts you should understand and some decisions you need to make.

The Moving Parts

With Krill there are two fundamental pieces at play. The first part is the Certificate Authority (CA), which takes care of all the cryptographic operations involved in RPKI. Secondly, there is the publication server which makes your certificate and ROAs available to the world.

In almost all cases you will need to run the CA that Krill provides under a parent CA, usually your Regional Internet Registry (RIR) or National Internet Registry (NIR). The communication between the parent and the child CA is initiated through the exchange of two XML files, which you need to handle manually: a child request XML and a parent response XML. This involves generating the request file, providing it to your parent, and giving the response file back to your CA.

After this initial exchange has been completed, all subsequent requests and responses are handled by the parent and child CA themselves. This includes the entitlement request and response that determines which resources you receive on your certificate, the certificate request and response, as well as the revoke request and response.
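To give an idea of the shape of this exchange, the sketch below assembles a schematic child request following the RFC 8183 out-of-band setup format; the handle and the BPKI TA certificate content are placeholders:

```python
import xml.etree.ElementTree as ET

NS = "http://www.hactrn.net/uris/rpki/rpki-setup/"

# Schematic RFC 8183 child request. "Acme-Corp-Intl" is a placeholder
# handle and the text content stands in for a real base64-encoded
# BPKI TA certificate.
req = ET.Element(f"{{{NS}}}child_request",
                 {"version": "1", "child_handle": "Acme-Corp-Intl"})
ta = ET.SubElement(req, f"{{{NS}}}child_bpki_ta")
ta.text = "MIIB...base64-encoded-certificate...=="

print(ET.tostring(req, encoding="unicode"))
```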

Important

The initial XML file exchange is the only manual step required to get started with Delegated RPKI. All other requests and responses, as well as re-signing and renewing certificates and ROAs are automated. As long as Krill is running, it will automatically update the entitled resources on your certificate, as well as reissue certificates, ROAs and all other objects before they expire or become stale. Note that even if Krill does go down, you have 8 hours to bring it back up before data starts going stale.

Whether you also run the Krill publication server depends on whether you can, or want to, use one offered by a third party. For the general wellbeing of the RPKI ecosystem, we generally recommend publishing with your parent CA, if available. Setting this up is done in the same way as with the CA: exchanging a publisher request XML and a repository response XML.

Publishing With Your Parent

If you can use a publication server provided by your parent, the installation and configuration of Krill becomes extremely easy. After the installation has completed, you perform the XML exchange twice and you are done.

A repository hosted by the parent CA, in this case the RIR or NIR.

Krill is designed to run continuously, but there is no strict uptime requirement for the CA. If the CA is not available you just cannot create or update ROAs. This means you can bring Krill down to perform maintenance or migration, as long as you bring it back up within 8 hours to ensure your cryptographic objects are re-signed before they go stale.

Note

The scenario illustrated here also applies if you use an RPKI publication server offered by a third party, such as a cloud provider.

At this time, only the Asia-Pacific RIR APNIC and the Brazilian NIR NIC.br offer a publication server for their members. Several other RIRs have this functionality on their roadmap. This means that in most cases, you will have to publish yourself.

Publishing Yourself

When you publish your certificate and ROAs yourself, you are faced with running a public service with all related responsibilities, such as uptime and DDoS protection.

Krill can be configured with two types of publication server: embedded and stand-alone. Using the embedded publication server is simple, and doesn't require a publisher request and repository response exchange. However, it is practically impossible to change its configuration after it has been initialised.

For production environments where you may want to change strategies over time, we recommend running a separate Krill instance acting as a repository only. This also allows you to host a publication server for others, such as children of your own. These can be business units, branches or customers.

Running a publication server for yourself and your children

In this scenario you install Krill on a separate, highly available machine and simply don't set up any CA. In addition, you will need to run Rsyncd and a web server of your choice to publish your certificate and ROAs.

System Requirements

The system requirements for Krill are quite minimal. The cryptographic operations that need to be performed by the Certificate Authority have a negligible performance and memory impact on any modern-day machine.

When you publish ROAs yourself using the Krill publication server in combination with Rsyncd and a web server of your choice, you will see traffic from several hundred relying party software tools querying every few minutes. The total amount of traffic is also negligible in any modern-day situation.

Tip

For reference, NLnet Labs runs Krill in production and serves ROAs to the world using a 2 CPU / 2GB RAM / 60GB disk virtual machine. Although we only serve four ROAs and our repository size is 16KB, the situation would not be different if serving 100 ROAs.

Installation

Getting started with Krill is quite easy either building from Cargo or running with Docker. In case you intend to serve your RPKI certificate and ROAs to the world yourself or you want to offer this as a service to others, you will also need to have a public Rsyncd and HTTPS web server available.

Krill can also be set up as a highly available, scalable service using Krill Manager. A 1-Click App on the DigitalOcean Marketplace can set up Krill with all required components, along with integration points for monitoring and log analysis.

Quick Start

Assuming you have a newly installed Debian or Ubuntu machine, you will need to install the C toolchain, OpenSSL, curl and Rust. You can then install Krill using Cargo.

After the installation has completed, first create a data directory in a location of your choice. Next, generate a basic configuration file specifying a secret token and make sure to refer to the data directory you just created. Finally, start Krill pointing to your configuration file.

apt install build-essential libssl-dev openssl pkg-config curl
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
cargo install krill
mkdir ~/data
krillc config simple --token correct-horse-battery-staple --data ~/data/ > ~/data/krill.conf
krill --config ~/data/krill.conf
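
The token in the quick start above is a fixed example string; any hard-to-guess value works. As one illustrative way to generate a random token, assuming the openssl command-line tool is installed:

```shell
# Generate a 32-character hexadecimal secret token (illustrative; any
# hard-to-guess string is acceptable to Krill):
TOKEN="$(openssl rand -hex 16)"
echo "$TOKEN"
```

You could then pass "$TOKEN" to the --token argument of krillc config simple instead of a hand-picked string.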

Krill now exposes its user interface and API on https://localhost:3000 using a self-signed TLS certificate. You can go to this address in a web browser, accept the certificate warning and start configuring your RPKI Certificate Authority. A Prometheus endpoint is available at /metrics.

If you have an older version of Rust and Krill, you can update via:

rustup update
cargo install -f krill

Note

Using a fully qualified domain name, configuring a real TLS certificate such as Let's Encrypt, running on a different port and exposing Krill securely to other machines is all possible, but goes beyond the scope of this Quick Start.

Installing with Cargo

There are three things you need for Krill: Rust, a C toolchain and OpenSSL. You can install Krill on any Operating System where you can fulfil these requirements, but we will assume that you will run this on a UNIX-like OS.

Rust

The Rust compiler runs on, and compiles to, a great number of platforms, though not all of them are equally supported. The official Rust Platform Support page provides an overview of the various support levels.

While some operating system distributions include Rust as a system package, Krill relies on a relatively new version of Rust, currently 1.40 or newer. We therefore suggest using the canonical Rust installation via a tool called rustup.

To install rustup and Rust, simply do:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Alternatively, visit the official Rust website for other installation methods.

You can update your Rust installation later by running:

rustup update

For some platforms, rustup cannot provide binary releases to install directly. The Rust Platform Support page lists several platforms where official binary releases are not available, but Rust is still guaranteed to build. For these platforms, automated tests are not run so it’s not guaranteed to produce a working build, but they often work to quite a good degree.

One such example that is especially relevant for the routing community is OpenBSD. On this platform, patches are required to get Rust running correctly, but these are well maintained and offer the latest version of Rust quite quickly.

Rust can be installed on OpenBSD by running:

pkg_add rust

Another example where the standard installation method does not work is CentOS 6, where you will end up with a long list of error messages about missing assembler instructions. This is because the assembler shipped with CentOS 6 is too old.

You can get the necessary version by installing the Developer Toolset 6 from the Software Collections repository. On a virgin system, you can install Rust using these steps:

sudo yum install centos-release-scl
sudo yum install devtoolset-6
scl enable devtoolset-6 bash
curl https://sh.rustup.rs -sSf | sh
source $HOME/.cargo/env

C Toolchain

Some of the libraries Krill depends on require a C toolchain to be present. Your system probably has some easy way to install the minimum set of packages to build from C sources. For example, apt install build-essential will install everything you need on Debian/Ubuntu.

If you are unsure, try to run cc on a command line and if there’s a complaint about missing input files, you are probably good to go.

OpenSSL

Your system will likely have a package manager that will allow you to install OpenSSL in a few easy steps. For Krill, you will need libssl-dev, sometimes called openssl-dev. On Debian-like Linux distributions, this should be as simple as running:

apt install libssl-dev openssl pkg-config

Building

The easiest way to get Krill is to leave it to cargo by saying:

cargo install krill

If you want to update an installed version, run the same command with the -f (force) flag added to approve overwriting the installed version.

The command will build Krill and install it in the same directory that cargo itself lives in, likely $HOME/.cargo/bin. This means Krill will be in your path, too.

Getting Started

After the installation has completed, there are just two things you need to configure before you can start using Krill. First, you will need a data directory, which will store everything Krill needs to run. Secondly, you will need to create a basic configuration file, specifying a secret token and the location of your data directory.

Configuration

The first step is to choose where your data directory is going to live and to create it. In this example we are simply creating it in our home directory.

mkdir ~/data

Krill can generate a basic configuration file for you. We are going to specify the two required directives, a secret token and the path to the data directory, and then store it in this directory.

krillc config simple --token correct-horse-battery-staple --data ~/data/ > ~/data/krill.conf

You can find a full example configuration file with defaults in the GitHub repository.

Start and Stop the Daemon

There is currently no standard script to start and stop Krill. You could use the following example script to start Krill. Make sure to update the DATA_DIR variable to your real data directory, and make sure you saved your krill.conf file there.

#!/bin/bash
KRILL="krill"
DATA_DIR="/path/to/data"
KRILL_PID="$DATA_DIR/krill.pid"
CONF="$DATA_DIR/krill.conf"
SCRIPT_OUT="$DATA_DIR/krill.log"

nohup $KRILL -c $CONF >$SCRIPT_OUT 2>&1 &
echo $! > $KRILL_PID

You can use the following sample script to stop Krill:

#!/bin/bash
DATA_DIR="/path/to/data"
KRILL_PID="$DATA_DIR/krill.pid"

kill `cat $KRILL_PID`
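
Alternatively, on systems using systemd you could wrap the start command in a service unit so that starting, stopping and restarting on failure are handled for you. A minimal sketch, assuming Krill runs as a dedicated krill user with its data directory in /home/krill/data (all paths here are examples, not Krill defaults):

```ini
[Unit]
Description=Krill RPKI Certificate Authority
After=network-online.target

[Service]
User=krill
ExecStart=/home/krill/.cargo/bin/krill --config /home/krill/data/krill.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```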

Proxy and HTTPS

Krill uses HTTPS and refuses to do plain HTTP. By default Krill will generate a 2048 bit RSA key and self-signed certificate in /ssl in the data directory when it is first started. Replacing the self-signed certificate with a TLS certificate issued by a CA works, but has not been tested extensively.

For a robust solution, we recommend that you use a proxy server such as NGINX or Apache if you intend to make Krill available to the Internet. Also, setting up a widely accepted TLS certificate is well documented for these servers.
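
As a sketch of such a setup, the following NGINX server block proxies a public hostname to a local Krill instance. The hostname, certificate paths and port are examples only; note that Krill itself only speaks HTTPS, so the upstream address uses https://, and you may need additional directives to make NGINX accept Krill's self-signed upstream certificate:

```nginx
server {
    listen 443 ssl;
    server_name krill.example.net;

    # Certificate issued by a widely accepted CA, e.g. Let's Encrypt
    ssl_certificate     /etc/letsencrypt/live/krill.example.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/krill.example.net/privkey.pem;

    location / {
        # Krill listens locally with a self-signed certificate
        proxy_pass https://127.0.0.1:3000/;
    }
}
```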

Warning

We recommend that you do not make Krill available publicly. You can use the default where Krill will expose its CLI, API and UI on https://localhost:3000/ only. You do not need to have Krill available externally, unless you intend to provide certificates or a publication server to third parties.

Backup and Restore

To back-up Krill:

  • Stop Krill
  • Backup your data directory
  • Start Krill

We recommend that you stop Krill because there can be a race condition where Krill was just in the middle of saving its state after performing a background operation. We will most likely add a process in future that will allow you to back up Krill in a consistent state while it is running.

To restore Krill just put back your data directory and make sure that you refer to it in the configuration file that you use for your Krill instance.
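
The steps above can be sketched as a small script. It is illustrative only and assumes the data directory and pid file locations used in the earlier start script; adjust the paths to your own setup:

```shell
#!/bin/bash
# Illustrative backup procedure; paths are examples.
DATA_DIR="${DATA_DIR:-$HOME/data}"
BACKUP="/tmp/krill-backup-$(date +%F).tar.gz"
mkdir -p "$DATA_DIR"   # no-op when the directory already exists

# 1. Stop Krill (uncomment if you use the pid-file start script above):
# kill "$(cat "$DATA_DIR/krill.pid")"

# 2. Archive the entire data directory:
tar -czf "$BACKUP" -C "$(dirname "$DATA_DIR")" "$(basename "$DATA_DIR")"
echo "Backup written to $BACKUP"

# 3. Start Krill again:
# krill --config "$DATA_DIR/krill.conf" &
```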

Used Disk Space

Krill stores all of its data under the DATA_DIR. For users who will operate a CA under an RIR / NIR parent the following sub-directories are relevant:

Dir       Purpose
ssl       Contains the HTTPS key and cert used by Krill
cas       Contains the history of your CA in raw JSON format
rfc6492   Contains all messages exchanged with your parent
rfc8181   Contains all messages exchanged with your repository

The space used by the latter two directories can grow significantly over time. We think it may be a good idea to have an audit trail of all these exchanges. However, if space is a concern you can safely archive or delete the contents of these two directories.
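
If you want to keep the audit trail while reclaiming the space, you could archive the contents before deleting them. A sketch, assuming the directory layout above (paths are examples):

```shell
# Illustrative: archive and then clear the RFC 6492/8181 message logs.
DATA_DIR="${DATA_DIR:-$HOME/data}"
ARCHIVE="/tmp/krill-messages-$(date +%F).tar.gz"
mkdir -p "$DATA_DIR/rfc6492" "$DATA_DIR/rfc8181"   # ensure they exist for this example

tar -czf "$ARCHIVE" -C "$DATA_DIR" rfc6492 rfc8181
rm -rf "$DATA_DIR/rfc6492/"* "$DATA_DIR/rfc8181/"*
echo "Messages archived to $ARCHIVE"
```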

In a future version of Krill we will most likely only store the exchanges where either an error was returned, or your Krill instance asked for a change to be made at the parent side: like requesting a new certificate, or publishing an object. The periodic exchanges where your CA asks the parent for its entitlements will then no longer be logged.

Upgrade

It is our goal that future versions of Krill will continue to work with the configuration files and saved data from version 0.4.1 and above. However, please read the changelog to be sure.

The normal process would be to:

  • Install the new version of Krill
  • Stop the running Krill instance
  • Start Krill again, using the new binary, and the same configuration

Note that after a restart you may see a message like this in your log file:

2020-01-28 13:41:03 [WARN] [krill::commons::eventsourcing::store] Could not
deserialize snapshot json '/root/krill/data/pubd/0/snapshot.json', got error:
'missing field `stats` at line 296 column 1'. Will fall back to events.

You can safely ignore this message. Krill is telling you that the definition of a struct has changed and therefore it cannot use the snapshot.json file that it normally uses for efficiency. Instead, it needs to build up the current state by explicitly re-applying all the events that happened to your CA and/or embedded publication server.

RIR and NIR Interactions

In almost all cases, you will interact with one or more Regional Internet Registries (RIRs) or National Internet Registries (NIRs) when setting up delegated RPKI.

The fundamental principle is the same with each of them: the RIR or NIR needs to establish who you are, which resources you are entitled to and where your RPKI certificate and ROAs will be published.

Your identity, permissions and entitlements are all managed by the registry and exposed via their respective member portals. The rest of the information is exchanged in two XML files. You will need to provide a child request generated by Krill, and in return you will receive a parent response that you need to give back to Krill.

Hosted Publication Server

Your RIR or NIR may also provide an RPKI publication server. You are free to publish your certificate and ROAs anywhere you like, so a third party may provide an RPKI publication server as well. To use this service you will need to do an additional exchange. You need to generate and provide a publisher request file and in return you will receive a repository response.

Using an RPKI publication server relieves you of the responsibility to keep a public rsync and web server running at all times to make your certificate and ROAs available to the world.

Of the five RIRs, only APNIC currently offers RPKI publication as a service for their members, upon request. Most other RIRs have it on their roadmap. NIC.br, the Brazilian NIR, provides an RPKI repository server for their members as well. This means that in most cases you will have to publish your certificate and ROAs yourself, as described in the Publication Server section.

Member Portals

If you hold resources in one or more RIR or NIR regions, you will need to have access to the respective member portals and the permission to configure delegated RPKI.

Most RIRs have a few considerations to keep in mind.

AFRINIC

AFRINIC have delegated RPKI available in their test environment, but it’s not operational yet. Work to bring it to production is planned for later in 2020.

APNIC

If you are already using the hosted RPKI service provided by APNIC and you would like to switch to delegated RPKI, there is currently no option for this with MyAPNIC. Please open a ticket with the APNIC help desk to resolve this.

Please note that APNIC offers RPKI publication as a service upon request. It is highly recommended to make use of this, as it relieves you of the need to run a highly available repository yourself.

LACNIC

Although LACNIC offers delegated RPKI, it is not possible to configure this in their member portal yet. While the procedures are still being defined, please open a ticket via hostmaster@lacnic.net to get started.

RIPE NCC

If you are a RIPE NCC member who does not yet have RPKI configured, you will be presented with a choice of whether you would like to use hosted or non-hosted RPKI.

RIPE NCC RPKI setup screen

If you want to set up delegated RPKI with Krill, you will have to choose non-hosted. If you are already using the hosted service and you would like to switch, then there is currently no option for that in the RIPE NCC portal.

Make a note of the ROAs you created and then send an email to rpki@ripe.net requesting your hosted CA to be deleted, making sure to mention your registration id. After deletion, you will land on the setup screen from where you can choose non-hosted RPKI.

Publication Server

Important

It is highly recommended to use an RPKI publication server provided by your parent CA, if available. This relieves you of the responsibility to keep a public rsync and web server available at all times.

It could also be considered good for the RPKI ecosystem as a whole to have at least some centralisation of publication. If Relying Party software around the world has to periodically fetch data from possibly hundreds of RPKI repositories with various degrees of availability and latency, this would not result in speedy updates.

Please keep in mind that it is crucial to ensure that your certificate and ROAs remain available at all times. When the Krill CA is unavailable you will not be able to make updates to your ROAs, and Krill will not (re-)publish objects at a repository. This does not need to affect relying parties as long as the repository remains available, and Krill comes back before objects start to become stale. You have 8 hours with the current built-in defaults.

That being said, only Brazilian NIC.br members can configure a hosted publication server without manual intervention at this time. APNIC members can get access to their publication server upon request. Everyone else will need to host their own repository for now.

When running your own repository, it is advisable to use multiple rsync servers and multiple web servers. You can also use a Content Delivery Network (CDN) in front of your web servers. Please note that the official name for the HTTPS based transport is the RPKI Repository Delta Protocol (RRDP), so you will see this abbreviation used throughout the documentation.

Using the Embedded Repository

Warning

Please note it is practically impossible to change the configuration of Krill's embedded repository after it has been initialised. For production environments where you may want to change strategies over time, we recommend running a separate Krill instance acting as a repository only, as described in Remote Publishing.

Krill can use an embedded repository to publish RPKI objects. It can generate the required configuration for you using the krillc config subcommand. This ensures the syntax is correct, as for example trailing slashes are required. Use this command with your own values, using domain names pointing to servers that are publicly reachable.

krillc config repo \
   --token correct-horse-battery-staple \
   --data ~/data/ \
   --rrdp "https://rpki.example.net/rrdp/" \
   --rsync "rsync://rpki.example.net/repo/" > krill.conf

Krill will write the repository files under its data directory:

$DATA_DIR/repo/rsync/current/    Contains the files for Rsync
$DATA_DIR/repo/rrdp/             Contains the files for HTTPS (RRDP)

You can share the contents of these directories with your repository servers in various ways. It is possible to have a redundant shared file system where the Krill CA can write, and your repository servers can read. Alternatively, you can synchronise the contents of these directories in another way, such as Rsyncing them over every couple of minutes.

If you are using a shared file system, please note that the rsync /current directory cannot be the mount point. Krill tries to write the entire repository to a new folder under $DATA_DIR/repo/rsync and then renames it. This is done to minimise issues with files being updated while relying party software is fetching data.

The next step is to configure your rsync daemons to expose a 'module' for your files. Make sure that the Rsync URI including the 'module' matches the rsync_base in your Krill configuration file. Basic configuration can then be as simple as:

$ cat /etc/rsyncd.conf
uid = nobody
gid = nogroup
max connections = 50
socket options = SO_KEEPALIVE

[repo]
path = /var/lib/krill/data/repo/rsync/current/
comment = RPKI repository
read only = yes

For RRDP you will need to set up a web server of your choice and ensure that it has a valid TLS certificate. Next, you can make the files found under, or copied from, $DATA_DIR/repo/rrdp available here. Make sure that the public URI to the RRDP base directory matches the value of rrdp_service_uri in your krill.conf file.
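
As an illustration, serving the RRDP files with NGINX could look like the following sketch. The hostname, certificate paths and data directory are examples, and the public location must match the rrdp_service_uri value in krill.conf:

```nginx
server {
    listen 443 ssl;
    server_name rpki.example.net;

    ssl_certificate     /etc/letsencrypt/live/rpki.example.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rpki.example.net/privkey.pem;

    # Serve the notification, snapshot and delta files written by Krill
    location /rrdp/ {
        alias /var/lib/krill/data/repo/rrdp/;
    }
}
```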

If desired, you can also use a CDN in front of this server to further reduce your load and uptime requirements. If you do, make sure that the public URI matches the directive in krill.conf, because this will be used in your RPKI certificate.

If you find that there is an issue with your repository or, for example, you want to change its domain name, you can set up a new Krill instance with an embedded repository. When you are satisfied that the new one is correct, you can migrate your CA to it by adding them as a publisher under the new repository server, and then updating your CA to use the new repository.

Krill will then make sure that objects are moved properly, and that a new certificate is requested from your parent(s) to match the new location. This scenario would also apply when your RIR starts providing a repository service. You can update your CA to start publishing there instead.

Remote Publishing

By default Krill CAs use an embedded repository server for the publication of RPKI objects. However, you may want to allow delegated CAs (e.g. your customers, or other business units) to publish at a repository server that you manage. In addition, running the repository separately from the CA offers flexibility in changing publication strategy and redundancy.

The repository functions for Krill can be accessed through the publishers subcommand in the CLI:

$ krillc publishers --help
krillc-publishers
Manage publishers in Krill.

USAGE:
    krillc publishers [SUBCOMMAND]

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
  add         Add a publisher.
  help        Prints this message or the help of the given subcommand(s)
  list        List all publishers.
  remove      Remove a publisher.
  response    Show RFC8183 Repository Response for a publisher.
  show        Show details for a publisher.

List Publishers

You can list all publishers in Krill using the command below. Note that the list of publishers will include any embedded Krill CAs as well as any possible remote (RFC 8181 compliant) publishers.

$ krillc publishers list
Publishers: ta, ca

API Call: GET /v1/publishers

Show Publisher Details

You can show the full details of a publisher, including the files that they published.

$ krillc publishers show --publisher ta
handle: ta
id: 5ce21ed116540a22c562f45dae8f2eb5a3c13cee
base uri: rsync://localhost/repo/ta/

API Call: GET /v1/publishers/ta

The default text output just shows the handle of the publisher, the hash of its identity certificate key, and the rsync URI jail under which the publisher is allowed to publish objects.

The JSON response includes a lot more information, including the files which were published and the full ID certificate used by the publisher. Note that even embedded Krill CAs will have such a certificate, even if they access the repository server locally.

Remove a Publisher

You can remove publishers. If you do, then all of its content will be removed as well and the publisher will no longer be allowed to publish.

Note that you can do this without the publisher's knowledge or consent, even for embedded Krill CAs. With great power comes great responsibility. That said, you can always add a publisher again (including embedded publishers), and once a publisher can connect to your repository again, it should be able to figure out that it needs to re-publish all its content (Krill CAs will always check for this).

$ krillc publishers remove --publisher ca

API Call: DELETE /v1/publishers/ca

Add a Publisher

In order to add a publisher you have to get its RFC 8183 Publisher Request XML, and hand it over to the server:

$ krillc publishers add --publisher ca --rfc8183 ./data/ca-pub-req.xml

API Call: POST /v1/publishers

Show Repository Response

In order to show the RFC 8183 Repository Response XML for a specific publisher use the following:

$ krillc publishers response --publisher ca
<repository_response xmlns="http://www.hactrn.net/uris/rpki/rpki-setup/" version="1" publisher_handle="ca" service_uri="https://localhost:3000/rfc8181/ca" sia_base="rsync://localhost/repo/ca/" rrdp_notification_uri="https://localhost:3000/rrdp/notification.xml">
  <repository_bpki_ta> repository server id certificate base64 </repository_bpki_ta>
</repository_response>

API Call: GET /v1/publishers/ca/response.json

Publish at a Remote Repository

Controlling your CA's repository server is done through the repo subcommand of the CLI:

$ krillc repo --help
krillc-repo
Manage the repository for your CA.

USAGE:
    krillc repo [SUBCOMMAND]

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
  help       Prints this message or the help of the given subcommand(s)
  request    Show RFC8183 Publisher Request.
  show       Show current repo config.
  state      Show current repo state.
  update     Change which repository this CA uses.

Show repository for CA

You can use the following to show which repository server your CA is using, as well as what it has published at that location. Krill will issue an actual list query to the repository and return the response, or an error in case of issues.

$ krillc repo show
Repository Details:
  type:        embedded
  base_uri:    rsync://localhost/repo/ca/
  rpki_notify: https://localhost:3000/rrdp/notification.xml

Currently published:
  c6e130761ccf212aea4038e95f6ffb3029afac3494ffe5fde6eb5062c2fa37bd rsync://localhost/repo/ca/0/281E18225EE6DCEB8E98C0A7FB596242BFE64B13.mft
  557c1a3b7a324a03444c33fd010c1a17540ed482faccab3ffe5d0ec4b7963fc8 rsync://localhost/repo/ca/0/31302e302e3132382e302f32302d3234203d3e20313233.roa
  444a962cb193b30dd1919b283ec934a50ec9ed562aa280a2bd3d7a174b6e1336 rsync://localhost/repo/ca/0/281E18225EE6DCEB8E98C0A7FB596242BFE64B13.crl
  874048a2df6ff1e63a14e69de489e8a78880a341db1072bab7a54a3a5174057d rsync://localhost/repo/ca/0/31302e302e302e302f32302d3234203d3e20313233.roa

API Call: GET /v1/cas/ca/repo

Show Publisher Request

You can use the following to show the RFC 8183 Publisher Request XML for a CA. You will need to hand this over to your remote repository so that they can add your CA.

$ krillc repo request
<publisher_request xmlns="http://www.hactrn.net/uris/rpki/rpki-setup/" version="1" publisher_handle="ca">
  <publisher_bpki_ta>your CA ID cert DER in base64</publisher_bpki_ta>
</publisher_request>

API Call: GET /v1/cas/ca/repo/request.json

Change Repository for a CA

You can change which repository server is used by your CA. If you have multiple CAs you will have to repeat this for each of them. Also, note that by default your CAs will assume that they use the embedded publication server. So, in order to use a remote server you will have to use this process to change over.

Changing repositories is actually more complicated than one might think, but fortunately it's all automated. When you ask Krill to change, the following steps will be executed:

  • check that the new repository can be reached, and that this CA is authorised
  • regenerate all objects using the URI jail given by the new repository
  • publish all objects in the new repository
  • request new certificates from (all) parent CA(s) including the new URI
  • once received, make a best effort to clean up the old repository

In short, Krill performs a sanity check that the new repository can be used, and then tries to migrate there in a way that will not lead to invalidating any currently signed objects.

To start a migration you can use the following.

$ krillc repo update rfc8183 [file]

API Call: POST /v1/cas/ca/repo

If no file is specified the CLI will try to read the XML from STDIN.

Note that if you were using an embedded repository and you instruct your CA to connect to that same embedded repository, but set up as a remote, you will find that you have no more published objects. This is because Krill tries to clean up the old repository, and we assume that you would not try to use an embedded server over the RFC 8181 protocol.

But suppose that you did; you would then see this:

$ krillc repo show
Repository Details:
  type:        remote
  service uri: https://localhost:3000/rfc8181/ca
  base_uri:    rsync://localhost/repo/ca/
  rpki_notify: https://localhost:3000/rrdp/notification.xml

Currently published:
  <nothing>

Fortunately, this can be fixed.

First, you may want to migrate back to using the embedded repository without the RFC 8181 protocol overhead:

$ krillc repo update embedded

This does not solve your problem just yet: it will re-publish everything under the new embedded repository, but it will then clean up the 'old' repository, which happens to be the same one in this corner case.

The solution is 're-syncing' as described in the following section.

Re-syncing CAs with Repository

If your CAs have somehow become out of sync with their repository, they will automatically re-sync whenever there is an update, such as the renewal of the manifest and CRL (every 8 hours), or whenever ROAs are changed. However, you can force all Krill CAs to re-sync using the following.

$ krillc bulk sync

API Call: POST /v1/bulk/cas/sync/repo

Using the UI

For most use cases, the user interface is the easiest way to use Krill. Creating a CA, connecting to a Regional or National Internet Registry parent and publication server, as well as managing ROAs can all be done from the UI.

You can access the user interface in a browser on the server running Krill at https://localhost:3000. By default, Krill generates a self-signed TLS certificate, so you will have to accept the security warning that your browser will give you.

If you want to access the UI from a different computer, you can either set up a reverse proxy on your server running Krill, or set up local port forwarding with SSH from your computer, for example:

ssh -L 3000:localhost:3000 user@krillserver.example.net

Initial Setup

The first screen will ask you for the secret token you configured for Krill.

Password screen

Enter your secret token to access Krill

Next, you will see the Welcome screen where you can create your Certificate Authority. It will be used to configure delegated RPKI with one or multiple parent CAs, usually your Regional or National Internet Registry.

The handle you select is not published in the RPKI but used as identification to parent and child CAs you interact with. Please choose a handle that helps others recognise your organisation. Once set, the handle cannot be changed.

Welcome screen

Enter a handle for your Certificate Authority

Repository Setup

Before Krill can request a certificate from a parent CA, it will need to know where it will publish. You can add a parent before configuring a repository for your CA, but in that case Krill will postpone requesting a certificate until you have done so.

If you are using a third party repository, copy the publisher request XML and supply it to your publication server provider.

Publisher request

Copy the publisher request XML or download the file

Your publication server provider will give you a repository response XML, which you need to paste or upload.

Repository response

Paste or upload the repository response XML

Alternatively, if you configured the embedded Publication Server using the CLI, this page will simply show your repository details.

Embedded repository details

Parent Setup

After successfully configuring the repository, the next step is to configure your parent CA. Copy the child request XML and provide it to your parent, i.e. your RIR or NIR.

Child request

Copy the child request XML or download the file

Your RIR or NIR will provide you with a parent response XML, which you need to paste or upload.

Parent response

Paste or upload the parent response XML

ROA Configuration

After successfully setting up the parent exchange, you are now running delegated RPKI. You can start creating ROAs for the resources you see in the pane on the right.

Resource overview

The ROAs screen displaying all resources and configured ROAs

Click the Add ROA button, then fill in the authorised ASN and one of your prefixes in the form. The maximum prefix length will automatically match the prefix you entered to follow best operational practices, but you can change it as desired.

ROA creation

Adding a new ROA

Using the CLI

Every function of Krill can be controlled from the command line interface (CLI). The Krill CLI is a wrapper around the Krill API which is based on JSON over HTTPS.

It's convenient to set up the following environment variables so that you can easily use the Krill CLI on the same machine where Krill is running:

export KRILL_CLI_TOKEN="correct-horse-battery-staple"
export KRILL_CLI_SERVER="https://localhost:3000/"
export KRILL_CLI_MY_CA="Acme-Corp-Intl"

For your CA name, you can use alphanumeric characters, dashes and underscores, i.e. a-zA-Z0-9_.
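
As a quick illustration of that character set, the following portable shell snippet accepts only names made of those characters (the example name is, of course, just an example):

```shell
# Illustrative check: succeeds for names made of a-zA-Z0-9, '-' and '_'.
name="Acme-Corp-Intl"
case "$name" in
  *[!A-Za-z0-9_-]*) echo "invalid CA name" ;;
  *)                echo "valid CA name"   ;;
esac
```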

Note that you can use the CLI from another machine, but then you will need to set up a proxy server in front of Krill and make sure that it has a real TLS certificate.

To use the CLI you need to invoke krillc followed by one or more subcommands, and some arguments. Help is built-in:

krillc help [subcommand..]

The following arguments are expected by most subcommands:

krillc subcommand [subcommand..]
       [-s, --server https://<yourhost:port>/ ] \
       [-t, --token <token> ]
       [-f, --format text|json|none]   (default text)
       [-c, --ca <ca_name>]            (for ca specific subcommands)

You can set default values for these arguments in environment variables, to make it a bit easier to use the CLI:

Variable           Argument
KRILL_CLI_SERVER   -s, --server
KRILL_CLI_TOKEN    -t, --token
KRILL_CLI_FORMAT   -f, --format (defaults to text)
KRILL_CLI_MY_CA    -c, --ca

For example:

export KRILL_CLI_TOKEN="correct-horse-battery-staple"
export KRILL_CLI_MY_CA="Acme-Corp-Intl"

If you use the command line arguments, they will override whatever value you set in the environment. Krill will give you a friendly error message if you neither set the applicable environment variable nor include the command line argument.

Setting up Your Certificate Authority

After Krill is running and configured, you can set up the Certificate Authority. This involves the following steps:

  • Create your CA
  • Retrieve your CA's 'child request'
  • Retrieve your CA's 'publisher request'
  • Upload the 'publisher request' to your publisher
  • Save the 'repository response'
  • Update the repository for your CA using the 'repository response'
  • Upload the 'child request' to your parent
  • Save the 'parent response'
  • Add the parent using the 'parent response'
# Add CA
krillc add

# retrieve your CA's 'child request'
krillc parents request > child_request.xml

# retrieve your CA's 'publisher request'
krillc repo request > publisher_request.xml

Next, upload the publisher request XML file to your publication server provider and save the repository response XML file.

# update the repository for your CA using the 'repository response'
krillc repo update remote --rfc8183 repository_response.xml

Next, upload the child request XML file to your parent CA (e.g. via their portal) and save the parent response XML file.

# add the parent using the 'parent response'
krillc parents add remote --parent myparent --rfc8183 ./parent-response.xml

Note that you can use any local name for --parent. This is the name that Krill will show to you. Similarly, Krill will use your local CA name which you set in the KRILL_CLI_MY_CA ENV variable. However, the parent response includes the names (or handles, as they are called in the RFC) by which it refers to itself and your CA. Krill will make sure that it uses these names in the communication with the parent. There is no need for these names to be the same.

Managing Route Origin Authorisations

Krill lets users create Route Origin Authorisations (ROAs), the signed objects that state which Autonomous System (AS) is authorised to originate one of your prefixes, along with the maximum prefix length it may have.

You can update ROAs through the command line by submitting a plain text file with the following format:

# Some comment
  # Indented comment

 A: 192.0.2.0/24 => 64496
 A: 2001:db8::/32-48 => 64496   # Add prefix with max length
 R: 198.51.100.0/24 => 64496    # Remove existing authorisation

You can then add this to your CA:

$ krillc roas update --delta ./roas.txt

If you followed the steps above then you would get an error, because there is no authorisation for 198.51.100.0/24 => 64496. If you remove the line and submit again, then you should see no response, and no error.

You can list ROAs in the following way:

$ krillc roas list
192.0.2.0/24 => 64496
2001:db8::/32-48 => 64496
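The plain-text delta format shown above can be parsed with a short sketch like the following (an illustration of the format only, not Krill's own parser):

```python
# Sketch: parse the plain-text ROA delta format. Lines starting with '#'
# (possibly indented) are comments; 'A:' adds an authorisation and 'R:'
# removes one.
def parse_roa_delta(text):
    added, removed = [], []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        op, auth = line.split(":", 1)  # maxsplit=1 keeps IPv6 colons intact
        (added if op.strip() == "A" else removed).append(auth.strip())
    return added, removed

delta = """
# Some comment
  # Indented comment

 A: 192.0.2.0/24 => 64496
 A: 2001:db8::/32-48 => 64496   # Add prefix with max length
 R: 198.51.100.0/24 => 64496    # Remove existing authorisation
"""

added, removed = parse_roa_delta(delta)
print(added)    # ['192.0.2.0/24 => 64496', '2001:db8::/32-48 => 64496']
print(removed)  # ['198.51.100.0/24 => 64496']
```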

Displaying History

You can show the history of all the things that happened to your CA using the history command.

$ krillc history
id: ca version: 0 details: Initialised with ID key hash: 69ee7ef4dae43cd1dcd9ee65b8a1c7fd0c2499c3
id: ca version: 1 details: added RFC6492 parent 'ripencc'
id: ca version: 2 details: added resource class with name '0'
id: ca version: 3 details: requested certificate for key (hash) 'D5EE85EF047010771547FE3ACFE4316503B8EC6F' under resource class '0'
id: ca version: 4 details: activating pending key 'D5EE85EF047010771547FE3ACFE4316503B8EC6F' under resource class '0'
id: ca version: 5 details: added route authorization: '192.0.2.0/24 => 64496'
id: ca version: 6 details: added route authorization: '2001:db8::/32 => 64496'

Using the API

The Krill API is a primarily JSON based REST-like HTTPS API with bearer token based authentication.
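As a minimal sketch of what such an authenticated request looks like, the following constructs (but does not send) the call behind krillc list; the server address and token are placeholders taken from the examples in this document:

```python
# Sketch: build an authenticated request to the Krill API, mirroring
# what "krillc list" does under the hood. Server and token below are
# placeholder values; nothing is sent over the network here.
import urllib.request

server = "https://localhost:3000"
token = "correct-horse-battery-staple"

request = urllib.request.Request(
    url=f"{server}/api/v1/cas",  # the "GET /v1/cas" endpoint
    headers={"Authorization": f"Bearer {token}"},
    method="GET",
)

print(request.full_url)                      # https://localhost:3000/api/v1/cas
print(request.get_header("Authorization"))   # Bearer correct-horse-battery-staple
```

Sending the request with urllib.request.urlopen(request) would return the JSON list of CAs, provided the server's TLS certificate is trusted.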

Documentation

View the human readable interactive version of the Krill v0.6.2 API specification, made possible by the wonderful ReDoc tool.

Specification

The raw OpenAPI 3.0.2 specification description of the API is available in the Krill source repository (v0.6.2 link).

Generating Client Code

The OpenAPI Generator can generate client code for using the Krill API with your favourite language. Below is an example of how to do this using Docker and Python 3.

First create a simple test Krill client program. Save the following as /tmp/krill_test.py, replacing <YOUR XXX> values with the correct access token and domain name for your Krill server.

# Import the OpenAPI generated Krill client library
import krill_api
from krill_api import *

# Create a configuration for the client library telling it how to connect to
# the Krill server
krill_api_config = krill_api.Configuration()
krill_api_config.access_token = '<YOUR KRILL API TOKEN>'
krill_api_config.host = "https://{}/api/v1".format('<YOUR KRILL FQDN>')
krill_api_config.verify_ssl = True
krill_api_config.assert_hostname = False
krill_api_config.cert_file = None

# Create a Krill API client
krill_api_client = krill_api.ApiClient(krill_api_config)

# Get the client helper for the Certificate Authority set of Krill API endpoints
krill_ca_api = CertificateAuthoritiesApi(krill_api_client)

# Query Krill for the list of configured CAs
print(krill_ca_api.list_cas())

Now generate the Krill client library:

GENDIR=/tmp/gen
VENVDIR=/tmp/venv

mkdir -p $GENDIR

wget -O $GENDIR/openapi.yaml https://raw.githubusercontent.com/NLnetLabs/krill/v0.6.2/doc/openapi.yaml

docker run --rm -v $GENDIR:/local \
    openapitools/openapi-generator-cli generate \
    -i /local/openapi.yaml \
    -g python \
    -o /local/out \
    --additional-properties=packageName=krill_api

python3 -m venv $VENVDIR
source $VENVDIR/bin/activate
pip3 install wheel
pip3 install $GENDIR/out/

And then run the sample client program:

python3 /tmp/krill_test.py

To learn more about using your OpenAPI generated client library consult the README.md file that is created in the generated client library directory, e.g. $GENDIR/out/README.md in the example above.

Warning

Future improvements to the Krill OpenAPI specification may necessitate that you re-generate your client library and possibly also alter your client program to match any changed class and function names.

Monitoring

The HTTPS server in Krill provides endpoints for monitoring the application. A data format specifically for Prometheus is available, and the dedicated port 9657 has been reserved for it.

On the /metrics path, Krill will expose several data points:

  • A timestamp when the daemon was started
  • The number of CAs Krill has configured
  • The number of children for each CA
  • The number of ROAs for each CA
  • Timestamps when publishers were last updated
  • The number of objects in the repository for each publisher
  • The size of the repository, in bytes
  • The RRDP serial number

This is an example of the output of the /metrics endpoint:

# HELP krill_server_start timestamp of last krill server start
# TYPE krill_server_start gauge
krill_server_start 1582189609

# HELP krill_repo_publisher number of publishers in repository
# TYPE krill_repo_publisher gauge
krill_repo_publisher 1

# HELP krill_repo_rrdp_last_update timestamp of last update by any publisher
# TYPE krill_repo_rrdp_last_update gauge
krill_repo_rrdp_last_update 1582700400

# HELP krill_repo_rrdp_serial RRDP serial
# TYPE krill_repo_rrdp_serial counter
krill_repo_rrdp_serial 128

# HELP krill_repo_objects number of objects in repository for publisher
# TYPE krill_repo_objects gauge
krill_repo_objects{publisher="acme-corp-intl"} 6

# HELP krill_repo_size size of objects in bytes in repository for publisher
# TYPE krill_repo_size gauge
krill_repo_size{publisher="acme-corp-intl"} 16468

# HELP krill_repo_last_update timestamp of last update for publisher
# TYPE krill_repo_last_update gauge
krill_repo_last_update{publisher="acme-corp-intl"} 1582700400

# HELP krill_cas number of cas in krill
# TYPE krill_cas gauge
krill_cas 1

# HELP krill_cas_roas number of roas for CA
# TYPE krill_cas_roas gauge
krill_cas_roas{ca="acme-corp-intl"} 4

# HELP krill_cas_children number of children for CA
# TYPE krill_cas_children gauge
krill_cas_children{ca="acme-corp-intl"} 0
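As a quick illustration, the gauge and counter samples in an exposition like the one above can be extracted with a few lines of Python (a sketch for ad-hoc inspection, not a replacement for an official Prometheus client):

```python
# Sketch: pull metric samples out of a Prometheus text exposition.
# Comment lines (# HELP / # TYPE) are skipped; each remaining line is
# "name{labels} value", so splitting on the last space separates the two.
def parse_metrics(text):
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, value = line.rsplit(" ", 1)
        samples[name] = float(value)
    return samples

exposition = """\
# HELP krill_cas number of cas in krill
# TYPE krill_cas gauge
krill_cas 1

# HELP krill_cas_roas number of roas for CA
# TYPE krill_cas_roas gauge
krill_cas_roas{ca="acme-corp-intl"} 4
"""

samples = parse_metrics(exposition)
print(samples["krill_cas"])                            # 1.0
print(samples['krill_cas_roas{ca="acme-corp-intl"}'])  # 4.0
```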

The monitoring service has several additional endpoints on the following paths:

/stats/info: Returns the Krill version and timestamp when the daemon was started in a concise format
/stats/cas: Returns the number of ROAs and children each CA has
/stats/repo: Returns details on the internal repository, if configured

Running a Test Environment

If you want to get operational experience with Krill before configuring a production parent, you can run with an embedded TA to which you can give any address space you want. You can generate your own Trust Anchor for it, which can be added to your Relying Party software in order to validate the objects you have published locally.

Setting up the Configuration

For testing we will assume that you will run your own Krill repository inside a single Krill instance, using 'localhost' in the repository URIs. You have to set the following environment variable to assure Krill that you are running a test environment, or it will refuse the use of 'localhost':

$ export KRILL_TEST="true"

For convenience you may wish to set the following variables, so that you don't have to repeat command line arguments for these:

$ export KRILL_CLI_SERVER="https://localhost:3000/"
$ export KRILL_CLI_TOKEN="correct-horse-battery-staple"
$ export KRILL_CLI_MY_CA="Acme-Corp-Intl"

Note

Replace "correct-horse-battery-staple" with a token of your own choosing! If you don't, the UI will kindly remind you that "You should not get your passwords from https://xkcd.com/936/".

You can now generate a krill configuration file using the following command:

$ krillc config repo \
   --token $KRILL_CLI_TOKEN \
   --rrdp https://localhost:3000/rrdp/ \
   --rsync rsync://localhost/repo/ > /path/to/krill.conf

Use an embedded TA

To run Krill in test mode you can set "use_ta" to "true" in your krill.conf, or use an environment variable:

$ KRILL_USE_TA="true"

Add a CA

When adding a CA you need to choose a handle, essentially just a name. The term "handle" comes from RFC 8183 and is used in the communication protocol between parent and child CAs, as well as CAs and publication servers. For the handle you can use alphanumeric characters, dashes or underscores.

The handle you select is not published in the RPKI but used as identification to parent and child CAs you interact with. You should choose a handle that helps others recognise your organisation. Once set, the handle cannot be changed as it would interfere with the communication between parent and child CAs, as well as the publication repository.
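The allowed character set described above can be expressed as a simple check (a sketch; Krill performs its own validation server-side):

```python
# Sketch: validate a CA handle against the character set described in
# the text above: alphanumerics, dashes and underscores only.
import re

HANDLE_RE = re.compile(r"^[A-Za-z0-9_-]+$")

def valid_handle(handle):
    return bool(HANDLE_RE.match(handle))

print(valid_handle("Acme-Corp-Intl"))  # True
print(valid_handle("acme corp"))       # False: spaces are not allowed
```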

$ krillc add

API Call: POST /v1/cas

When a CA has been added, it is registered to publish locally in the Krill instance where it exists, but other than that it has no configuration yet. In order to do anything useful with a CA you will first have to add at least one parent to it, followed by some Route Origin Authorisations and/or child CAs.

List CAs

You can list all handles (names) for the existing CAs in Krill using the following command:

$ krillc list
ta
ca

API Call: GET /v1/cas

Let CA publish in the embedded Repository

Step 1: Generate RFC8183 Publisher Request

First you will need to get the RFC 8183 Publisher Request XML for your CA.

$ krillc repo request > publisher_request.xml

Step 2: Add your CA to the Repository

You now need to authorise your CA in your repository and generate an RFC 8183 Repository Response XML file:

$ krillc publishers add \
   --publisher $KRILL_CLI_MY_CA \
   --rfc8183 publisher_request.xml > repository_response.xml

Step 3: Configure your CA to use the Repository

Now configure your CA using the response:

$ krillc repo update remote --rfc8183 repository_response.xml

Show CA Details

You can use the following to show the details of the embedded TA, if you enabled it:

$ krillc show --ca ta
Name:     ta

Base uri: rsync://localhost/repo/ta/
RRDP uri: https://localhost:3000/rrdp/notification.xml

ID cert PEM:
-----BEGIN CERTIFICATE-----
MIIDPDCCAiSgAwIBAgIBATANBgkqhkiG9w0BAQsFADAzMTEwLwYDVQQDEyg2MUE1
QkIzNDBBMDM4M0U4NDdENjI0MThDQUMwOTIxQUJCN0M4NTU1MCAXDTE5MTIwMzEx
..
Yge7BolTITNX8XBzDdTr91TgUKEtDEGlNh6sYOONJW9rQxZIsDIdTeBoPSQKCdXk
D13RgMxQSjycIfAeIBo9yg==
-----END CERTIFICATE-----

Hash: 85041ff6bf2d8edf4e02c716e8be9f4dd49e2cc8aa578213556072bab75575ee

Total resources:
    ASNs: AS0-AS4294967295
    IPv4: 0.0.0.0/0
    IPv6: ::/0

Parents:
Handle: ta Kind: This CA is a TA

Resource Class: 0
Parent: ta
State: active    Resources:
    ASNs: AS0-AS4294967295
    IPv4: 0.0.0.0/0
    IPv6: ::/0
Current objects:
  1529A3C0E47EA38C1101DECDD6330E932E3AB98F.crl
  1529A3C0E47EA38C1101DECDD6330E932E3AB98F.mft

Children:
<none>

API Call: GET /v1/cas/ta

Add a Child to the Embedded TA

If you are using an embedded TA for testing then you will first need to add your new CA "ca" to it. Krill supports two communication modes:

  1. embedded, meaning that both the parent and child CA live in the same Krill instance
  2. rfc6492, meaning that the official RFC protocol is used

Here we will document the second option. It's slightly less efficient, but it's the same as what you would need to delegate from your CA to remote CAs.

Step 1: RFC 8183 request XML

First you will need to get the RFC 8183 request XML from your child.

$ krillc parents request > myid.xml

API Call: GET /v1/cas/ca/child_request.json

Step 2: Add child "ca" to "ta"

To add a child, you will need to:
  1. Choose a unique local name (handle) that the parent will use for the child
  2. Choose initial resources (asn, ipv4, ipv6)
  3. Have an RFC 8183 request

And in this case we also need to override the ENV variable and indicate that we want to add this child to the CA "ta". The following command will add the child, and capture the RFC 8183 parent response XML from the "ta":

$ krillc children add remote --ca ta \
                      --child ca \
                      --ipv4 "10.0.0.0/8" --ipv6 "2001:DB8::/32" \
                      --rfc8183 myid.xml > parent-res.xml

API Call: See: POST /v1/cas/ta/children

The default response is the RFC 8183 parent response XML file. Or, if you set --format json you will get the plain API response.

If you need the response again, you can ask the "ta" again:

$ krillc children response --ca "ta" --child "ca"

API Call: GET /v1/cas/ta/children/ca/contact

Step 3: Add parent "ta" to "ca"

You can now add "ta" as a parent to your CA "ca". You need to choose a locally unique handle that your CA will use to refer to this parent; in the example below we use "ripencc". In case you have multiple parents you may want to refer to them by names that make sense in your context.

Note that whichever handle you choose, your CA will use the handles that the parent response included for itself and for your CA in its communication with this parent. In other words, you may want to inspect the response and use the same handle for the parent (the parent_handle attribute), and do not be surprised or alarmed if the parent refers to your CA (the child_handle attribute) by some seemingly random name. Some parents do this to ensure uniqueness.
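If you want to inspect those handles programmatically, the parent response is XML and can be read with the standard library. The attribute names below follow RFC 8183; the sample document is a hypothetical, heavily abbreviated response (a real one also carries the parent's identity certificate):

```python
# Sketch: read the parent_handle and child_handle attributes from an
# RFC 8183 parent response. The XML below is a hypothetical, abbreviated
# example for illustration only.
import xml.etree.ElementTree as ET

parent_response = """\
<parent_response xmlns="http://www.hactrn.net/uris/rpki/rpki-setup/"
                 version="1"
                 service_uri="https://localhost:3000/rfc6492/ta"
                 parent_handle="ta"
                 child_handle="some-random-name">
</parent_response>
"""

root = ET.fromstring(parent_response)
print(root.get("parent_handle"))  # the name the parent uses for itself
print(root.get("child_handle"))   # the name the parent uses for your CA
```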

$ krillc parents add remote --parent ripencc --rfc8183 ./parent-res.xml

API Call: POST /v1/cas/ca/parents

Now you should see that your "child" is certified:

$ krillc show
Name:     ca

Base uri: rsync://localhost/repo/ca/
RRDP uri: https://localhost:3000/rrdp/notification.xml

ID cert PEM:
-----BEGIN CERTIFICATE-----
MIIDPDCCAiSgAwIBAgIBATANBgkqhkiG9w0BAQsFADAzMTEwLwYDVQQDEyg2NTA1
RDA4RUI5MTk5NkJFNkFERDNGOEYyQzUzQTUxNTg4RTY4NDJCMCAXDTE5MTIwMzEy
..
zKtG5esZ+g48ihf6jBgDyyONXEICowcjrxlY5fnjHhL0jsTmLuITgYuRoGIK2KzQ
+qLiXg2G+8s8u/1PW7PVYg==
-----END CERTIFICATE-----

Hash: 9f1376b2e1c8052c1b5d94467f8708935224c518effbe7a1c0e967578fb2215e

Total resources:
    ASNs:
    IPv4: 10.0.0.0/8
    IPv6: 2001:db8::/32

Parents:
Handle: ripencc Kind: RFC 6492 Parent

Resource Class: 0
Parent: ripencc
State: active    Resources:
    ASNs:
    IPv4: 10.0.0.0/8
    IPv6: 2001:db8::/32
Current objects:
  553A7C2E751CA0B04B49CB72E30EB5684F861987.crl
  553A7C2E751CA0B04B49CB72E30EB5684F861987.mft

Children:
<none>

API Call: GET /v1/cas/ca

ROAs

Krill lets users create Route Origin Authorisations (ROAs), the signed objects that state which Autonomous System (AS) is authorised to originate one of your prefixes, along with the maximum prefix length it may have.

You can update ROAs through the command line by submitting a plain text file with the following format:

# Some comment
  # Indented comment

A: 10.0.0.0/24 => 64496
A: 10.1.0.0/16-20 => 64496   # Add prefix with max length
R: 10.0.3.0/24 => 64496      # Remove existing authorisation

You can then add this to your CA:

$ krillc roas update --delta ./roas.txt

API Call: POST /v1/cas/ca/routes

If you followed the steps above then you would get an error, because there is no authorisation for 10.0.3.0/24 => 64496. If you remove the line and submit again, then you should see no response, and no error.

You can list Route Origin Authorisations as well:

$ krillc roas list
10.0.0.0/24 => 64496
10.1.0.0/16-20 => 64496

API Call: GET /v1/cas/ca/routes

History

You can show the history of all the things that happened to your CA:

$ krillc history
id: ca version: 0 details: Initialised with cert (hash): 973e3e967ecb2a2a409a785d1faf61cf73a66044, base_uri: rsync://localhost:3000/repo/ca/, rpki notify: https://localhost:3000/rrdp/notification.xml
id: ca version: 1 details: added RFC6492 parent 'ripencc'
id: ca version: 2 details: added resource class with name '0'
id: ca version: 3 details: requested certificate for key (hash) '48C9F037625B3F5A6B6B9D4137DB438F8C1B1783' under resource class '0'
id: ca version: 4 details: activating pending key '48C9F037625B3F5A6B6B9D4137DB438F8C1B1783' under resource class '0'
id: ca version: 5 details: added route authorization: '10.1.0.0/16-20 => 64496'
id: ca version: 6 details: added route authorization: '10.0.0.0/24 => 64496'
id: ca version: 7 details: updated ROAs under resource class '0' added: 10.1.0.0/16-20 => 64496 10.0.0.0/24 => 64496
id: ca version: 8 details: updated objects under resource class '0' key: '48C9F037625B3F5A6B6B9D4137DB438F8C1B1783' added: 31302e312e302e302f31362d3230203d3e203634343936.roa 31302e302e302e302f3234203d3e203634343936.roa  updated: 48C9F037625B3F5A6B6B9D4137DB438F8C1B1783.crl 48C9F037625B3F5A6B6B9D4137DB438F8C1B1783.mft  withdrawn:
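As an aside, the long .roa object names in that last history entry are hex encodings of the route authorisation strings they represent, which you can verify with a line of Python:

```python
# The .roa object names in the history output are hex-encoded versions
# of the route authorisation strings they contain.
name = "31302e302e302e302f3234203d3e203634343936"
print(bytes.fromhex(name).decode("ascii"))  # 10.0.0.0/24 => 64496
```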

Running with Docker

This page explains the additional features, and the differences from running Krill with Cargo, that you need to be aware of when running Krill with Docker.

Get Docker

If you do not already have Docker installed, follow the platform specific installation instructions via the links in the Docker official "Supported platforms" documentation.

Fetching and Running Krill

The docker run command will automatically fetch the Krill image the first time you use it, and so there is no installation step in the traditional sense. The docker run command can take many arguments and can be a bit overwhelming at first.

The command below runs Krill in the background and shows how to configure a few extra things like log level and volume mounts (more on this below).

$ docker run -d --name krill -p 127.0.0.1:3000:3000 \
  -e KRILL_LOG_LEVEL=debug \
  -e KRILL_FQDN=rpki.example.net \
  -e KRILL_AUTH_TOKEN=correct-horse-battery-staple \
  -e TZ=Europe/Amsterdam \
  -v krill_data:/var/krill/data/ \
  -v /tmp/krill_rsync/:/var/krill/data/repo/rsync/ \
  nlnetlabs/krill:v0.6.2

Note

The Docker container by default uses UTC time. If you need to use a different time zone you can set this using the TZ environment variable as shown in the example above.

Admin Token

By default Docker Krill secures itself with an automatically generated admin token. You will need to obtain this token from the Docker logs in order to manage Krill via the API or the krillc CLI tool.

$ docker logs krill 2>&1 | fgrep token
docker-krill: Securing Krill daemon with token <SOME_TOKEN>

You can pre-configure the token via the auth_token Krill config file setting, or if you don't want to provide a config file you can also use the Docker environment variable KRILL_AUTH_TOKEN as shown above.

Running the Krill CLI

Local

Using a Bash alias that embeds your admin token, you can easily interact with the locally running Krill daemon via its command-line interface (CLI):

$ alias krillc='docker exec \
  -e KRILL_CLI_SERVER=https://127.0.0.1:3000/ \
  -e KRILL_CLI_TOKEN=correct-horse-battery-staple \
  nlnetlabs/krill:v0.6.2 krillc'

$ krillc list -f json
{
  "cas": []
}

Remote

The Docker image can also be used to run krillc to manage remote Krill servers. Using a shell alias simplifies this considerably:

$ alias krillc='docker run --rm \
  -e KRILL_CLI_SERVER=https://rpki.example.net/ \
  -e KRILL_CLI_TOKEN=correct-horse-battery-staple \
  -v /tmp/ka:/tmp/ka nlnetlabs/krill:v0.6.2 krillc'

$ krillc list -f json
{
   "cas": []
}

Note: The -v volume mount is optional, but without it you will not be able to pass files to krillc which some subcommands require, e.g.

$ krillc roas update --ca my_ca --delta /tmp/delta.in

Service and Certificate URIs

The Krill service_uri and rsync_base config file settings can be configured via the Docker environment variable KRILL_FQDN as shown in the example above. Providing KRILL_FQDN will set both service_uri and rsync_base.

Data

Krill writes state and data files to a data directory. In Docker Krill this directory is hidden inside the Docker container, and its contents are lost when the container is destroyed.

Persistence

To protect the data you can write it to a persistent Docker volume which is preserved even if the Krill Docker container is destroyed. The following fragment from the example above shows how to configure this:

docker run -v krill_data:/var/krill/data/

Access

Some of the data files written by Krill to its data directory are intended to be shared with external clients via the rsync protocol. To make this possible with Docker Krill you can either:

Mount the data in a host directory:

docker run -v /tmp/krill_rsync:/var/krill/data/repo/rsync

Or share it via a named volume:

docker run -v krill_rsync:/var/krill/data/repo/rsync

Logging

Krill logs to a file by default. Docker Krill however logs by default to stderr so that you can see the output using the docker logs command.

At the default warn log level Krill doesn't output anything unless there is something to warn about. Docker Krill however comes with some additional logging which appears with the prefix docker-krill:. On startup you will see something like the following in the logs:

docker-krill: Securing Krill daemon with token ba473bac-021c-4fc9-9946-6ec109befec3
docker-krill: Configuring /var/krill/data/krill.conf ..
docker-krill: Dumping /var/krill/data/krill.conf config file
...
docker-krill: End of dump

Environment Variables

The Krill Docker image supports the following Docker environment variables which map to the following krill.conf settings:

Environment variable   Equivalent Krill config setting
KRILL_AUTH_TOKEN       auth_token
KRILL_FQDN             service_uri and rsync_base
KRILL_LOG_LEVEL        log_level
KRILL_USE_TA           use_ta

To set these environment variables use -e when invoking docker, e.g.:

docker run -e KRILL_FQDN=https://rpki.example.net/

Using a Config File

Via a volume mount you can replace the Docker Krill config file with your own and take complete control:

docker run -v /tmp/krill.conf:/var/krill/data/krill.conf

This will instruct Docker to replace the default config file used by Docker Krill with the file /tmp/krill.conf on your host computer.

Krill Manager

Krill Manager is a tool for running Krill as a highly available scalable service. It brings together all of the puzzle pieces needed to administer and run Delegated RPKI with Krill.

Krill Manager includes Docker, Gluster, NGINX, Rsyncd, as well as Prometheus and Fluentd outputs for monitoring and log analysis. The integrated setup wizard allows for seamless TLS configuration, optionally using Let's Encrypt, as well as automated updating of the application itself and all included components.

Krill with Krill Manager is available for free as a 1-Click App on the AWS Marketplace and the DigitalOcean Marketplace.

You can watch an introduction to the capabilities of Krill Manager in the video below. It walks through setting up Krill and all additional components using the 1-Click App, configuring the Certificate Authority to run Delegated RPKI under a Regional Internet Registry, and creating ROAs, all in just 6 minutes real-time.

Updates

News

To stay informed about new releases of Krill and Krill Manager and to learn about documentation updates and other announcements, subscribe to the NLnet Labs RPKI mailing list. Detailed release notes are available on GitHub.

Feedback

If you would like to suggest an improvement or to report a problem, please create an issue in the Krill or Krill Manager GitHub issue tracker as appropriate.

Upgrading

Krill Manager is able to upgrade itself and the components that it manages such as Krill, NGINX and Rsync.

To upgrade Krill Manager do one of the following:

  • Run the krillmanager upgrade CLI command.
  • Answer YES when notified by the krillmanager init command that a new version is available.
  • Let krillmanager init upgrade automatically if a new version is available and Initial Setup is not yet complete.

Prerequisites

Required

To use Krill Manager you will need the following:

  • A DigitalOcean or AWS account (to create the virtual machine that will run Krill Manager, Krill, et al).
  • The ability to create DNS subdomain A records (to point one or more domains at the new VM).

Optional

  • Your own TLS certificate and key file(s) (in PEM format) for the domain(s) that you wish to use with Krill. When using your own TLS certificate files you will need to upload them to the VM before performing the initial setup, e.g.:

    scp /local/path/to/certificate.pem username@<IP address>:/tmp/
    

    Tip

    It is not necessary to use your own TLS certificates, as Krill Manager can obtain a Let's Encrypt TLS certificate for each configured domain on your behalf. Krill Manager will ensure that Let's Encrypt certificates are renewed before they expire.

  • Connection details and credentials for an AWS S3 compatible service to which host and application logs can be uploaded periodically.

Initial Setup

First, log into the virtual machine you created using SSH. Note that for AWS Marketplace EC2 instances you have to use the username ubuntu, and for DigitalOcean Marketplace droplets the username root.

Next, run the krillmanager init command to launch the interactive setup wizard.

Important

The wizard covers the most common cases. It does NOT yet support clustered or advanced log streaming setups or overriding the Krill or NGINX or RsyncD configuration. Such setups are possible but not yet via the wizard.

Exiting the Wizard

Press CTRL-C at any time to abort the wizard. You can use the krillmanager init command later to run the wizard again.

Automatic Upgrade

On first launch Krill Manager will automatically upgrade itself to a newer version if available:

# krillmanager init
A new version is available (v0.2.0 => v0.2.2).
Automatically upgrading to newer version v0.2.2..
Checking for newer version
Fetching newer version (v0.2.2)
Running post-upgrade actions
Upgrading host files
Upgrading dependencies
[###############-------------------------] 38% Upgrading dependencies

Note

This is an upgrade of Krill Manager, which might not include an upgrade of Krill; it may just be an upgrade of the Manager itself and/or of one or more of the other components such as NGINX or Rsync. The version number shown is the version number of Krill Manager, not of Krill.

Manual Upgrade

If you have previously completed the wizard and later run krillmanager init again, if a new version is available you will be offered the choice to upgrade:

# krillmanager init
A new version is available (v0.2.0 => v0.2.2).

> Would you like to upgrade? [YES/NO]: YES

The default action is to upgrade, but you can continue without upgrading.

Step by Step

Once any available upgrade is complete you will be presented with the welcome page of the wizard. Below is guidance for each of the possible wizard pages to help you along the way:

Wizard: Welcome

The first page of the wizard welcomes you, summarises the steps ahead, and offers a few useful tips.

KRILL SETUP WIZARD: Welcome            [next: Restore from backup (Optional)]
-----------------------------------------------------------------------------

    'lx0KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK0xc';kKKKK0Od;.cO0Oko:.
 'lkOOxoxXWXkolcclOWMMMMMMMWOddONMMMMMMMMMMMMMMWKxxXMMMMMWNOdkNMMWNKx;
dKkc'  'kKo.      .:OWMMMMMO.  .kMMMMMMMMMMMMMMMMMWMMMMMMMMMMWWMMMMMMNd,:;.
,,    .dNl          .:kNMMMXd::dXMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMWkkXO;
      .O0'             ;xXWMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMNx.
      '00'               .cxKWMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMWO,
      .ok'    .;codxkxdl:'. .,ldk0KXNNWMMWWWWNWWNWWNXXXXXXXXXXNWMMMMMMMMMMMMMMO'
        .. .ck0kxdxxxxddkOOd:'.  ....'oNMOdXOoKOo00;...........,:lkNMMMMMMMMMMWd
           ckxdxkkkkkkkkxdddkkkkxollco0KxcdXolNxcKk.               'oXMMMMMMMMMK
            .:kkxxxkkxxxxkOkxdxxxdxxxddolkKdc0KcdNl                  'OWMMMMMMMN
              .;kkddddxOOxdxxdxkxxxxxxkkkdodKOcdXd.                   .OMMMMMMMN
                .'dkkOkdddxkkkkxxdxxxxxxxkOkooOKl.                     cWMMMMMMK
                  ...'cxOOkxdxxxdxxxxxxddxddO0d'                       :NMMMMMWd
                        ..;ldxkkkkkkxxkkkkko;.                         oWMMMMMO'
                               ...........                            ;KMMMMWO'
                                                                    .cKMMMMNx.
                                                                  .cONMMMWO;
                                                   .;;;;::;:::cldOXWMMWXx;.
                                                   .:OWMWMMMMMMMMWNKOo:.
                                                     .d0KKKKKK0Oko:'

Welcome to the Krill setup wizard. You will be guided through the following
topics:

  - Restore from backup (Optional)
  - Publication mode
  - CA name
  - Domains
  - Authentication
  - Logging
  - HTTPS certificates

Tips:
- For more information see: https://marketplace.digitalocean.com/apps/krill
- To redo this process later, invoke the following command: krillmanager init
- To abort this process without making any changes: press CTRL-C

Press any key to continue:

Wizard: Restore from Backup

A Krill Manager backup is a complete copy of all of the application settings and data files for applications managed by Krill Manager, e.g. including Krill settings, NGINX certificate and key files, a record of your wizard choices, the RRDP and Rsync content files, application logs, etc.

If you previously made such a backup using the krillmanager backup command on this or another Krill Manager instance (and in the latter case transported the backup archive to this instance) you can choose at this point to set up this Krill Manager instance using the data in the backup archive, instead of answering wizard questions to setup Krill Manager from scratch.

With or Without Existing Backups

In most cases the restore from backup wizard page will look like this:

KRILL SETUP WIZARD: Restore from backup (Optional)   [next: Publication mode]
-----------------------------------------------------------------------------

Would you like to:
- Setup Krill by answering questions, OR
- Restore from a previously made 'krillmanager backup' archive?

> Type: INITIALIZE, or FILE to supply an archive file path?:

If however you have previously used the krillmanager backup command on this Krill Manager instance, you will see the backup archives the command created listed on this page:

KRILL SETUP WIZARD: Restore from backup (Optional)   [next: Publication mode]
-----------------------------------------------------------------------------

Detected backup archives:
     1  backup-20200406-125743.tgz
     2  backup-20200406-125759.tgz

Would you like to:
- Setup Krill by answering questions, OR
- Restore from a previously made 'krillmanager backup' archive?

> Type: INITIALIZE, or FILE to supply an archive file path?:

Krill Manager has no way of knowing about backup archives that you might have copied to the filesystem from an external storage location or from another server, so it also provides an option for you to specify the path to the backup archive manually.

Initialise or Restore

Enter one of:

  • INITIALIZE to skip this page and continue setting up Krill Manager from scratch.
  • N where N is the number of a listed backup that you would like to restore.
  • FILE to specify the path to a backup archive to restore, e.g. that you have previously copied to the system with the scp command.

Note

If you choose a backup to restore from, the wizard will complete the restore process and then exit with a status summary of the running Krill Manager instance.

Wizard: Publication Mode

Tip

See also the corresponding Krill documentation.

Krill can operate in one of two modes:
  • Publish with a 3rd party
  • Publish in its own repository

The publication mode wizard page lets you choose which of these modes Krill will be configured for:

KRILL SETUP WIZARD: Publication mode                          [next: CA name]
-----------------------------------------------------------------------------

Would you like to publish with a 3rd party (e.g. NIC.br), or run your own
publication server?

Info:
- If you answer YES you will need to use your repository provider's portal
  to obtain details needed to permit Krill to publish to the repository.

- If instead you answer NO, Krill's embedded repository functionality will
  be enabled and the repository data will be served for you from HTTP/RRDP
  and Rsync servers that will run on this Droplet.

Warning: We advise against running your own repository as each additional
repository server is one more server that Relying Parties must contact, and
thus you should ensure that your repository server remains available and
reachable at all times.

> Would you like to publish with a 3rd party? [YES/NO]:

3rd Party Mode

In 3rd party mode your Krill instance acts only as a Certificate Authority; you will need to configure it to publish any ROAs to an external repository, e.g. that of a parent such as NIC.br.

Answering YES will enable 3rd party mode.

Self-Publishing Mode

In self-publishing mode the ROA objects created by your Krill instance will be made available by Krill Manager to Internet clients via the RRDP and Rsync protocols.

Answering NO will enable self-publishing mode.

Note

The wizard may need to ask you for additional information in later pages in order to complete the setup for self-publishing mode.

Wizard: CA Name

Normally with Krill, when first visiting the web UI, you will be prompted to enter the name of your Certificate Authority.

Krill Manager streamlines this process by asking you for the Certificate Authority name during the Krill Manager wizard. Once the wizard is complete Krill Manager will automatically create a CA in Krill with the name that you give here:

KRILL SETUP WIZARD: CA name                                   [next: Domains]
-----------------------------------------------------------------------------

What name would you like to use for your Certificate Authority?

Info: A Certificate Authority will be created in Krill for you using the
name that you specify.

> Certificate Authority name:

From Using the UI:

The handle you select is not published in the RPKI but used as identification to parent and child CAs you interact with. Please choose a handle that helps others recognise your organisation. Once set, the handle cannot be changed.

The CA name:
  • Can be used with the Krill API to manipulate the CA.
  • Will be shown in the Krill web UI.
  • Will be visible to child CAs.
  • Will appear as a component in URIs contained in RRDP snapshot XML and delta XML content.
  • Will be used as a component in the Rsync repository path for fetching content with the Rsync protocol.
  • Will appear inside .roa and .mft objects served via RRDP and Rsync.
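
Because the name ends up in RRDP and Rsync URIs, it is worth sanity-checking it before submitting it to the wizard. A minimal sketch of such a pre-check, assuming the handle rules commonly documented for Krill (ASCII letters, digits, '-' and '_'); the 255-character cap here is an assumption:

```python
import re

# Hypothetical pre-check. Krill handles are commonly documented as ASCII
# letters, digits, '-' and '_'; the 255-character cap is an assumption.
HANDLE_RE = re.compile(r"[A-Za-z0-9_-]{1,255}")

def is_valid_ca_name(name: str) -> bool:
    """Return True if `name` looks like an acceptable CA handle."""
    return HANDLE_RE.fullmatch(name) is not None

print(is_valid_ca_name("Acme-Corp-Intl"))  # True
print(is_valid_ca_name("Acme Corp"))       # False: spaces not allowed
```

Remember that, once set, the handle cannot be changed, so check it before the wizard completes.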

Wizard: Domains

Krill Manager needs to know which domain names your clients will be expected to use in order to contact your Krill services. The domains that you need to specify in this step are influenced by the choice you made in the Wizard: Publication Mode step:

  • 3rd Party Mode: only a single domain name for the Krill UI and API is required.

    KRILL SETUP WIZARD: Domains                            [next: Authentication]
    -----------------------------------------------------------------------------
    
    Which domain name(s) will Krill, RRDP and Rsync on this Droplet be reachable
    at?
    
    Warning: These domain names may be the same, or different. You should ensure
    that they are registered in DNS and resolve to this Droplet.
    
    > Krill domain:
    
  • Self-Publishing Mode: additional domains for RRDP and Rsync will be requested.

    KRILL SETUP WIZARD: Domains                            [next: Authentication]
    -----------------------------------------------------------------------------
    
    Which domain name(s) will Krill, RRDP and Rsync on this Droplet be reachable
    at?
    
    Warning: These domain names may be the same, or different. You should ensure
    that they are registered in DNS and resolve to this Droplet.
    
    > Krill domain: ca.demo.krill.cloud
    > RRDP domain: rrdp.demo.krill.cloud
    > Rsync domain: rsync.demo.krill.cloud
    

Warning

The domain names that you enter in this page of the wizard should already be configured to point at your Krill Manager IP address.

Note

Later in the process the wizard will offer to obtain Let's Encrypt certificates on your behalf for the Krill and RRDP domains that you supply on this page of the wizard.

Domain Validity

Krill Manager will attempt to look up the DNS records for the given domain names in order to verify that they are valid. If they are not found, Krill Manager will warn you.

If you are sure that the domain name is correct but DNS propagation has not completed yet, or for some other reason you would like to proceed, Krill Manager allows you to ignore the lookup failure:

KRILL SETUP WIZARD: Domains                            [next: Authentication]
-----------------------------------------------------------------------------

Which domain name(s) will Krill, RRDP and Rsync on this Droplet be reachable
at?

Warning: These domain names may be the same, or different. You should ensure
that they are registered in DNS and resolve to this Droplet.

> Krill domain: foo.bar
DNS lookup for this domain did not return any results

> Are you sure you wish to use this domain? [YES/NO]:

Wizard: Authentication

The Krill UI and API are secured by an authentication token. In this step of the wizard you can choose the token to use, or accept a token generated by Krill Manager for you:

KRILL SETUP WIZARD: Authentication                            [next: Logging]
-----------------------------------------------------------------------------

Please choose a token which you will use to authenticate both with the Krill
API and the Krill web portal, or accept the default.

> Authentication token: 529f463d-70c1-4b01-af33-c40ee8fbfa8a
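
The default token offered by the wizard is UUID-formatted. If you would rather supply your own, any high-entropy secret should work (assumption: Krill treats the token as an opaque string). Two standard-library options, sketched:

```python
import secrets
import uuid

# The wizard's generated default looks like a UUID. Any high-entropy
# string should work (assumption: Krill treats the token as opaque).
default_style = str(uuid.uuid4())     # same shape as the wizard default
url_safe = secrets.token_urlsafe(32)  # 43-character URL-safe alternative

print(default_style)
print(url_safe)
```

Whichever form you choose, store it safely: the same token is needed later for both the web portal and the API.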

Wizard: Logging

If you have an account with a 3rd party S3-like service such as DigitalOcean Spaces, Krill Manager can use it to store copies of the logs from your host operating system's systemd journal and from the various Krill Manager operated services, including Krill RFC exchange logs.

KRILL SETUP WIZARD: Logging             [next: Determining public IP address]
-----------------------------------------------------------------------------

Would you like logs (e.g. Krill logs and RFC protocol messages, NGINX and
RsyncD access and error logs, and operating system logs) to be uploaded
automatically to an AWS S3 compatible provider (e.g. DigitalOcean Spaces)?

For information about these services and to sign up and create a
storage "bucket" please visit the service provider web pages, e.g.:
  - https://aws.amazon.com/s3/
  - https://www.digitalocean.com/products/spaces/

> Would you like to upload logs to an AWS S3 compatible service? [YES/NO]:

Enter:
  • NO to skip this page and continue with the wizard.
  • YES to provide your S3-like service connection details.

Warning

If you do not choose to upload logs they are still available to you but in the event that the host suffers a failure you will lose these logs unless you capture them as part of a periodic backup process.

Tip

You can re-run krillmanager init later to enable log upload. However, note that only new logs from that moment on will be uploaded.

Providing Connection Details

After answering YES you will be prompted to enter the S3-like service connection and authentication details. You will need to obtain these from your S3-like service provider.

The wizard will try to detect the environment that it is running in and provide sensible default values where possible. In the example below, the S3 Endpoint value was set by the wizard because the Droplet on which Krill Manager was running was located in the DigitalOcean ams3 region.

> Would you like to upload logs to an AWS S3 compatible service? [YES/NO]: YES

Please provide the connection details for your bucket.

You may find the official documentation for your AWS S3 provider helpful, e.g.:
  - https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html
  - https://developers.digitalocean.com/documentation/spaces/#aws-s3-compatibility

> Bucket Name: krillmanagerdemo
> Bucket Directory: logs
> S3 Endpoint: ams3.digitaloceanspaces.com
> Access Key: ********************
> Secret Key: *******************************************

Attempting to list contents of bucket krillmanagerdemo to verify credentials..
Invoking S3 client..
Success!

Press any key to continue:

Once the wizard has the connection and authentication details it will attempt to verify them by trying to list the contents of the destination S3 bucket.

If the connection and/or authentication details are incorrect, the wizard will output error messages instead of Success! and return you to the initial yes/no question, where you can either try again or continue without log uploading for now.

Advanced Configuration

For more information about what is logged, how, to where and how to configure the logging setup beyond what is possible with the wizard, consult the Krill Manager Logging documentation.

Wizard: HTTPS Certificates

As advised in Proxy and HTTPS, Krill Manager makes Krill available to the Internet via the NGINX proxy server. When running in self-publishing mode (see Wizard: Publication Mode) NGINX is also used to offer the RRDP protocol to Relying Party clients.

To secure the connection NGINX requires a TLS certificate, either provided by you or requested on your behalf from Let's Encrypt by the Krill Manager wizard.

For each domain that requires a TLS certificate (just one domain for Krill, or, if using a separate domain for RRDP, that domain too) the wizard checks whether it already has a certificate and asks how you would like to proceed:

KRILL SETUP WIZARD: HTTPS certificates              [next: Verifying domains]
-----------------------------------------------------------------------------

Domain: ca.demo.krill.cloud

Checking for certificate files for domain:
  Certificate: Not found
  Private key: Not found

An HTTPS certificate and corresponding private key file are required for
this domain.

How would you like to proceed? Enter one of:
  - NEW: to request (or renew) a Lets Encrypt certificate, OR
  - OWN: to supply your own certificate files from /tmp, OR

> NEW or OWN:

Enter one of:
  • NEW to request a new Let's Encrypt certificate.
  • OWN to supply your own certificate files.
  • USE to use the existing certificate that the wizard found, if any.

Using Let's Encrypt Certificates

When using Let's Encrypt issued certificates Krill Manager will ensure that they are renewed before they expire.

Warning

When using your own certificates, instead of Krill Manager obtained Let's Encrypt certificates, you are responsible for replacing the certificate files before the certificates expire.
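
If you do manage your own certificates, a small scheduled check can warn you well before expiry. A minimal sketch using only the Python standard library, fed the notAfter string as printed by openssl x509 -noout -enddate -in <cert.pem>; the alerting threshold and cron wiring are left to you:

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> float:
    """not_after as printed by `openssl x509 -noout -enddate`,
    e.g. 'Jul  7 12:00:00 2020 GMT'."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return (expiry - now.timestamp()) / 86400

# Arbitrary example dates exactly 30 days apart.
now = datetime(2020, 6, 7, 12, 0, 0, tzinfo=timezone.utc)
print(days_until_expiry("Jul  7 12:00:00 2020 GMT", now))  # 30.0
```

A negative result means the certificate has already expired.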

DNS and Firewall Requirements

For Let's Encrypt to issue a TLS certificate the following requirements must be met:

  • A DNS A record for the domain name must point to the Krill Manager IP address.
  • The DNS A record must have sufficiently propagated around the global DNS network such that multiple Let's Encrypt probe locations around the world can all resolve the name correctly.
  • Port 80 on the Krill Manager instance must be open, both on the host and on any cloud firewall or proxy layer (e.g. load balancer) in front of the Krill Manager instance.
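
The first two requirements can be spot-checked from the instance itself. A minimal sketch; note it only proves local resolution, not propagation to Let's Encrypt's probe locations:

```python
import socket

def resolves_to(domain: str, expected_ip: str,
                resolver=socket.gethostbyname) -> bool:
    """Check that `domain` resolves to `expected_ip` from this host.

    This only proves local resolution; Let's Encrypt resolves the name
    from multiple vantage points, so allow time for DNS propagation."""
    try:
        return resolver(domain) == expected_ip
    except socket.gaierror:
        return False

# Offline demonstration; in real use pass your own domain and Droplet IP,
# e.g. resolves_to("ca.demo.krill.cloud", "198.51.100.2")
print(resolves_to("localhost", "127.0.0.1"))
```

Checking port 80 additionally requires probing from outside any cloud firewall, which a local script cannot do for you.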

IP Address Verification

Prior to requesting a Let's Encrypt certificate the wizard will ask you to confirm that DNS lookup results for the domain look correct.

KRILL SETUP WIZARD: HTTPS certificates              [next: Applying settings]
-----------------------------------------------------------------------------

Domain: ca.demo.krill.cloud

To respond to the Lets Encrypt HTTP-01 challenge, a standalone certbot web
server will be started on this Droplet on port 80.

info: In order for Lets Encrypt to issue a certificate for this domain there
must be a DNS A record pointing either to:

  - the IP address of this Droplet: 198.51.100.2, OR
  - the IP address of a proxy such as a load balancer or CDN

From this Droplet the DNS lookup result for the domain is:
  ca.demo.krill.cloud.        59      IN      A       198.51.100.2


> Are you sure you want to continue? [YES/NO]:

Let's Encrypt Request Log

If you approve, the wizard will then contact Let's Encrypt:

> Are you sure you want to continue? [YES/NO]: YES
Deleting any existing Lets Encrypt certificate files for this domain
Deleting any self-signed/provided certificate files for this domain
Stopping NGINX if running
Requesting Lets Encrypt certificate for domain demo.krill.cloud
letsencrypt: Saving debug log to /var/log/letsencrypt/letsencrypt.log
letsencrypt: Plugins selected: Authenticator standalone, Installer None
letsencrypt: Registering without email!
letsencrypt: Obtaining a new certificate
letsencrypt: Performing the following challenges:
letsencrypt: http-01 challenge for demo.krill.cloud
letsencrypt: Waiting for verification...
letsencrypt: Cleaning up challenges
letsencrypt: IMPORTANT NOTES:
letsencrypt:  - Congratulations! Your certificate and chain have been saved at:
letsencrypt:    /etc/letsencrypt/live/ca.demo.krill.cloud/fullchain.pem
letsencrypt:    Your key file has been saved at:
letsencrypt:    /etc/letsencrypt/live/ca.demo.krill.cloud/privkey.pem
letsencrypt:    Your cert will expire on 2020-07-07. To obtain a new or tweaked
letsencrypt:    version of this certificate in the future, simply run certbot
letsencrypt:    again. To non-interactively renew *all* of your certificates, run
letsencrypt:    "certbot renew"
letsencrypt:  - Your account credentials have been saved in your Certbot
letsencrypt:    configuration directory at /etc/letsencrypt. You should make a
letsencrypt:    secure backup of this folder now. This configuration directory will
letsencrypt:    also contain certificates and private keys obtained by Certbot so
letsencrypt:    making regular backups of this folder is ideal.
letsencrypt:  - If you like Certbot, please consider supporting our work by:
letsencrypt:    Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
letsencrypt:    Donating to EFF:                    https://eff.org/donate-le

Press any key to continue:

In this example the request succeeded. If any problems occurred the log would instead indicate the reason for the failure.

Once you press a key to continue you will be returned to the start of the HTTPS Certificates wizard page. The wizard will check whether it now has a certificate for the domain and, if so, will give you the option to USE it:

KRILL SETUP WIZARD: HTTPS certificates              [next: Verifying domains]
-----------------------------------------------------------------------------

Domain: ca.demo.krill.cloud

  Checking for certificate files for domain:
    Certificate: Found
    Private key: Found

  This certificate was issued for: subject=CN = ca.demo.krill.cloud
  This certificate was issued by : issuer=C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3

  How would you like to proceed? Enter one of:
    - USE: Use this certificate, OR
    - NEW: to request (or renew) a Lets Encrypt certificate, OR
    - OWN: to supply your own certificate files from /tmp, OR

  > NEW, OWN, or USE:

Wizard: Verifying Domains

Before applying all settings and starting the services, the wizard will run a basic check to verify that each configured domain resolves and can be connected to:

KRILL SETUP WIZARD: Verifying domains               [next: Applying settings]
-----------------------------------------------------------------------------

Running a basic check that each configured domain resolves to this Droplet
and can be connected to.
  - ca.demo.krill.cloud: OKAY
  - rrdp.demo.krill.cloud: OKAY
  - rsync.demo.krill.cloud: OKAY

Wizard: Applying Settings

At this stage the wizard has everything it needs to generate application configuration files based on the settings chosen in the earlier wizard pages and to launch the applications:

KRILL SETUP WIZARD: Applying settings                  [next: Setup complete]
-----------------------------------------------------------------------------

Generating Krill configuration file
Preparing NGINX configuration
Preparing RSYNCD configuration
Creating network krill_default
Creating service krill_cert_renewer
Creating service krill_host_metrics
Creating service krill_krill
Creating service krill_nginx
Creating service krill_nginx_metrics
Creating service krill_rsyncd
Waiting for services to become ready..
[###################################-----] 88% Starting services..

Once the applications are running the wizard will create the CA you requested (assuming no CA exists already), and in self-publishing mode the embedded Krill repository will be configured for use by the newly created CA:

Waiting for services to become ready..

Creating CA 'Acme-Corp-Intl'..
Registering CA 'Acme-Corp-Intl' with the embedded repository..

Wizard: Setup Complete

Once everything is set up, the wizard will report the status of the running services and the locations at which the services can be found:

KRILL SETUP WIZARD: Setup complete                                [next: END]
-----------------------------------------------------------------------------

Service status summary:
cert_renewer   1/1
host_metrics   1/1
krill          1/1
nginx          1/1
nginx_metrics  1/1
rsyncd         1/1
All services appear to be running.

Krill and related services should now be available as follows:
  - Krill Web Portal: https://ca.demo.krill.cloud/ (token: 4741d1f8-e317-488e-8c8a-a36e0cb16bf1)
  - RRDP URI        : https://rrdp.demo.krill.cloud/rrdp/
  - Rsync URI       : rsync://rsync.demo.krill.cloud/repo/
  - Prometheus monitoring endponts:
    - Krill         : http://ca.demo.krill.cloud:9657/metrics
    - NGINX         : http://ca.demo.krill.cloud:9113/metrics
    - Docker        : http://ca.demo.krill.cloud:9323/metrics
    - O/S           : http://ca.demo.krill.cloud:9100/metrics
    - Gluster       : http://ca.demo.krill.cloud:8080/metrics

Please consult the documentation for guidance on administering and
monitoring these services.

Thanks,

The NLnet Labs RPKI team.

Press any key to continue:

Verify that Krill is Running

Use the Krill Web Portal link and token to log in to the Krill UI, where you should see your newly created Certificate Authority and the details required to link your CA to a parent:

Krill UI screenshot.

Refer to the Krill Documentation to learn more about Krill.

Next Steps

Click Next or return to the index to continue learning about Krill Manager.

Using the CLI

Krill Manager is controlled via a command line interface (CLI) tool called krillmanager, separate from the krillc tool that can be used to manage a Krill server. This page documents how to use both in the context of a Krill Manager instance.

krillc

On a Krill Manager machine you can invoke the krillc command just as if you had installed Krill yourself. However, what you are actually invoking is a special wrapper provided by Krill Manager which simplifies and tailors the use of the krillc command to the Krill Manager context. You can read more about this in the krillmanager krillc documentation below.

krillmanager

Krill Manager supports the following commands:

# krillmanager --help

Usage: COMMAND [ARGUMENTS]

A tool for managing NLnet Labs Krill and related services.

Commands:
  backup   Backup Krill and supporting services state
  certs    List the TLS certificates in use by NGINX
  help     Display this message
  init     (Re)initialize DNS, TLS and Krill settings
  krillc   Execute Krill CLI commands
  logs     Show the service container logs
  renew    Renew expiring NGINX Lets Encrypt certificates
  restart  Restart Krill and supporting services
  restore  Restore Krill and supporting services state from a backup
  start    Start Krill and supporting services
  status   Show the status of the service containers
  stop     Stop Krill and supporting services
  upgrade  Upgrade Krill and supporting services

Querying the Version

# krillmanager --version
v0.2.2 [Krill: v0.6.2]

This tells you that Krill Manager is version 0.2.2, and that it deploys version 0.6.2 of Krill.

Command: backup

Creates a tar archive on the host filesystem containing all configuration files and data for Krill Manager and the components that it manages. This includes NGINX certificate files and Krill embedded repository data files. It does NOT include log files.

The path to the created archive will be printed to the terminal on completion of the backup. The backup archive can be restored later using the krillmanager restore command.
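
Before restoring an archive on a new instance you may want to inspect its contents first. This illustration builds a tiny archive shaped like a backup and lists it with Python's tarfile; the internal layout of a real Krill Manager archive is an assumption here:

```python
import io
import os
import tarfile
import tempfile

# Illustration only: build a tiny archive shaped like a Krill Manager
# backup (the real archive layout is an assumption) and list its
# contents, as you might before restoring a real backup-*.tgz.
with tempfile.TemporaryDirectory() as tmp:
    archive = os.path.join(tmp, "backup-20200406-125743.tgz")
    with tarfile.open(archive, "w:gz") as tar:
        data = b'log_level = "info"\n'
        member = tarfile.TarInfo("krill/krill.conf")
        member.size = len(data)
        tar.addfile(member, io.BytesIO(data))
    with tarfile.open(archive, "r:gz") as tar:
        names = tar.getnames()

print(names)  # ['krill/krill.conf']
```

The same listing can of course be done with tar tzf backup-YYYYMMDD-HHMMSS.tgz on the host.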

Warning

To avoid impacting your system, the archive is made while all applications are running. There is a very small chance that a Krill data file will be captured inconsistently in the backup.


Command: certs

This command outputs information both about the certificates in use by NGINX and about the certificates being managed by the Let's Encrypt certbot tool.


Command: help

Displays the usage summary.


Command: init

Runs the (re)configuration wizard. See Initial Setup.

The init command supports some useful options for test and clustered scenarios that are not available via the interactive wizard:

# krillmanager [--use-lets-encrypt-staging] [--private] init

The --use-lets-encrypt-staging option causes any Let's Encrypt certificate requests to be made to the Let's Encrypt staging environment rather than the production environment. This can be useful to avoid hitting Let's Encrypt rate limits in the production environment through repeated testing.

The --private option causes a self-signed certificate to be issued to NGINX for serving the RRDP FQDN. This might be of interest if running Krill Manager behind a proxy which itself has the real RRDP certificate.


Command: krillc

This command invokes the Krill CLI tool krillc.

Tip

You can also invoke this command as just krillc without the krillmanager prefix, just like in the krillc documentation.

In a Krill Manager instance there is no krillc binary installed on the host. Instead this command runs a throwaway Krill Docker container and invokes the krillc binary contained within.

Normally invoking krillc requires also defining environment variables or passing command line arguments to tell krillc where Krill is and how to authenticate with it. With Krill Manager this is taken care of for you automatically. If needed you can override the defaults using command line arguments in order to interact with a separate external instance of Krill.

Krill Manager also simplifies the interaction with the host filesystem by automatically remapping any paths to input files supplied on the command line so that they work when krillc accesses them from within the Docker container.


Command: logs

This command outputs the Docker service logs for key Krill Manager components. If invoked without any arguments it displays a usage tip:

# krillmanager logs
Usage: krillmanager logs <krill|nginx|rsyncd> [-f] [--tail=n]

The -f argument tells the command to keep following the log output.

The --tail argument tells the command to show only n lines of prior log output.


Command: renew

This command forces the Let's Encrypt certbot agent to attempt to renew any Let's Encrypt certificates that it is managing. If the certificates are renewed the NGINX instances will be signalled to reload the certificate files without causing any downtime.

Note

It shouldn't be necessary to use this command as it is triggered automatically once a day.


Command: restart

This command is an alias for stop followed by start.


Command: restore

This command restores a backup made previously by the backup command.

The restored data will be processed by the current Krill Manager version which may be newer than the version that created the backup. Any incompatibilities should be handled automatically by the restore process.

If Krill and related services were running when the restore process started Krill Manager will stop them prior to restore and start them again afterwards. Otherwise you will need to use the start command to start the services after restore.

Note

If the domain names referred to in the backup archive do not resolve to the external public IP address of the machine being restored to, the DNS setup or configuration in the archive may be incorrect. Krill Manager will advise against proceeding with the restore in this case. A valid scenario in which this can occur is when using a CDN for RRDP in which case the FQDN resolves to the CDN endpoint and not to the instance directly.


Command: start

Deploy all Krill Manager managed components according to the configuration settings chosen when the init command was last run.


Command: status

Display a status report indicating which of the Krill Manager components are running. It also shows a recap of key URIs that can be used to work with the Krill Manager instance.


Command: stop

Terminate all Krill Manager components.

Warning

This will cause clients to receive connection refused errors.


Command: upgrade

Checks whether a newer version of Krill Manager is available and, if so, offers to upgrade to it.

Note

A newer version of Krill Manager doesn't necessarily contain a newer version of Krill.

Monitoring

The available Prometheus endpoints for monitoring Krill Manager components can be determined using the krillmanager status command:

# krillmanager status
...
...
...
  - Prometheus monitoring endponts:
    - Krill         : http://<YOUR DOMAIN>:9657/metrics
    - NGINX         : http://<YOUR DOMAIN>:9113/metrics
    - Docker        : http://<YOUR DOMAIN>:9323/metrics
    - O/S           : http://<YOUR DOMAIN>:9100/metrics
    - Gluster       : http://<YOUR DOMAIN>:8080/metrics
    - Fluentd       : http://<YOUR DOMAIN>:24231/metrics

Note

Fluentd metrics are available from Krill Manager v0.2.2.

Note

In cluster mode the per-node metrics (NGINX, Docker, O/S and Gluster) should be queried on the node you are interested in; Krill Manager does NOT aggregate cluster metrics for you.

Tip

Krill metrics can be queried on any cluster node; NGINX will fetch them from Krill on whichever cluster node the single Krill instance is running.
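
All of these endpoints speak the Prometheus text exposition format, which is easy to inspect by hand. A minimal parser applied to a fabricated sample (the metric names below are illustrative, not Krill's actual output); labelled or timestamped metrics are not handled:

```python
# Minimal parser for the Prometheus text exposition format, applied to a
# fabricated sample (metric names are illustrative). Labelled or
# timestamped metric lines are not handled by this sketch.
SAMPLE = """\
# HELP krill_server_start timestamp of server start
# TYPE krill_server_start gauge
krill_server_start 1586277165
nginx_connections_active 3
"""

def parse_metrics(text: str) -> dict:
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        name, value = line.rsplit(None, 1)
        metrics[name] = float(value)
    return metrics

print(parse_metrics(SAMPLE))
```

For a live endpoint you could fetch http://<YOUR DOMAIN>:9657/metrics with urllib.request and feed the response body to parse_metrics.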

Visualisation

To visualise the monitoring endpoint metrics, deploy your own Prometheus and Grafana servers, e.g. using the DigitalOcean Marketplace Apps for Prometheus and Grafana.

Alternatively, if you don't mind losing your monitoring and alerting when your server has problems, you could deploy Prometheus and Grafana on your Krill server itself.

Add stanzas like the following to the scrape_configs section of the prometheus.yml file on the Prometheus server and restart Prometheus:

scrape_configs:
  ...
  ...
  ...
  - job_name: 'krill'
    static_configs:
    - targets: ['<YOUR DOMAIN>:9657']

  - job_name: 'nginx'
    static_configs:
    - targets: ['<YOUR DOMAIN>:9113']

Add http://<PROMETHEUS DOMAIN OR IP>:9090 as a datasource to Grafana.

Then import Grafana dashboards by ID.

Alerting

Grafana can be configured to send notifications to a variety of destination types when alert conditions are met.

Logging

In Krill Manager, when we refer to logs we primarily mean a series of (mainly) unstructured messages, not metrics such as the counters and gauges exposed by Prometheus endpoints.

On a Krill Manager host journald is the primary log subsystem, and Docker container logs are routed to the journal via the Docker journald logging driver.

Log Viewing
  • Host logs can be viewed in the usual way with journalctl and via files stored in /var/log/.
  • Primary Krill Manager logs can be viewed with the krillmanager logs command.
  • Other Krill Manager logs can be viewed with the docker service logs command.

Tip

In cluster mode krillmanager logs and docker service logs can be used to view logs even if the source container is on a slave cluster node.

Log Aggregation, Upload & Analysis

Using Fluentd, Krill Manager can:
  • aggregate journal logs across all cluster nodes.
  • stream journal logs to an AWS S3 compatible storage service.
  • stream journal logs to one of many 3rd party services for external processing and analysis.

Using s3cmd, Krill Manager can:
  • upload Krill RFC audit log files to an AWS S3 compatible storage service.

Note

Fluentd- and s3cmd-related Krill Manager Docker services are only created if log uploading was enabled during Initial Setup.

Upload Frequency

RFC protocol exchange logs are uploaded hourly. All other logs are uploaded at least every 10 minutes, more frequently if there is a lot of logging activity.

Force Flush

If needed you can force FluentD to flush its buffers which should cause it to stream any data it has pending to the destination, e.g. S3 compatible storage or a custom destination that you have configured:

  1. Use docker service ps krill_log_uploader to find the server running the log upload container.
  2. SSH to the server running the log upload container.
  3. Use docker ps to find the container ID or name of the krill_log_uploader container.
  4. Use docker kill -s USR1 <container ID/name> to send the flush signal to Fluentd.
  5. Use docker logs <container ID/name> to see that the flush was received and whether it caused any upload activity, e.g.:
# docker service logs --raw z1c6ksk6zvdx | fgrep flush
2020-04-21 08:44:25 +0000 [info]: #0 force flushing buffered events
2020-04-21 08:44:25 +0000 [info]: #0 flushing all buffer forcedly

Log Retention

When log upload is enabled, local copies of Krill RFC audit logs are deleted after two days as these logs can become quite large. All other logs are rotated according to the default journald behaviour and logrotate configuration.

Log Bucket Structure

When using the default s3.conf fluentd config file, uploaded logs are structured like so:

/<Bucket Directory>/rfc_trail
/<Bucket Directory>/YYYY/MM/DD/HH/<hostname>/<service>.<N>.gz
/<Bucket Directory>/YYYY/MM/DD/HH/<hostname>/<container>/<instance id>.<N>.gz

Where <Bucket Directory> is the value you provided to the wizard.
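
For scripted retrieval it can help to reconstruct these object keys. A sketch for the <service> variant of the layout above (the helper name is ours, not part of Krill Manager):

```python
from datetime import datetime, timezone

def log_object_key(bucket_dir: str, ts: datetime,
                   hostname: str, service: str, n: int) -> str:
    """Build the object key for a <service> log chunk following the
    /<Bucket Directory>/YYYY/MM/DD/HH/<hostname>/<service>.<N>.gz layout."""
    return f"/{bucket_dir}/{ts:%Y/%m/%d/%H}/{hostname}/{service}.{n}.gz"

ts = datetime(2020, 4, 7, 16, tzinfo=timezone.utc)
print(log_object_key("logs", ts, "demomaster", "sshd", 0))
# /logs/2020/04/07/16/demomaster/sshd.0.gz
```

The hour component means you can fetch a narrow time window without listing the whole bucket.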

Log File Format

The format of the files is dependent on the type of log file:

  • rfc_trail log files are in a Krill internal binary format.
  • <service> log files are in JSON format.
  • <container> log files are in JSON format with additional fields.

This SSHD log message shows a <service> log line example:

{
  "hostname": "demomaster",
  "source": "syslog",
  "syslog_id": "sshd",
  "ts_epoch_ms": "1586277165425045",
  "message": "Invalid user test from 104.236.250.88 port 49112"
}

This NGINX access log message shows a <container> log line example:

{
  "hostname": "demomaster",
  "source": "journal",
  "syslog_id": "6ef2bbf3eba9",
  "ts_epoch_ms": "1586278786997270",
  "container": "krill_nginx.w2ia8pd3b2kxqm77uwyepooqh.o3lv5trgdnykegaeo9ylhs9d5",
  "message": "::ffff:104.206.128.2 - - [07/Apr/2020:16:59:46 +0000] \"GET / HTTP/1.1\" 404 153 \"-\" \"https://gdnplus.com:Gather Analyze Provide.\" \"-\"",
  "image": "krillmanager/http-server:v0.1.0@sha256:f88c52b73abf86c3223dcf4c0cc3ff8351f61e74ee307aa8c420c9e0856678f7"
}
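
Since both formats are one JSON object per line, they are easy to slice with standard tools. For example, with jq (the record below is a trimmed copy of the sample above):

```shell
# Extract the message field of sshd log lines (sample record inlined).
LINE='{"hostname":"demomaster","source":"syslog","syslog_id":"sshd","message":"Invalid user test from 104.236.250.88 port 49112"}'
echo "${LINE}" | jq -r 'select(.syslog_id == "sshd") | .message'
```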
Custom Behaviour

警告

When providing custom configuration files you should use the krillmanager edit command to create and edit configuration files so that the changes are properly replicated across all cluster nodes.

Customising Log Streaming

Files in /fluentd-conf/*.conf can be edited with krillmanager edit to configure Fluentd according to your own design, streaming logs to any of the many 3rd party services that Fluentd supports. Configuration elements should be placed inside a label stanza like so:

<label @ready>
  <match **>
    @type s3
    ..
  </match>
</label>

When working with Fluentd configuration files note the following useful commands:

# Reload the Fluentd configuration:
docker service update --force krill_log_uploader

# Flush Fluentd output buffers:
docker kill -s SIGUSR1 <krill_log_uploader container name/id>
Diagnosing Streaming Problems

Krill Manager v0.2.2 added a Fluentd Prometheus metrics endpoint on port 24231 at /metrics. The statistics published at this endpoint can help identify whether events are being received and handled by the expected Fluentd output plugins.
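For example, you can inspect the output plugin counters on the node running the log uploader. The fluentd_output_status_* metric family is exported by fluent-plugin-prometheus; exact metric names may vary by plugin version:

```shell
# List Fluentd output plugin counters from the Prometheus metrics endpoint.
curl -s http://localhost:24231/metrics | grep '^fluentd_output_status'
```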

Customising Audit Log Upload

The /s3cmd-conf/s3cmd.conf file can be edited with krillmanager edit to take advantage of any additional features of your S3-like service provider that s3cmd supports.

Analysis Examples
Manual Log Analysis

ちなみに

Upload to an AWS S3 compatible service is primarily intended for archival and root cause analysis after an incident. If your intention is to extract interesting metrics, or you would like a more visual way to interact with your logs, we suggest feeding tools like Grafana Loki or Elasticsearch from Fluentd.

Assuming that you have configured Krill Manager to store logs in a DigitalOcean Space, you can generate a report of RRDP clients visiting your Krill Manager instance on a particular date like so:

532 RIPE NCC RPKI Validator/3.1-2020.01.13.09.31.26
515 reqwest/0.9.19
190 Jetty/9.4.15.v20190215
101 RIPE NCC RPKI Validator/3.1-2019.12.16.15.18.18
 81 Routinator/0.7.0
...

Such a report can be produced using commands like those below:

$ DATE_OF_INTEREST="2020/05/11"
$ S3_BUCKET_NAME="my-bucket-name"
$ export AWS_ACCESS_KEY_ID="your-access-key"
$ export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
$ docker run -it --rm \
   -v /tmp/logs:/mnt/logs \
   -e AWS_ACCESS_KEY_ID \
   -e AWS_SECRET_ACCESS_KEY \
   --entrypoint=s3cmd \
   krillmanager/log-uploader:v0.1.1 \
     get \
       -r \
       --host-bucket="%(bucket)s.ams3.digitaloceanspaces.com" \
       --rexclude=".*" \
       --rinclude=".*${DATE_OF_INTEREST}.*/krill_nginx/.*" \
       s3://${S3_BUCKET_NAME}/logs/ /mnt/logs/
$ find /tmp/logs/ \
    -name '*.gz' \
    -exec zcat {} \; | \
      jq -r '.message | select(contains("/rrdp/"))' | \
        grep -oP '[0-9]+ [0-9]+ "-" \K"[^"]+"' | \
          cut -d '"' -f 2 | \
            sort | \
              uniq -c | \
                sort -rn
Streaming to Elasticsearch

注釈

The examples below require Krill Manager v0.2.2 or higher.

Using the Fluentd support integrated into Krill Manager you can stream logs to 3rd party log analysis tools such as EFK (Elasticsearch, Fluentd and Kibana).

When streaming to an external service you can do so either:

  • Instead of streaming to an S3 storage backend: replace s3.conf.
  • In addition to streaming to an S3 storage backend: modify s3.conf and add additional Fluentd config files.

Below is an example configuration for sending rsync access logs to Elasticsearch:

# elastic-search.conf
<label @ready>
  <filter **>
    @type grep
    <regexp>
      key container
      pattern /krill_rsyncd\..+/
    </regexp>
  </filter>

  <filter **>
    # Given a log record with a message field whose value is like:
    #   2020/05/11 23:59:59 [31881] connect from UNDETERMINED (105.16.160.2)
    @type parser
    key_name message
    reserve_data true
    <parse>
      @type regexp
      expression /^(?<datetime>\d+\/\d+\/\d+ \d+:\d+:\d+) \[(?<unknown>[^]]*)\] connect from (?<client_host>[^ ]+) \((?<client_ip>[^)]*)\)$/
    </parse>
  </filter>

  <match **>
    @type elasticsearch
    host elasticsearch.mydomain.com
    port 9200
    logstash_format true
  </match>
</label>

A similar technique can be used to stream NGINX access logs, using the built-in nginx parser in Fluentd. However, if you use a CDN (content delivery network) in front of your Krill Manager instance(s) you'll want to analyze the CDN provider's logs, not the NGINX logs.

To stream rsync access logs to Elasticsearch but also still upload all logs to an S3 compatible storage target, use a copy configuration like so:

# copy.conf
<label @ready>
  <match **>
    @type copy
    <store>
      @type relabel
      @label @s3
    </store>
    <store>
      @type relabel
      @label @elastic-search
    </store>
  </match>
</label>

# elasticsearch.conf
<label @elastic-search>
  # the remainder of this file is the same as above
</label>

# s3.conf
<label @s3>
  # the remainder of this file is the same as the stock s3.conf file
  # that comes with Krill Manager.
</label>
Installing Additional Fluentd Plugins

Krill Manager comes with the following Fluentd plugins pre-installed:

  • fluent-plugin-elasticsearch
  • fluent-plugin-prometheus
  • fluent-plugin-rewrite-tag-filter
  • fluent-plugin-s3
  • fluent-plugin-systemd

注釈

The Elasticsearch plugin is included with Krill Manager from v0.2.2.

If you need a plugin that is not pre-installed, you can install it into the running container and commit the result as a custom image:

$ CONTAINER_ID=$(sudo docker ps -q --filter "name=krill_log_uploader")
$ sudo docker exec -it ${CONTAINER_ID} /bin/bash
# gem install fluent-plugin-XXX
# exit
$ sudo docker commit ${CONTAINER_ID} krillmanager/log-streamer:custom
$ sudo docker service update krill_log_uploader --image krillmanager/log-streamer:custom

警告

An upgrade of Krill Manager may cause the service to revert to a stock Krill Manager image. Repeat the steps above to re-install the missing plugin. You can also request inclusion of the plugin in the next Krill Manager release by submitting an issue to the Krill Manager GitHub issue tracker.

Cluster Mode

Krill Manager supports running on a cluster of servers but by default assumes that it is not part of a cluster.

Setting up a Cluster
Activate Cluster Mode

There is no support in the Initial Setup wizard for activating cluster mode; instead it must be done via command line arguments to the wizard.

After deploying N servers running Krill Manager, e.g. N instances of the DigitalOcean Marketplace 1-Click App, execute the following commands via SSH:

# open-cluster-ports                               # on both master and slave
# krillmanager --slave-ips=<ipv4,ipv4,...> init    # on the master only
# krillmanager --slave=1 init                      # on the slaves only

Example:

In a shell:
$ ssh root@slave1.rpki.example.com
# open-cluster-ports
# krillmanager --slave=1 init
...
Slave initialized

In another shell:
$ ssh root@slave2.rpki.example.com
# open-cluster-ports
# krillmanager --slave=1 init
...
Slave initialized

In another shell:
$ ssh root@master.rpki.example.com
# open-cluster-ports
# krillmanager --slave-ips=10.0.0.2,10.0.0.3 init
Joining slave at 10.0.0.2 to our GlusterFS cluster
Joining slave at 10.0.0.3 to our GlusterFS cluster
...
Waiting for all GlusterFS peers to become 'Connected'.
...
Initializing Swarm manager at <some.public.ip.address>
Sharing Swarm join token via GlusterFS
Waiting for 2 swarm workers to be in status 'Ready'
Waiting for 1 swarm workers to be in status 'Ready'
...

警告

open-cluster-ports is a simple helper script that opens to the world the ports required for cluster servers to communicate with each other. In a production setup you should restrict access so that these ports are only open between cluster servers and not to the wider Internet, either via ufw or via a cloud firewall.
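As a sketch, assuming the typical Docker Swarm and GlusterFS default ports (an assumption; verify against what open-cluster-ports actually opens on your servers), a ufw policy allowing one peer could look like:

```shell
# Allow cluster traffic only from a fellow cluster server; repeat per peer.
# Ports are the usual Docker Swarm (2377, 7946, 4789) and GlusterFS (24007)
# defaults, not taken from open-cluster-ports itself.
PEER=10.0.0.2
for p in 2377/tcp 7946/tcp 7946/udp 4789/udp 24007/tcp; do
  sudo ufw allow from "${PEER}" to any port "${p%/*}" proto "${p#*/}"
done
```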

Deploy & Configure a Load Balancer

For requests to be able to reach the Krill Manager servers, the load balancer must be configured to forward ports:

Port Forwarding Rules:

Port  Protocol  Required For
80    HTTP      HTTP -> HTTPS redirect; Let's Encrypt HTTP-01 challenge responses
443   HTTPS     Krill UI; Krill API; RRDP
873   TCP       Rsync

TLS Termination: Either configure your load balancer with a TLS certificate or set it to pass through TLS traffic still encrypted to the cluster servers.

Health Check: In order for the load balancer to route traffic only to healthy cluster servers you should configure a health check.

Proxy Protocol: Do NOT enable Proxy Mode on your load balancer. See the F.A.Q. item below for more information.

Configure DNS

In order to request a Let's Encrypt TLS certificate via Krill Manager, the cluster servers need to be reachable via the desired DNS name, e.g. via a DNS A or CNAME record.
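
You can sanity-check the record before running the wizard; rpki.example.com below is a placeholder for your own name:

```shell
# Both should print the address(es) of your load balancer / cluster servers.
dig +short A rpki.example.com
dig +short AAAA rpki.example.com
```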

F.A.Q.
Should I Use a Cluster?

Whether cluster mode is needed or is the right way to achieve your objectives depends on your particular use case. If using a 3rd party repository and only a few ROAs, then you probably don't need a cluster.

A cluster provides various benefits including:

  1. Higher availability - loss of a cluster server, whether due to a failure or during an upgrade, does not make the service unavailable to clients.
  2. Scalability - RRDP and Rsync requests can be served by multiple servers instead of just one.

A cluster also comes with some costs, e.g.:

  1. The obvious cost of running more (virtual) hardware.
  2. The complexity cost of operating and maintaining a cluster, though Krill Manager greatly reduces this.
How Is Cluster Mode Different To Normal Mode?

The main difference is that instead of having one server running NGINX and RsyncD, in cluster mode every cluster server will run NGINX and RsyncD.

In clustered mode the Gluster volume enables Krill Manager to replicate configuration, TLS certificates, RRDP and Rsync repo contents, etc. to every cluster server.

Why Not Just Use a CDN?

Currently, Relying Party software communicates with RPKI repository servers using the Rsync protocol, and most implementations also support the RRDP protocol.

Using a CDN (e.g. Fastly as used by the NLnet Labs production Krill deployment) should increase availability, increase capacity and decrease latency, but only for RRDP, not for Rsync. One could argue that Rsync is being rapidly obsoleted by RRDP and it is only a matter of time before Rsync is not used by Relying Parties at all.

Where Should My Cluster Servers Be Located?

Depending on how many 9's of uptime/availability you are aiming for, you should consider whether your cluster servers are separate enough from each other, e.g. several VMs running on the same server or in the same rack is less robust than spreading the VMs across cloud availability zones or across regions.

Note however that the further apart your cluster servers are from each other the longer it may take Gluster to keep the replicated volume contents consistent.

Also, not all load balancing technologies support wider separation, e.g. a cloud load balancer may be able to balance across VMs in one region but not across regions.

How Can I Balance Traffic Across My Cluster?

You can use a load balancer (e.g. the DigitalOcean Load Balancer), anycast IP, a CDN provider, geographic/latency based DNS, etc.

Is Proxy Protocol supported?

Not yet. Without Proxy Protocol you will likely see the IP addresses of the proxy in your NGINX and RsyncD logs rather than those of the real clients.

How Can a Proxy Check the Backend Health?

Krill Manager does not yet offer a dedicated health check endpoint. When using a load balancer or other proxy that supports health checks you are currently limited to testing TCP or HTTP(S) connectivity. For example if using a single DigitalOcean Load Balancer you can check either connectivity to NGINX or to RsyncD but not both. A dedicated Krill Manager health check endpoint would allow you to direct traffic to the cluster server only if all services were green.
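Given those limitations, the probes you can configure today amount to simple connectivity checks, for example (rpki.example.com is a placeholder):

```shell
# HTTP(S) probe against NGINX; expect an HTTP status code, not a timeout.
curl -sk -o /dev/null -w '%{http_code}\n' https://rpki.example.com/
# TCP probe against RsyncD on port 873.
nc -z -w 3 rpki.example.com 873 && echo "rsyncd reachable"
```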

What Happens If a Cluster Server Becomes Unreachable?

If your proxy detects that the backend is unreachable then clients (possibly after some delay) will no longer be routed to the "dead" server but will continue to be able to access RRDP and Rsync endpoints on the remaining servers.

If your proxy monitors the health of the backend services and the health check fails then connections to that service will be routed to other "healthy" servers. However, as noted above, the current health check options are not perfect.

If the "unhealthy" cluster server is a slave and the "master" loses its connection to the slave then any Krill Manager components that were running only on that cluster server will be re-launched on a remaining "healthy" cluster server.

If the "unhealthy" cluster server is the "master" then any Krill Manager components that were running only on that cluster server will be lost and you will need to manually fix the Docker Swarm and Gluster clusters. However, note that NGINX and RsyncD run on every cluster server and so clients will still be able to get the last synced RRDP and Rsync data from the remaining "healthy" cluster servers. You may however lose Krill and/or log streaming/uploading services.

Can I Use Plain HTTP Behind a Load Balancer?

No, Krill Manager does not support this.

Can I Use Self-Signed TLS Certificates Behind a Load Balancer?

In the case where the load balancer handles TLS termination, to avoid having to install and renew real certificates on both the load balancer and the cluster servers the --private argument can be used on the master. This will cause Krill Manager to generate self-signed certificates for the cluster NGINX instances. E.g.

# krillmanager --slave-ips=<ipv4>,<ipv4>,... --private init
How is the cluster established?
  1. The master server activates Docker Swarm mode becoming a Swarm Manager.
  2. The master server adds the other servers as Gluster peers.
  3. The master server creates a Gluster replication volume across the peers. Each peer will have a complete copy of the data written to the volume.
  4. The master server writes the Docker Swarm join token to the Gluster volume.
  5. The slave servers detect the join token and use it to join the Docker Swarm.
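The result can be verified from the master with the standard Gluster and Docker Swarm status commands (exact output varies by version):

```shell
sudo gluster peer status   # every peer should report State: Connected
sudo docker node ls        # every node should show STATUS 'Ready'
```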
Can I add or remove cluster servers later?
  1. Run open-cluster-ports and krillmanager --slave=1 init as usual on any new slave servers.
  2. Run krillmanager --slave-ips=<ipv4>,<ipv4>,... init on the master cluster server with the new set of IPv4 cluster slave addresses:
    • Any missing slave IP addresses will cause Krill Manager to forcibly disconnect those slaves from the Gluster cluster.
    • Any new slave IP addresses will be added to the Gluster cluster.
    • The new slaves will then add themselves to the Swarm cluster.
  3. Terminate the removed slave servers.
Is the Swarm Manager highly available?

No. This could be done but adds complexity while adding little value. If the manager server is lost, the worst case is that the Krill UI and API become unavailable (if Krill was running on the Swarm Manager server); RRDP and Rsync endpoints will continue to be available.

Is the Docker Swarm Routing Mesh Used?

No, the NGINX (HTTP(S)/RRDP) and Rsync containers bind directly to the host interface ensuring that IPv6 is supported and eliminating an unnecessary extra proxy hop.

Routinator

Routinator 3000 is free, open source RPKI Relying Party software written by NLnet Labs in the Rust programming language.

The application is designed to be lightweight and highly portable. This means it can run on any Unix-like operating system, but also works on Microsoft Windows. Due to its lean design, it runs effortlessly on minimalist hardware such as a Raspberry Pi. Monitoring is possible through the built-in Prometheus endpoint, which allows you to build dashboards for detailed insights.

Routinator connects to the Trust Anchors of the five Regional Internet Registries (RIRs) — APNIC, AFRINIC, ARIN, LACNIC and RIPE NCC — downloads all of the certificates and ROAs in the various repositories, verifies the signatures and makes the result available for use in the BGP workflow. It can perform RPKI validation as a one-time operation and store the result on disk in formats such as CSV, JSON and RPSL, or run as a service that periodically fetches and verifies RPKI data. The data is then served via the built-in HTTP server, or fetched from RPKI-capable routers via the RPKI-RTR protocol.

If you run into a problem with Routinator or you have a feature request, please create an issue on GitHub. We are also happy to accept your pull requests. For general discussion and exchanging operational experiences we provide a mailing list. This is also the place where we will announce releases of the application and updates on the project.

You can follow the adventures of Routinator on Twitter and listen to its favourite songs on Spotify.

Installation

Getting started with Routinator is really easy: you can either build from source with Cargo or run it with Docker.

Quick Start

Assuming you have a newly installed Debian or Ubuntu machine, you will need to install rsync, the C toolchain and Rust. You can then install Routinator and start it up as an RTR server listening on 192.0.2.13 port 3323 and serving HTTP on port 9556 (substitute an address of your own machine):

apt install rsync build-essential
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
cargo install --locked routinator
routinator init
# Follow instructions provided
routinator server --rtr 192.0.2.13:3323 --http 192.0.2.13:9556

If you have an older version of Rust and Routinator, you can update via:

rustup update
cargo install --locked --force routinator

If you want to try the master branch from the repository instead of a release version, you can run:

cargo install --git https://github.com/NLnetLabs/routinator.git

Quick Start with Docker

Due to the impracticality of complying with the ARIN TAL distribution terms in an unsupervised Docker environment, before launching the container it is necessary to first review and agree to the ARIN Relying Party Agreement (RPA). If you agree to the terms, you can let the Routinator Docker image install the TALs into a mounted volume that is later reused for the server:

# Create a Docker volume to persist TALs in
sudo docker volume create routinator-tals
# Review the ARIN terms.
# Run a disposable container to install TALs.
sudo docker run --rm -v routinator-tals:/home/routinator/.rpki-cache/tals \
    nlnetlabs/routinator init -f --accept-arin-rpa
# Launch the final detached container named 'routinator' exposing RTR on
# port 3323 and HTTP on port 9556
sudo docker run -d --restart=unless-stopped --name routinator -p 3323:3323 \
     -p 9556:9556 -v routinator-tals:/home/routinator/.rpki-cache/tals \
     nlnetlabs/routinator

System Requirements

At this time, the size of the global RPKI data set is about 500MB. Cryptographic validation of it takes Routinator about 2 seconds on a quad-core i7.

When choosing a system to run Routinator on, make sure you have 1GB of available memory and 1GB of disk space. This will give you ample margin for the RPKI repositories to grow over time, as adoption increases.

Getting Started

There are three things you need to install and run Routinator: rsync, a C toolchain and Rust. You can install Routinator on any system where you can fulfil these requirements.

You need rsync because most RPKI repositories currently use it as their main means of distribution. Some of the cryptographic primitives used by Routinator require a C toolchain. Lastly, you need Rust because that’s the programming language that Routinator has been written in.

rsync

Currently, Routinator requires the rsync executable to be in your path. Due to the nature of rsync, it is unclear exactly which minimum version you need, but whatever is shipped with current Linux and *BSD distributions and macOS should be fine. Alternatively, you can download rsync from its website.

On Windows, Routinator requires the rsync version that comes with Cygwin – make sure to select rsync during the installation phase.

C Toolchain

Some of the libraries Routinator depends on require a C toolchain to be present. Your system probably has some easy way to install the minimum set of packages to build from C sources. For example, apt install build-essential will install everything you need on Debian/Ubuntu.

If you are unsure, try to run cc on a command line and if there’s a complaint about missing input files, you are probably good to go.

Rust

The Rust compiler runs on, and compiles to, a great number of platforms, though not all of them are equally supported. The official Rust Platform Support page provides an overview of the various support levels.

While some system distributions include Rust as system packages, Routinator relies on a relatively new version of Rust, currently 1.40 or newer. We therefore suggest using the canonical Rust installation via a tool called rustup.

To install rustup and Rust, simply do:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Alternatively, visit the official Rust website for other installation methods.

You can update your Rust installation later by running:

rustup update

Building

The easiest way to get Routinator is to leave it to cargo by saying:

cargo install --locked routinator

If you want to try the master branch from the repository instead of a release version, you can run:

cargo install --git https://github.com/NLnetLabs/routinator.git

If you want to update an installed version, run the same command but add the -f flag ("force") to approve overwriting the installed version.

The command will build Routinator and install it in the same directory that cargo itself lives in, likely $HOME/.cargo/bin. This means Routinator will be in your path, too.

Notes

In case you want to build a statically linked Routinator, or you have an Operating System where special care needs to be taken, such as OpenBSD and CentOS, please refer to the Installation Notes section.

Installation Notes

In certain scenarios and on some platforms specific steps are needed in order to get Routinator working as desired.

Statically Linked Routinator

While Rust binaries are mostly statically linked, they depend on libc which, at least in the case of the glibc that is standard on Linux systems, is somewhat difficult to link statically. This is why Routinator binaries are actually dynamically linked on glibc systems and can only be transferred between systems with the same glibc versions.

However, Rust can build binaries based on the alternative implementation named musl that can easily be statically linked. Building such binaries is easy with rustup. You need to install musl and the correct musl target such as x86_64-unknown-linux-musl for x86_64 Linux systems. Then you can just build Routinator for that target.

On a Debian (and presumably Ubuntu) system, enter the following:

sudo apt-get install musl-tools
rustup target add x86_64-unknown-linux-musl
cargo build --target=x86_64-unknown-linux-musl --release
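
You can confirm the result with file; the path below assumes the cargo build invocation above:

```shell
# Should report "statically linked" rather than "dynamically linked".
file target/x86_64-unknown-linux-musl/release/routinator
```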

Platform Specific Instructions

ちなみに

GÉANT has created an Ansible playbook defining a role to deploy Routinator on Ubuntu.

For some platforms, rustup cannot provide binary releases to install directly. The Rust Platform Support page lists several platforms where official binary releases are not available, but Rust is still guaranteed to build. For these platforms, automated tests are not run so it’s not guaranteed to produce a working build, but they often work to quite a good degree.

OpenBSD

On OpenBSD, patches are required to get Rust running correctly, but these are well maintained and offer the latest version of Rust quite quickly.

Rust can be installed on OpenBSD by running:

pkg_add rust
CentOS 6

The standard installation method does not work when using CentOS 6. Here, you will end up with a long list of error messages about missing assembler instructions. This is because the assembler shipped with CentOS 6 is too old.

You can get the necessary version by installing the Developer Toolset 6 from the Software Collections repository. On a virgin system, you can install Rust using these steps:

sudo yum install centos-release-scl
sudo yum install devtoolset-6
scl enable devtoolset-6 bash
curl https://sh.rustup.rs -sSf | sh
source $HOME/.cargo/env
SELinux using CentOS 7

This guide, contributed by Rich Compton, describes how to run Routinator on Security Enhanced Linux (SELinux) using CentOS 7.

  1. Start by setting the hostname.
sudo nmtui-hostname
Hostname will be set
  2. Set the interface and connect it.

注釈

Ensure that "Automatically connect" and "Available to all users" are checked.

sudo nmtui-edit
  3. Install the required packages.
sudo yum check-update
sudo yum upgrade -y
sudo yum install -y epel-release
sudo yum install -y vim wget curl net-tools lsof bash-completion yum-utils \
    htop nginx httpd-tools tcpdump rust cargo rsync policycoreutils-python
  4. Set the timezone to UTC.
sudo timedatectl set-timezone UTC
  5. Remove postfix as it is unneeded.
sudo systemctl stop postfix
sudo systemctl disable postfix
  6. Create a self-signed certificate for NGINX.
sudo mkdir /etc/ssl/private
sudo chmod 700 /etc/ssl/private
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/private/nginx-selfsigned.key \
    -out /etc/ssl/certs/nginx-selfsigned.crt
# Populate the relevant information to generate a self signed certificate
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
  7. Add in the ssl.conf file to /etc/nginx/conf.d/ssl.conf and edit the ssl.conf file to provide the IP of the host in the server_name field.
  8. Replace /etc/nginx/nginx.conf with the nginx.conf file.
  9. Set the username and password for the web interface authentication.
sudo htpasswd -c /etc/nginx/.htpasswd <username>
  10. Start Nginx and set it up so it starts at boot.
sudo systemctl start nginx
sudo systemctl enable nginx
  11. Add the user "routinator", create the /opt/routinator directory and assign it to the "routinator" user and group.
sudo useradd routinator
sudo mkdir /opt/routinator
sudo chown routinator:routinator /opt/routinator
  12. Sudo into the routinator user.
sudo su - routinator
  13. Install Routinator and add it to the $PATH for user "routinator".
cargo install routinator
vi /home/routinator/.bash_profile
Edit the PATH line to include "/home/routinator/.cargo/bin"
PATH=$PATH:$HOME/.local/bin:$HOME/bin:/home/routinator/.cargo/bin
  14. Initialise Routinator, accept the ARIN TAL and exit back to the user with sudo.
/home/routinator/.cargo/bin/routinator -b /opt/routinator init -f --accept-arin-rpa
exit
  15. Create a routinator systemd script using the template below.
sudo vi /etc/systemd/system/routinator.service
[Unit]
Description=Routinator RPKI Validator and RTR Server
After=network.target
[Service]
Type=simple
User=routinator
Group=routinator
Restart=on-failure
RestartSec=90
ExecStart=/home/routinator/.cargo/bin/routinator -v -b /opt/routinator server \
    --http 127.0.0.1:8080 --rtr <IPv4 IP>:8323 --rtr [<IPv6 IP>]:8323
TimeoutStartSec=0
[Install]
WantedBy=default.target

注釈

You must populate the IPv4 and IPv6 addresses. In addition, the IPv6 address needs to have brackets '[ ]' around it. For example:

/home/routinator/.cargo/bin/routinator -v -b /opt/routinator server \
--http 127.0.0.1:8080 --rtr 172.16.47.235:8323 --rtr [2001:db8::43]:8323
  16. Configure SELinux to allow connections to localhost and to allow rsync to write to the /opt/routinator directory.
sudo setsebool -P httpd_can_network_connect 1
sudo semanage permissive -a rsync_t
  17. Reload the systemd daemon and set the routinator service to start at boot.
sudo systemctl daemon-reload
sudo systemctl enable routinator.service
sudo systemctl start routinator.service
  18. Set up the firewall to permit ssh, HTTPS and port 8323 for the RTR protocol.
sudo firewall-cmd --permanent --remove-service=ssh --zone=public
sudo firewall-cmd --permanent --zone public --add-rich-rule='rule family="ipv4" \
    source address="<IPv4 management subnet>" service name=ssh accept'
sudo firewall-cmd --permanent --zone public --add-rich-rule='rule family="ipv6" \
    source address="<IPv6 management subnet>" service name=ssh accept'
sudo firewall-cmd --permanent --zone public --add-rich-rule='rule family="ipv4" \
    source address="<IPv4 management subnet>" service name=https accept'
sudo firewall-cmd --permanent --zone public --add-rich-rule='rule family="ipv6" \
    source address="<IPv6 management subnet>" service name=https accept'
sudo firewall-cmd --permanent --zone public --add-rich-rule='rule family="ipv4" \
    source address="<peering router IPv4 loopback subnet>" port port=8323 protocol=tcp accept'
sudo firewall-cmd --permanent --zone public --add-rich-rule='rule family="ipv6" \
    source address="<peering router IPv6 loopback subnet>" port port=8323 protocol=tcp accept'
sudo firewall-cmd --reload
  19. Navigate to https://<IP address of rpki-validator>/metrics to see if it's working. Authenticate with the username and password that you set for the web interface authentication.

Initialisation

Before running Routinator for the first time, you must prepare its working environment. You do this using the init command. This will prepare both the directory for the local RPKI cache, as well as the Trust Anchor Locator (TAL) directory.

By default, both directories will be located under $HOME/.rpki-cache, but you can change their locations via the command line options --repository-dir and --tal-dir.
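
For example, to keep everything under /var/lib/routinator (a hypothetical layout) rather than $HOME/.rpki-cache:

```shell
routinator --repository-dir /var/lib/routinator/rpki-cache \
           --tal-dir /var/lib/routinator/tals \
           init
```

Note that later invocations (e.g. routinator vrps or routinator server) must then be given the same directory options, or a config file that sets them.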

TALs provide hints for the trust anchor certificates to be used both to discover and validate all RPKI content. The five TALs — one for each Regional Internet Registry (RIR) — are bundled with Routinator and installed by the init command.

警告

Using the TAL from ARIN, the RIR for the United States, Canada as well as many Caribbean and North Atlantic islands, requires you to read and accept their Relying Party Agreement before you can use it. Running the init command will provide you with instructions.

routinator init
Before we can install the ARIN TAL, you must have read
and agree to the ARIN Relying Party Agreement (RPA).
It is available at

https://www.arin.net/resources/manage/rpki/rpa.pdf

If you agree to the RPA, please run the command
again with the --accept-arin-rpa option.

Running the init command with the --accept-arin-rpa option will create the TAL directory and copy the five Trust Anchor Locator files into it.

routinator init --accept-arin-rpa

If you decide you cannot agree to the ARIN RPA terms, the --decline-arin-rpa option will install all TALs except the one for ARIN. If, at a later point, you wish to use the ARIN TAL anyway, you can add it to your current installation using the --force option, to force the installation of all TALs.
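Sketched as commands, the two paths described above look like this:

```shell
# Install all TALs except ARIN's:
routinator init --decline-arin-rpa

# Later, after accepting the RPA, (re)install all five TALs:
routinator init --force --accept-arin-rpa
```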

Performing a Test Run

To see if Routinator has been initialised correctly and your firewall allows the required connections, it is recommended to perform an initial test run. You can do this by having Routinator print a validated ROA payload (VRP) list with the vrps sub-command, and using -v to increase the log level to INFO to see if Routinator establishes rsync and RRDP connections as expected.

routinator -v vrps

Now, you can see how Routinator connects to the RPKI trust anchors, downloads the contents of the repositories to your machine, validates them and produces a list of validated ROA payloads in the default CSV format on standard output. From a cold start, this process will take a couple of minutes.

routinator -v vrps
rsyncing from rsync://repository.lacnic.net/rpki/.
rsyncing from rsync://rpki.afrinic.net/repository/.
rsyncing from rsync://rpki.apnic.net/repository/.
rsyncing from rsync://rpki.ripe.net/ta/.
rsync://rpki.ripe.net/ta: The RIPE NCC Certification Repository is subject to Terms and Conditions
rsync://rpki.ripe.net/ta: See http://www.ripe.net/lir-services/ncc/legal/certification/repository-tc
rsync://rpki.ripe.net/ta:
Found valid trust anchor rsync://rpki.ripe.net/ta/ripe-ncc-ta.cer. Processing.
rsyncing from rsync://rpki.ripe.net/repository/.
Found valid trust anchor rsync://rpki.afrinic.net/repository/AfriNIC.cer. Processing.
rsyncing from rsync://rpki.arin.net/repository/.
Found valid trust anchor rsync://rpki.arin.net/repository/arin-rpki-ta.cer. Processing.
Found valid trust anchor rsync://rpki.apnic.net/repository/apnic-rpki-root-iana-origin.cer. Processing.
rsyncing from rsync://rpki.apnic.net/member_repository/.
Found valid trust anchor rsync://repository.lacnic.net/rpki/lacnic/rta-lacnic-rpki.cer. Processing.
rsync://rpki.ripe.net/repository: The RIPE NCC Certification Repository is subject to Terms and Conditions
rsync://rpki.ripe.net/repository: See http://www.ripe.net/lir-services/ncc/legal/certification/repository-tc
rsync://rpki.ripe.net/repository:
rsyncing from rsync://rpkica.twnic.tw/rpki/.
rsyncing from rsync://rpki-repository.nic.ad.jp/ap/.
rsyncing from rsync://rpki.cnnic.cn/rpki/.
Summary:
afrinic: 338 valid ROAs, 459 VRPs.
lacnic: 2435 valid ROAs, 7042 VRPs.
apnic: 3186 valid ROAs, 21934 VRPs.
ripe: 10780 valid ROAs, 56907 VRPs.
arin: 4964 valid ROAs, 6621 VRPs.
ASN,IP Prefix,Max Length,Trust Anchor
AS43289,2a03:f80:373::/48,48,ripe
AS14464,131.109.128.0/17,17,arin
AS17806,114.130.5.0/24,24,apnic
AS59587,151.232.192.0/21,21,ripe
AS13335,172.68.30.0/24,24,arin
AS6147,190.40.0.0/14,24,lacnic
...
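The CSV list shown above lends itself to simple post-processing. As a quick sketch (the sample rows are copied from the output above; parse_vrps is a hypothetical helper, not part of Routinator), the records can be read with Python's csv module:

```python
import csv
import io

# Sample rows in Routinator's default CSV format, taken from the output above.
SAMPLE = """ASN,IP Prefix,Max Length,Trust Anchor
AS43289,2a03:f80:373::/48,48,ripe
AS14464,131.109.128.0/17,17,arin
AS17806,114.130.5.0/24,24,apnic
"""

def parse_vrps(text):
    """Parse Routinator's CSV VRP output into a list of dicts."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        {
            "asn": row["ASN"],
            "prefix": row["IP Prefix"],
            "max_length": int(row["Max Length"]),
            "ta": row["Trust Anchor"],
        }
        for row in reader
    ]

vrps = parse_vrps(SAMPLE)
print(len(vrps))        # 3
print(vrps[0]["asn"])   # AS43289
```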

Running Interactively

Routinator can perform RPKI validation as a one-time operation and print a Validated ROA Payload (VRP) list in various formats, or it can return the validity of a specific announcement. These functions are accessible on the command line via the following sub-commands:

vrps
Fetches RPKI data and produces a Validated ROA Payload (VRP) list in the specified format.
validate
Outputs the RPKI validity for a specific announcement by supplying Routinator with an ASN and a prefix.

Printing a List of VRPs

Routinator can produce a Validated ROA Payload (VRP) list in nine different formats, which are either printed to standard output or saved to a file:

csv
The list is formatted as lines of comma-separated values of the prefix in slash notation, the maximum prefix length, the autonomous system number, and an abbreviation for the trust anchor the entry is derived from. The latter is the name of the TAL file without the extension .tal. This is the default format used if the --format or -f option is missing.
csvcompat
The same as csv except that all fields are embedded in double quotes and the autonomous system number is given without the prefix AS. This format is pretty much identical to the CSV produced by the RIPE NCC Validator.
csvext
This is an extended version of the csv format, which was used by the RIPE NCC RPKI Validator 1.x. Each line contains these comma-separated values: the rsync URI of the ROA the line is taken from (or "N/A" if it isn't from a ROA), the autonomous system number, the prefix in slash notation, the maximum prefix length, and lastly the not-before and not-after date of the validity of the ROA.
json
The list is placed into a JSON object with a single element roas which contains an array of objects with four elements each: The autonomous system number of the network authorised to originate a prefix in asn, the prefix in slash notation in prefix, the maximum prefix length of the announced route in maxLength, and the trust anchor from which the authorisation was derived in ta. This format is identical to that produced by the RIPE NCC Validator except for different naming of the trust anchor. Routinator uses the name of the TAL file without the extension .tal whereas the RIPE NCC Validator has a dedicated name for each.
openbgpd
Choosing this format causes Routinator to produce a roa-set configuration item for the OpenBGPD configuration.
bird
Choosing this format causes Routinator to produce a roa table configuration item for the BIRD configuration.
bird2
Choosing this format causes Routinator to produce a route table configuration item for the BIRD2 configuration.
rpsl
This format produces a list of RPSL objects with the authorisation in the fields route, origin, and source. In addition, the fields descr, mnt-by, created, and last-modified, are present with more or less meaningful values.
summary
This format produces a summary of the content of the RPKI repository. For each trust anchor, it will print the number of verified ROAs and VRPs. Note that this format does not take filters into account. It will always provide numbers for the complete repository.

For example, to get the validated ROA payloads in CSV format, run:

routinator vrps --format csv
ASN,IP Prefix,Max Length,Trust Anchor
AS55803,103.14.64.0/23,23,apnic
AS267868,45.176.192.0/24,24,lacnic
AS41152,82.115.18.0/23,23,ripe
AS28920,185.103.228.0/22,22,ripe
AS11845,209.203.0.0/18,24,afrinic
AS63297,23.179.0.0/24,24,arin
...

To generate a file with the validated ROA payloads in JSON format, run:

routinator vrps --format json --output authorisedroutes.json

Filtering

In case you are looking for specific information in the output, Routinator allows filtering to see if a prefix or ASN is covered or matched by a VRP. You can do this using the --filter-asn and --filter-prefix options.

When using --filter-asn, you can use both AS64511 and 64511 as the notation. With --filter-prefix, the result will include VRPs regardless of their ASN and MaxLength. Both filter flags can be combined and used multiple times in a single query and will be treated as a logical "or".

A validation run will be started before returning the result, making sure you get the latest information. If you would like a result from the current cache, you can use the --noupdate or -n option.

Here are some examples filtering for an ASN and prefix in CSV and JSON format:

routinator vrps --format csv --filter-asn 196615
ASN,IP Prefix,Max Length,Trust Anchor
AS196615,2001:7fb:fd03::/48,48,ripe
AS196615,93.175.147.0/24,24,ripe
routinator vrps --format json --filter-prefix 93.175.146.0/24
{
  "roas": [
    { "asn": "AS12654", "prefix": "93.175.146.0/24", "maxLength": 24, "ta": "ripe" }
  ]
}
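The "covers" relation used by --filter-prefix (a VRP is included when its prefix is equal to or less specific than the queried prefix) can be illustrated with Python's ipaddress module. This is only a sketch of the matching rule, not Routinator's implementation:

```python
import ipaddress

def vrp_covers(vrp_prefix: str, query_prefix: str) -> bool:
    """True if the VRP prefix is equal to or less specific than the query."""
    vrp = ipaddress.ip_network(vrp_prefix)
    query = ipaddress.ip_network(query_prefix)
    return query.version == vrp.version and query.subnet_of(vrp)

# 93.175.146.0/24 is covered by itself ...
print(vrp_covers("93.175.146.0/24", "93.175.146.0/24"))  # True
# ... and by the less specific 93.175.144.0/21 ...
print(vrp_covers("93.175.144.0/21", "93.175.146.0/24"))  # True
# ... but not by an unrelated prefix.
print(vrp_covers("192.0.2.0/24", "93.175.146.0/24"))     # False
```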

Validity Checker

You can check the RPKI origin validation status of a specific BGP announcement using the validate subcommand and by supplying the ASN and prefix. A validation run will be started before returning the result, making sure you get the latest information. If you would like a result from the current cache, you can use the --noupdate or -n option.

routinator validate --asn 12654 --prefix 93.175.147.0/24
Invalid

A detailed analysis of the reasoning behind the validation outcome is printed in JSON format. In case of an Invalid state, it is stated whether this is because the announcement is originated by an unauthorised AS, or because the prefix is more specific than the maximum prefix length allows. Lastly, a complete list of VRPs that caused the result is included.

routinator validate --json --asn 12654 --prefix 93.175.147.0/24
{
  "validated_route": {
   "route": {
     "origin_asn": "AS12654",
     "prefix": "93.175.147.0/24"
   },
   "validity": {
     "state": "Invalid",
     "reason": "as",
     "description": "At least one VRP Covers the Route Prefix, but no VRP ASN matches the route origin ASN",
     "VRPs": {
      "matched": [
      ],
      "unmatched_as": [
        {
         "asn": "AS196615",
         "prefix": "93.175.147.0/24",
         "max_length": "24"
        }

      ],
      "unmatched_length": [
      ]      }
   }
  }
}
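The reasoning behind this output follows the route origin validation procedure of RFC 6811: an announcement is Invalid if at least one VRP covers the prefix but none matches both the origin ASN and the maximum length, NotFound if no VRP covers the prefix at all, and Valid otherwise. As an illustrative sketch (not Routinator's own code), using the VRP from the example above:

```python
import ipaddress

def rov_state(origin_asn, prefix, vrps):
    """Classify an announcement against (asn, prefix, max_length) VRPs
    following the RFC 6811 decision procedure. A sketch, not Routinator's code."""
    route = ipaddress.ip_network(prefix)
    covered = []
    for asn, vrp_prefix, max_length in vrps:
        vrp_net = ipaddress.ip_network(vrp_prefix)
        if route.version == vrp_net.version and route.subnet_of(vrp_net):
            covered.append((asn, max_length))
    if not covered:
        return "NotFound"
    if any(asn == origin_asn and route.prefixlen <= max_length
           for asn, max_length in covered):
        return "Valid"
    return "Invalid"

# The VRP from the example above covers 93.175.147.0/24 but lists AS196615,
# so an announcement of that prefix by AS12654 is Invalid.
vrps = [("AS196615", "93.175.147.0/24", 24)]
print(rov_state("AS12654", "93.175.147.0/24", vrps))   # Invalid
print(rov_state("AS196615", "93.175.147.0/24", vrps))  # Valid
print(rov_state("AS12654", "198.51.100.0/24", vrps))   # NotFound
```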

If you run the HTTP service in daemon mode, this information is also available at the /validity endpoint.

Running as a Daemon

Routinator can run as a service that periodically fetches RPKI data, verifies it and makes the resulting data set available via the RPKI-RTR protocol and through the built-in HTTP server. You can start the Routinator service using the server sub-command.

The HTTP Service

The CSV, JSON, OpenBGPD and RPSL formats that Routinator can produce in interactive mode are available via HTTP if the application is running as a service. You can also check the RPKI origin validation status of a specific BGP announcement at the /validity endpoint by supplying the ASN and prefix.

The HTTP server is not enabled by default for security reasons, nor does it have a default host or port. In order to start the HTTP server at 192.0.2.13 and 2001:0DB8::13 on port 8323, run this command:

routinator server --http 192.0.2.13:8323 --http [2001:0DB8::13]:8323

The application will stay attached to your terminal unless you provide the --detach option. After fetching and validating the data set, the following paths are available:

/csv
Returns the current set of VRPs in csv output format.
/csvext
Returns the current set of VRPs in csvext output format.
/json
Returns the current set of VRPs in json output format.
/openbgpd
Returns the current set of VRPs in OpenBGPD output format.
/bird
Returns the current set of VRPs in bird output format.
/bird2
Returns the current set of VRPs in bird2 output format.
/rpsl
Returns the current set of VRPs in RPSL output format.
/validity
Returns the RPKI origin validation status of a specific BGP announcement by supplying the ASN and prefix in the path, e.g. /validity?asn=12654&prefix=93.175.147.0/24

Please note that this server is intended to run on your internal network and doesn't offer HTTPS natively. If this is a requirement, you can, for example, run Routinator behind an NGINX reverse proxy.
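As an illustration of the reverse proxy approach (the server name, certificate paths and upstream address are assumptions based on the examples in this section, not recommendations), an NGINX configuration could look like this:

```nginx
server {
    listen 443 ssl;
    server_name rpki.example.com;

    ssl_certificate     /etc/letsencrypt/live/rpki.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rpki.example.com/privkey.pem;

    location / {
        # Proxy to the Routinator HTTP server started with
        # `routinator server --http 192.0.2.13:8323`
        proxy_pass http://192.0.2.13:8323;
    }
}
```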

Lastly, the HTTP server provides paths that allow you to monitor Routinator itself and the data it processes, so it may be desirable to have HTTP running alongside the RTR server. For more information, please refer to the Monitoring section.

The RTR Service

Routinator supports RPKI-RTR as specified in RFC 8210 as well as the older version described in RFC 6810.

When launched as an RTR server, routers with support for route origin validation (ROV) can connect to Routinator to fetch the processed data. This includes hardware routers such as Juniper, Cisco and Nokia, as well as software solutions like BIRD, GoBGP and others. The processed data is also available in a number of useful output formats, such as CSV, JSON, RPSL and a format specifically for OpenBGPD.

Like the HTTP server, the RTR server is not started by default, nor does it have a default host or port. Thus, in order to start the RTR server at 192.0.2.13 and 2001:0DB8::13 on port 3323, run Routinator using the server command:

routinator server --rtr 192.0.2.13:3323 --rtr [2001:0DB8::13]:3323

Please note that port 3323 is not the IANA-assigned default port for the protocol, which would be 323. But as this is a privileged port, you would need to run Routinator as root, which is otherwise unnecessary. The application will stay attached to your terminal unless you provide the --detach option.

By default, the repository will be updated and re-validated every 10 minutes. You can change this via the --refresh option, which specifies the interval between re-validations in seconds. That is, if you would rather have Routinator validate every 15 minutes, the above command becomes:

routinator server --rtr 192.0.2.13:3323 --rtr [2001:0DB8::13]:3323 --refresh=900

Communication between Routinator and the router using the RPKI-RTR protocol is done via plain TCP. Below is an explanation of how to secure the transport using either SSH or TLS.

Secure Transports

These instructions were contributed by wk on Github.

RFC 6810#section-7 defines a number of secure transports for RPKI-RTR that can be used to secure communication between a router and an RPKI relying party.

However, the RPKI Router Implementation Report documented in RFC 7128#section-5 suggests these secure transports have not been widely implemented. Implementations, however, do exist, and a secure transport could be valuable in situations where the RPKI relying party is provided as a public service, or across a non-trusted network.

SSH Transport

SSH transport for RPKI-RTR can be configured with the help of netcat and OpenSSH.

  1. Begin by installing the openssh-server and netcat packages.
  2. Make sure Routinator is running as an RTR server on localhost:

routinator server --rtr 127.0.0.1:3323

  3. Create a username and a password for the router to log into the host with, such as rpki.
  4. Configure OpenSSH to expose an rpki-rtr subsystem that acts as a proxy into Routinator by editing the /etc/ssh/sshd_config file or equivalent to include the following lines:

# Define an `rpki-rtr` subsystem which is actually `netcat` used to
# proxy STDIN/STDOUT to a running `routinator server --rtr 127.0.0.1:3323`
Subsystem       rpki-rtr        /bin/nc 127.0.0.1 3323

# Certain routers may use old KEX algos and Ciphers which are no longer enabled by default.
# These examples are required in IOS-XR 5.3 but no longer enabled by default in OpenSSH 7.3
Ciphers +3des-cbc
KexAlgorithms +diffie-hellman-group1-sha1

  5. Restart the OpenSSH server daemon.
  6. Set up the router running IOS-XR using this example configuration:

router bgp 65534
 rpki server 192.168.0.100
  username rpki
  password rpki
  transport ssh port 22

TLS Transport

TLS transport for RPKI-RTR can be configured with the help of stunnel.

  1. Begin by installing the stunnel package.
  2. Make sure Routinator is running as an RTR server on localhost:

routinator server --rtr 127.0.0.1:3323

  3. Acquire (via for example Let's Encrypt) or generate an SSL certificate. In the example below, an SSL certificate for the domain example.com generated by Let's Encrypt is used.
  4. Create an stunnel configuration file by editing /etc/stunnel/rpki.conf or equivalent:

[rpki]
; Use a letsencrypt certificate for example.com
cert = /etc/letsencrypt/live/example.com/fullchain.pem
key = /etc/letsencrypt/live/example.com/privkey.pem

; Listen for TLS rpki-rtr on port 323 and proxy to port 3323 on localhost
accept = 323
connect = 127.0.0.1:3323

  5. Restart stunnel to complete the process.

Configuration

Routinator has a number of default settings, such as the location where files are stored, the refresh interval and the log level. You can view these settings by running:

routinator config

It will return the list of defaults in the same notation that is used by the optional configuration file, which will be largely similar to this:

allow-dubious-hosts = false
dirty = false
disable-rrdp = false
disable-rsync = false
exceptions = []
expire = 7200
history-size = 10
http-listen = []
log = "default"
log-level = "WARN"
refresh = 600
repository-dir = "/Users/routinator/.rpki-cache/repository"
retry = 600
rrdp-proxies = []
rrdp-root-certs = []
rsync-command = "rsync"
rsync-timeout = 300
rtr-listen = []
stale = "warn"
strict = false
syslog-facility = "daemon"
systemd-listen = false
tal-dir = "/Users/routinator/.rpki-cache/tals"
validation-threads = 4

You can override these defaults, as well as configure a great number of additional options using either command line arguments or via the configuration file.

To get an overview of all available options, please refer to the configuration file section of the Manual Page, which can be also viewed by running routinator man.

Using a Configuration File

Routinator can take its configuration from a file. You can specify such a config file via the -c option. If you don’t, Routinator will check if there is a $HOME/.routinator.conf and if it exists, use it. If it doesn’t exist and there is no -c option, the default values are used.

For specifying configuration options, Routinator uses a TOML file. Its entries are named similarly to the command line options. A complete sample configuration file showing all the default values can be found in the repository at etc/routinator.conf.example.

For example, if you want Routinator to refresh every 15 minutes and run as an RTR server on 192.0.2.13 and 2001:0DB8::13 on port 3323, in addition to providing an HTTP server on port 9556, simply take the output from routinator config and change the refresh, rtr-listen and http-listen values in your favourite text editor:

allow-dubious-hosts = false
dirty = false
disable-rrdp = false
disable-rsync = false
exceptions = []
expire = 7200
history-size = 10
http-listen = ["192.0.2.13:9556", "[2001:0DB8::13]:9556"]
log = "default"
log-level = "WARN"
refresh = 900
repository-dir = "/Users/routinator/.rpki-cache/repository"
retry = 600
rrdp-proxies = []
rrdp-root-certs = []
rsync-command = "rsync"
rsync-timeout = 300
rtr-listen = ["192.0.2.13:3323", "[2001:0DB8::13]:3323"]
stale = "warn"
strict = false
syslog-facility = "daemon"
systemd-listen = false
tal-dir = "/Users/routinator/.rpki-cache/tals"
validation-threads = 4

After saving this file as .routinator.conf in your home directory, you can start Routinator with:

routinator server

Applying Local Exceptions

In some cases, you may want to override the global RPKI data set with your own local exceptions. For example, when a legitimate route announcement is inadvertently flagged as invalid due to a misconfigured ROA, you may want to temporarily accept it to give the operators an opportunity to resolve the issue.

You can do this by specifying route origins that should be filtered out of the output, as well as origins that should be added, in a file using JSON notation according to the SLURM standard specified in RFC 8416.

A full example file is provided below. This file, along with an empty one, is available in the repository at /test/slurm.

{
  "slurmVersion": 1,
  "validationOutputFilters": {
   "prefixFilters": [
     {
      "prefix": "192.0.2.0/24",
      "comment": "All VRPs encompassed by prefix"
     },
     {
      "asn": 64496,
      "comment": "All VRPs matching ASN"
     },
     {
      "prefix": "198.51.100.0/24",
      "asn": 64497,
      "comment": "All VRPs encompassed by prefix, matching ASN"
     }
   ],
   "bgpsecFilters": [
     {
      "asn": 64496,
      "comment": "All keys for ASN"
     },
     {
      "SKI": "Zm9v",
      "comment": "Key matching Router SKI"
     },
     {
      "asn": 64497,
      "SKI": "YmFy",
      "comment": "Key for ASN 64497 matching Router SKI"
     }
   ]
  },
  "locallyAddedAssertions": {
   "prefixAssertions": [
     {
      "asn": 64496,
      "prefix": "198.51.100.0/24",
      "comment": "My other important route"
     },
     {
      "asn": 64496,
      "prefix": "2001:DB8::/32",
      "maxPrefixLength": 48,
      "comment": "My other important de-aggregated routes"
     }
   ],
   "bgpsecAssertions": [
     {
      "asn": 64496,
      "comment" : "My known key for my important ASN",
      "SKI": "<some base64 SKI>",
      "routerPublicKey": "<some base64 public key>"
     }
   ]
  }
}

Use the -x option to refer to your file with local exceptions. Routinator will re-read that file on every validation run, so you can simply update the file whenever your exceptions change.
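To illustrate what applying such a file does, the following sketch implements the RFC 8416 semantics for prefix filters and assertions on a small VRP list (this is an illustration, not Routinator's implementation; the apply_slurm helper and the sample data are hypothetical):

```python
import ipaddress

def apply_slurm(vrps, slurm):
    """Apply SLURM (RFC 8416) prefix filters and local assertions to a VRP list.
    Each VRP is a dict with 'asn', 'prefix' and 'maxLength' keys."""
    filters = slurm.get("validationOutputFilters", {}).get("prefixFilters", [])
    assertions = slurm.get("locallyAddedAssertions", {}).get("prefixAssertions", [])

    def dropped(vrp):
        net = ipaddress.ip_network(vrp["prefix"])
        for f in filters:
            if "asn" in f and f["asn"] != vrp["asn"]:
                continue  # an ASN is given but doesn't match this VRP
            if "prefix" in f:
                fnet = ipaddress.ip_network(f["prefix"])
                # the VRP must be encompassed by the filter prefix
                if net.version != fnet.version or not net.subnet_of(fnet):
                    continue
            return True
        return False

    kept = [v for v in vrps if not dropped(v)]
    for a in assertions:
        kept.append({
            "asn": a["asn"],
            "prefix": a["prefix"],
            # maxPrefixLength is optional; default to the prefix's own length
            "maxLength": a.get("maxPrefixLength",
                               ipaddress.ip_network(a["prefix"]).prefixlen),
        })
    return kept

slurm = {
    "validationOutputFilters": {
        "prefixFilters": [{"prefix": "192.0.2.0/24", "comment": "drop these"}]
    },
    "locallyAddedAssertions": {
        "prefixAssertions": [{"asn": 64496, "prefix": "198.51.100.0/24",
                              "comment": "my other important route"}]
    },
}
vrps = [
    {"asn": 64511, "prefix": "192.0.2.0/25", "maxLength": 25},    # filtered out
    {"asn": 64499, "prefix": "203.0.113.0/24", "maxLength": 24},  # kept
]
result = apply_slurm(vrps, slurm)
print(len(result))  # 2: the surviving VRP plus the asserted one
```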

Monitoring

The HTTP server in Routinator provides endpoints for monitoring the application. This means it may be a good idea to run the HTTP server alongside the RTR server.

To launch Routinator in server mode on 192.0.2.13 with RTR running on port 3323 and HTTP on 9556, use the following command:

routinator server --rtr 192.0.2.13:3323 --http 192.0.2.13:9556

The HTTP service has three monitoring endpoints on the following paths:

/version
Returns the version of the Routinator instance.
/metrics
Exposes data in a format specifically for Prometheus, for which the dedicated port 9556 is reserved.
/status
Returns the information from the /metrics endpoint in a more concise format.
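For example, a minimal Prometheus scrape configuration for this endpoint could look like the following (the target address matches the server command above and is an assumption about your setup):

```yaml
scrape_configs:
  - job_name: routinator
    static_configs:
      - targets: ['192.0.2.13:9556']
    # /metrics is Prometheus's default metrics_path, shown here for clarity
    metrics_path: /metrics
```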

Metrics

Update metrics
  • When the last update started and finished
  • The total duration of the last update
  • The retrieval duration and exit code for each rsync publication point
  • The retrieval duration and HTTP status code for each RRDP publication point
Object metrics
  • The number of valid ROAs per Trust Anchor
  • The number of Validated ROA Payloads (VRPs) per Trust Anchor
  • The number of stale objects found
RTR server
  • The current RTR serial number
  • The current and total number of RTR connections
  • The total amount of bytes sent and received over the RTR connection
HTTP server
  • The current and total number of HTTP connections
  • The total amount of bytes sent and received over the HTTP connection
  • The number of HTTP requests

Grafana

Using the Prometheus endpoint it's possible to build a detailed dashboard using, for example, Grafana. We provide a template to get started.

A sample Grafana dashboard

Manual Page

routinator - RPKI relying party software

Date: 2020-05-06
Author: Martin Hoffmann
Copyright: 2019-2020 - NLnet Labs
Version: 0.7.0

Synopsis

routinator [options] init [init-options]

routinator [options] vrps [vrps-options] [-o output-file] [-f format]

routinator [options] validate [validate-options] [-a asn] [-p prefix]

routinator [options] server [server-options]

routinator [options] update [update-options]

routinator man [-o file]

routinator -h

routinator -V

Description

Routinator collects and processes Resource Public Key Infrastructure (RPKI) data. It validates the Route Origin Attestations contained in the data and makes them available to your BGP routing workflow.

It can either run in one-shot mode, outputting a list of validated route origins in various formats, or as a server that makes the data available via the RPKI-to-Router (RTR) protocol, which routers often implement to access the data, as well as via HTTP.

These modes and additional operations can be chosen via commands. For the available commands, see Commands below.

Options

The available options are:

-c path, --config=path

Provides the path to a file containing basic configuration. If this option is not given, Routinator will try to use $HOME/.routinator.conf if that exists. If that doesn't exist, either, default values for the options as described here are used.

See Configuration File below for more information on the format and contents of the configuration file.

-b dir, --base-dir=dir

Specifies the base directory to keep status information in. Unless overwritten by the -r or -t options, the local repository will be kept in the sub-directory repository and the TALs will be kept in the sub-directory tals.

If omitted, the base directory defaults to $HOME/.rpki-cache.

-r dir, --repository-dir=dir

Specifies the directory to keep the local repository in. This is the place where Routinator stores the RPKI data it has collected and thus is a copy of all the data referenced via the trust anchors.

-t dir, --tal-dir=dir

Specifies the directory containing the trust anchor locators (TALs) to use. Trust anchor locators are the starting points for collecting and validating RPKI data. See Trust Anchor Locators for more information on what should be present in this directory.

-x file, --exceptions=file

Provides the path to a local exceptions file. The option can be used multiple times to specify more than one file to use. Each file is a JSON file as described in RFC 8416. It lists both route origins that should be filtered out of the output as well as origins that should be added.

--strict

If this option is present, the repository will be validated in strict mode following the requirements laid out by the standard documents very closely. With the current RPKI repository, using this option will lead to a rather large amount of invalid route origins and should therefore not be used in practice.

See Relaxed Validation below for more information.

--stale=policy

This option defines how to deal with stale objects. In RPKI, manifests and CRLs can be stale if the time given in their next-update field is in the past, indicating that an update to the object was scheduled but didn't happen. This can be because of an operational issue at the issuer or an attacker trying to replay old objects.

There are three possible policies that define how Routinator should treat stale objects.

A policy of reject instructs Routinator to consider all stale objects invalid. This will result in all material published by the CA issuing this manifest and CRL being considered invalid, including all material of any child CA.

The warn policy will allow Routinator to consider any stale object to be valid. It will, however, print a warning in the log allowing an operator to follow up on the issue. This is the default policy if the option is not provided.

Finally, the accept policy will cause Routinator to quietly accept any stale object as valid.

--allow-dubious-hosts

As a precaution, Routinator will reject rsync and HTTPS URIs from RPKI data with dubious host names. In particular, it will reject the name localhost, host names that consist of IP addresses, and host names that contain an explicit port.

This option disables this filtering.

--disable-rsync

If this option is present, rsync is disabled and only RRDP will be used.

--rsync-command=command

Provides the command to run for rsync. This is only the command itself. If you need to provide options to rsync, use the rsync-args configuration file setting instead.

If this option is not given, Routinator will simply run rsync and hope that it is in the path.

--rsync-timeout=seconds

Sets the number of seconds an rsync command is allowed to run before it is terminated early. This protects against hanging rsync commands that prevent Routinator from continuing. The default is 300 seconds which should be long enough except for very slow networks.

--disable-rrdp

If this option is present, RRDP is disabled and only rsync will be used.

--rrdp-timeout=seconds

Sets the timeout in seconds for any RRDP-related network operation, i.e., connects, reads, and writes. If this option is omitted, the default timeout of 30 seconds is used. Set the option to 0 to disable the timeout.

--rrdp-connect-timeout=seconds

Sets the timeout in seconds for RRDP connect requests. If omitted, the general timeout will be used.

--rrdp-local-addr=addr

If present, sets the local address that the RRDP client should bind to when doing outgoing requests.

--rrdp-root-cert=path

This option provides a path to a file that contains a certificate in PEM encoding that should be used as a trusted certificate for HTTPS server authentication. The option can be given more than once.

Providing this option does not disable the set of regular HTTPS authentication trust certificates.

--rrdp-proxy=uri

This option provides the URI of a proxy to use for all HTTP connections made by the RRDP client. It can be either an HTTP or a SOCKS URI. The option can be given multiple times in which case proxies are tried in the given order.

--dirty

If this option is present, unused files and directories will not be deleted from the repository directory after each validation run.

--validation-threads=count

Sets the number of threads to distribute work to for validation. Note that the current processing model validates trust anchors all in one go, so you are likely to see less than that number of threads used throughout the validation run.

-v, --verbose

Print more information. If given twice, even more information is printed.

More specifically, a single -v increases the log level from the default of warn to info, specifying it more than once increases it to debug.

-q, --quiet

Print less information. Given twice, print nothing at all.

A single -q will drop the log level to error. Repeating -q more than once turns logging off completely.

--syslog

Redirect logging output to syslog.

This option is implied if a command is used that causes Routinator to run in daemon mode.

--syslog-facility=facility

If logging to syslog is used, this option can be used to specify the syslog facility to use. The default is daemon.

--logfile=path

Redirect logging output to the given file.

-h, --help

Print some help information.

-V, --version

Print version information.

Commands

Routinator provides a number of operations around the local RPKI repository. These can be requested by providing different commands on the command line.

init

Prepares the local repository directories and the TAL directory for running Routinator. Specifically, makes sure the local repository directory exists, and creates the TAL directory and fills it with the TALs of the five RIRs.

For more information about TALs, see Trust Anchor Locators below.

-f, --force

Forces installation of the TALs even if the TAL directory already exists.

--accept-arin-rpa

Before you can use the ARIN TAL, you need to agree to the ARIN Relying Party Agreement (RPA). You can find it at https://www.arin.net/resources/manage/rpki/rpa.pdf and explicitly agree to it via this option. This explicit agreement is necessary in order to install the ARIN TAL.

--decline-arin-rpa

If, after reading the ARIN Relying Party Agreement, you decide you do not or cannot agree to it, this option allows you to skip installation of the ARIN TAL. Note that this means Routinator will not have access to any information published for resources assigned under ARIN.

vrps

This command requests that Routinator update the local repository and then validate the Route Origin Attestations in the repository and output the valid route origins, which are also known as Validated ROA Payloads or VRPs, as a list.

-o file, --output=file

Specifies the output file to write the list to. If this option is missing or file is - the list is printed to standard output.

-f format, --format=format

The output format to use. Routinator currently supports the following formats:

csv

The list is formatted as lines of comma-separated values of the prefix in slash notation, the maximum prefix length, the autonomous system number, and an abbreviation for the trust anchor the entry is derived from. The latter is the name of the TAL file without the extension .tal.

This is the default format used if the -f option is missing.

csvcompat
The same as csv except that all fields are embedded in double quotes and the autonomous system number is given without the prefix AS. This format is pretty much identical to the CSV produced by the RIPE NCC Validator.
csvext

An extended version of csv. Each line contains these comma-separated values: the rsync URI of the ROA the line is taken from (or "N/A" if it isn't from a ROA), the autonomous system number, the prefix in slash notation, the maximum prefix length, the not-before date and not-after date of the validity of the ROA.

This format was used in the RIPE NCC RPKI Validator version 1. That version produces one file per trust anchor. This is not currently supported by Routinator -- all entries will be in one single output file.

json
The list is placed into a JSON object with a single element roas which contains an array of objects with four elements each: The autonomous system number of the network authorized to originate a prefix in asn, the prefix in slash notation in prefix, the maximum prefix length of the announced route in maxLength, and the trust anchor from which the authorization was derived in ta. This format is identical to that produced by the RIPE NCC RPKI Validator except for different naming of the trust anchor. Routinator uses the name of the TAL file without the extension .tal whereas the RIPE NCC Validator has a dedicated name for each.
openbgpd
Choosing this format causes Routinator to produce a roa-set configuration item for the OpenBGPD configuration.
bird
Choosing this format causes Routinator to produce a roa table configuration item for the BIRD configuration.
bird2
Choosing this format causes Routinator to produce a route table configuration item for the BIRD2 configuration.
rpsl
This format produces a list of RPSL objects with the authorization in the fields route, origin, and source. In addition, the fields descr, mnt-by, created, and last-modified are present with more or less meaningful values.
summary
This format produces a summary of the content of the RPKI repository. For each trust anchor, it will print the number of verified ROAs and VRPs. Note that this format does not take filters into account. It will always provide numbers for the complete repository.
none
This format produces no output whatsoever.
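The field layouts of the csv and json formats can be illustrated with a short sketch. The sample values below are made up; only the field layout follows the descriptions above:

```python
import json

# Illustrative only: one line of the csv format described above
# (prefix, max length, ASN, trust anchor); the values are made up.
csv_line = "192.0.2.0/24,24,AS64511,ripe"
prefix, max_length, asn, ta = csv_line.split(",")

# Illustrative fragment of the json format: a single "roas" array whose
# objects carry the elements asn, prefix, maxLength, and ta.
json_doc = '{"roas": [{"asn": "AS64511", "prefix": "192.0.2.0/24", "maxLength": 24, "ta": "ripe"}]}'
vrps = json.loads(json_doc)["roas"]
```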
-n, --noupdate

The repository will not be updated before producing the list.

--complete

If any of the rsync commands needed to update the repository failed, Routinator completes the operation and exits with status code 2. Normally, it would exit with status code 0 indicating success.

-a asn, --filter-asn=asn

Only output VRPs for the given ASN. The option can be given multiple times, in which case VRPs for all provided ASNs are provided. ASNs can be given with or without the prefix AS.

-p prefix, --filter-prefix=prefix

Only output VRPs with an address prefix that covers the given prefix, i.e., whose prefix is equal to or less specific than the given prefix. This will include VRPs regardless of their ASN and max length. In other words, the output will include all VRPs that need to be considered when deciding whether an announcement for the prefix is RPKI valid or invalid.

The option can be given multiple times, in which case VRPs for all prefixes are provided. It can also be combined with one or more ASN filters. Then all matching VRPs are included. That is, filters combine as "or" not "and."
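The covering relation used by --filter-prefix can be sketched with Python's standard ipaddress module. This is an illustration of the semantics described above, not Routinator's implementation:

```python
import ipaddress

def covers(vrp_prefix: str, query_prefix: str) -> bool:
    """True if vrp_prefix is equal to or less specific than query_prefix."""
    vrp = ipaddress.ip_network(vrp_prefix)
    query = ipaddress.ip_network(query_prefix)
    # Prefixes of different address families never cover each other.
    return vrp.version == query.version and query.subnet_of(vrp)

# A VRP for 192.0.2.0/24 must be considered when validating any
# announcement inside that prefix, regardless of its ASN or max length.
```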

validate

This command can be used to perform RPKI route origin validation for a route announcement. Routinator will determine whether the provided announcement is RPKI valid, invalid, or not found.

-a asn, --asn=asn

The AS number of the autonomous system that originated the route announcement. ASNs can be given with or without the prefix AS.

-p prefix, --prefix=prefix

The address prefix the route announcement is for.

-j, --json

A detailed analysis of the reasoning behind the validation is printed in JSON format, including lists of the VRPs that caused the particular result. If this option is omitted, Routinator will only print the determined state.

-n, --noupdate

The repository will not be updated before performing validation.

--complete

If any of the rsync commands needed to update the repository failed, Routinator completes the operation and exits with status code 2. Normally, it would exit with status code 0 indicating success.
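The decision the validate command makes can be sketched as follows. This is a simplified illustration of route origin validation semantics (RFC 6811), not Routinator's actual code; the Vrp type and sample values are made up:

```python
import ipaddress
from typing import List, NamedTuple

class Vrp(NamedTuple):
    asn: int
    prefix: str
    max_length: int

def origin_validation(asn: int, prefix: str, vrps: List[Vrp]) -> str:
    """Return 'valid', 'invalid', or 'not found' for an announcement."""
    route = ipaddress.ip_network(prefix)
    # VRPs whose prefix covers the announced route.
    covering = [
        v for v in vrps
        if ipaddress.ip_network(v.prefix).version == route.version
        and route.subnet_of(ipaddress.ip_network(v.prefix))
    ]
    if not covering:
        return "not found"
    # Valid if any covering VRP matches the origin AS and the announced
    # prefix length does not exceed the VRP's max length.
    if any(v.asn == asn and route.prefixlen <= v.max_length for v in covering):
        return "valid"
    return "invalid"
```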

server

This command causes Routinator to act as a server for the RPKI-to-Router (RTR) and HTTP protocols. In this mode, Routinator will read all the TALs (See Trust Anchor Locators below) and will stay attached to the terminal unless the -d option is given.

The server will periodically update the local repository, every ten minutes by default, notify any clients of changes, and let them fetch validated data. It will not, however, reread the trust anchor locators. Thus, if you update them, you will have to restart Routinator.

You can provide a number of addresses and ports to listen on for RTR and HTTP through command line options or their configuration file equivalent. Currently, Routinator will only start listening on these ports after an initial validation run has finished.

It will not listen on any sockets unless explicitly specified. It will still run and periodically update the repository. This can be useful in combination with the vrps command and its -n option.

-d, --detach

If present, Routinator will detach from the terminal after a successful start.

--rtr=addr:port

Specifies a local address and port to listen on for incoming RTR connections.

Routinator supports both protocol version 0 defined in RFC 6810 and version 1 defined in RFC 8210. However, it does not support router keys introduced in version 1. IPv6 addresses must be enclosed in square brackets. You can provide the option multiple times to let Routinator listen on multiple address-port pairs.

--http=addr:port

Specifies the address and port to listen on for incoming HTTP connections. See HTTP Service below for more information on the HTTP service provided by Routinator.

--listen-systemd

The RTR listening socket will be acquired from systemd via socket activation. Use this option together with systemd's socket units to allow a Routinator running as a regular user to bind to the default RTR port 323.

Currently, all TCP listener sockets handed over by systemd will be used for the RTR protocol.

--refresh=seconds

The number of seconds the server should wait after having finished updating and validating the local repository before starting the next update. The next update will happen earlier if objects in the repository expire earlier. The default value is 600 seconds.

--retry=seconds

The number of seconds to suggest to an RTR client to wait before trying to request data again after a failed attempt. The default value is 600 seconds, as recommended in RFC 8210.

--expire=seconds

The number of seconds an RTR client can keep using data if it cannot refresh it. After that time, the client should discard the data. Note that this value was introduced in version 1 of the RTR protocol and is thus not relevant for clients that only implement version 0. The default value, as recommended in RFC 8210, is 7200 seconds.

--history=count

In RTR, a client can request to only receive the changes that happened since the last version of the data it had seen. This option sets how many change sets the server will at most keep. If a client requests changes from an older version, it will get the current full set.

Note that routers typically stay connected to their RTR server and therefore really only ever need a single change set. Additionally, if the RTR server or the router is restarted, they will start a new session with new change sets and need to exchange a full data set, too. Thus, increasing the value probably only increases memory consumption.

The default value is 10.

--pid-file=path

States a file which will be used in daemon mode to store the process's PID. While the process is running, it will keep the file locked.

--working-dir=path

The working directory for the daemon process. In daemon mode, Routinator will change to this directory while detaching from the terminal.

--chroot=path

The root directory for the daemon process. If this option is provided, the daemon process will change its root directory to the given directory. This will only work if all other paths provided via the configuration or command line options are under this directory.

--user=user-name

The name of the user to change to for the daemon process. If this option is provided, Routinator will run as that user after the listening sockets for HTTP and RTR have been created. The option has no effect unless --detach is also used.

--group=group-name

The name of the group to change to for the daemon process. If this option is provided, Routinator will run as that group after the listening sockets for HTTP and RTR have been created. The option has no effect unless --detach is also used.

update

Updates the local repository by resyncing all known publication points. The command will also validate the updated repository to discover any new publication points that appear in the repository and fetch their data.

As such, the command really is a shortcut for running routinator vrps -f none.

--complete

If any of the rsync commands needed to update the repository failed, Routinator completes the operation and exits with status code 2. Normally, it would exit with status code 0 indicating success.

man

Displays the manual page, i.e., this page.

-o file, --output=file

If this option is provided, the manual page will be written to the given file instead of displaying it. Use - to output the manual page to standard output.

Trust Anchor Locators

RPKI uses trust anchor locators, or TALs, to identify the location and public keys of the trusted root CA certificates. Routinator keeps these TALs in files in the TAL directory which can be set by the -t option. If the -b option is used instead, the TAL directory will be in the subdirectory tals under the directory specified in this option. The default location, if no options are used at all, is $HOME/.rpki-cache/tals.

This directory can be created and populated with the TALs of the five Regional Internet Registries (RIRs) via the init command.

If the directory does exist, Routinator will use all files with an extension of .tal in this directory. This means that you can add and remove trust anchors by adding and removing files in this directory. If you add files, make sure they are in the format described by RFC 7730 or the upcoming RFC 8630.

Configuration File

Instead of providing all options on the command line, they can also be provided through a configuration file. Such a file can be selected through the -c option. If no configuration file is specified this way but a file named $HOME/.routinator.conf is present, this file is used.

The configuration file is a file in TOML format. In short, it consists of a sequence of key-value pairs, each on its own line. Strings are to be enclosed in double quotes. Lists can be given by enclosing a comma-separated list of values in square brackets.

The configuration file can contain the following entries. All path values are interpreted relative to the directory the configuration file is located in. All values can be overridden via the command line options.
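A minimal configuration file might look like the following sketch. All paths and addresses are illustrative; adjust them for your system:

```toml
# Illustrative Routinator configuration file; paths are made up.
repository-dir = "/var/lib/routinator/repository"
tal-dir = "/var/lib/routinator/tals"
rtr-listen = ["127.0.0.1:3323"]
http-listen = ["127.0.0.1:8323"]
log-level = "warn"
refresh = 600
```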

repository-dir
A string containing the path to the directory to store the local repository in. This entry is mandatory.
tal-dir
A string containing the path to the directory that contains the Trust Anchor Locators. This entry is mandatory.
exceptions
A list of strings, each containing the path to a file with local exceptions. If missing, no local exception files are used.
strict
A boolean specifying whether strict validation should be employed. If missing, strict validation will not be used.
stale

A string specifying the policy for dealing with stale objects.

reject
Consider all stale objects invalid, rendering all material published by the CA that issued the stale object invalid, including all material of any child CAs.
warn
Consider stale objects to be valid but print a warning to the log.
accept
Quietly consider stale objects valid.
allow-dubious-hosts
A boolean value that, if present and true, disables Routinator's filtering of dubious host names in rsync and HTTPS URIs from RPKI data.
disable-rsync
A boolean value that, if present and true, turns off the use of rsync.
rsync-command
A string specifying the command to use for running rsync. The default is simply rsync.
rsync-args

A list of strings containing the arguments to be passed to the rsync command. Each string is an argument of its own.

If this option is not provided, Routinator will try to find out if your rsync understands the --contimeout option and, if so, will set it to 10 thus letting connection attempts time out after ten seconds. If your rsync is too old to support this option, no arguments are used.

rsync-timeout
An integer value specifying the number of seconds an rsync command is allowed to run before it is terminated. The default if the value is missing is 300 seconds.
disable-rrdp
A boolean value that, if present and true, turns off the use of RRDP.
rrdp-timeout
An integer value that provides a timeout in seconds for all individual RRDP-related network operations, i.e., connects, reads, and writes. If the value is missing, a default timeout of 30 seconds will be used. Set the value to 0 to turn the timeout off.
rrdp-connect-timeout
An integer value that, if present, sets a separate timeout in seconds for RRDP connect requests only.
rrdp-local-addr
A string value that provides the local address to be used by RRDP connections.
rrdp-root-certs
A list of strings each providing a path to a file containing a trust anchor certificate for HTTPS authentication of RRDP connections. In addition to the certificates provided via this option, the system's own trust store is used.
rrdp-proxies
A list of strings, each providing the URI of a proxy for outgoing RRDP connections. The proxies are tried in order for each request. HTTP and SOCKS5 proxies are supported.
dirty
A boolean value which, if true, specifies that unused files and directories should not be deleted from the repository directory after each validation run. If left out, its value will be false and unused files will be deleted.
validation-threads
An integer value specifying the number of threads to be used during validation of the repository. If this value is missing, the number of CPUs in the system is used.
log-level
A string value specifying the maximum log level for which log messages should be emitted. The default is warn.
log

A string specifying where to send log messages to. This can be one of the following values:

default
Log messages will be sent to standard error if Routinator stays attached to the terminal or to syslog if it runs in daemon mode.
stderr
Log messages will be sent to standard error.
syslog
Log messages will be sent to syslog.
file
Log messages will be sent to the file specified through the log-file configuration file entry.

The default if this value is missing is, unsurprisingly, default.

log-file
A string value containing the path to a file to which log messages will be appended if the log configuration value is set to file. In this case, the value is mandatory.
syslog-facility
A string value specifying the syslog facility to use for logging to syslog. The default value if this entry is missing is daemon.
rtr-listen
An array of string values each providing the address and port which the RTR daemon should listen on in TCP mode. Address and port should be separated by a colon. IPv6 addresses should be enclosed in square brackets.
http-listen
An array of string values each providing the address and port which the HTTP service should listen on. Address and port should be separated by a colon. IPv6 addresses should be enclosed in square brackets.
listen-systemd
The RTR TCP listening socket will be acquired from systemd via socket activation. Use this option together with systemd's socket units to allow Routinator running as a regular user to bind to the default RTR port 323.
refresh
An integer value specifying the number of seconds Routinator should wait between consecutive validation runs in server mode. The next validation run will happen earlier, if objects expire earlier. The default is 600 seconds.
retry
An integer value specifying the number of seconds an RTR client is requested to wait after it failed to receive a data set. The default is 600 seconds.
expire
An integer value specifying the number of seconds an RTR client is requested to keep using a data set if it cannot get an update, before throwing it away and continuing with no data at all. The default is 7200 seconds.
history-size
An integer value specifying how many change sets Routinator should keep in RTR server mode. The default is 10.
pid-file
A string value containing a path pointing to the PID file to be used in daemon mode.
working-dir
A string value containing a path to the working directory for the daemon process.
chroot
A string value containing the path any daemon process should use as its root directory.
user
A string value containing the user name a daemon process should run as.
group
A string value containing the group name a daemon process should run as.
tal-label

An array containing arrays of two string values, mapping the name of a TAL file (without the path but including the extension), given by the first string, to the name to use for that TAL wherever it is referenced in output, given by the second string.

If the option is missing, or a TAL isn't mentioned in the option, Routinator will construct a name for the TAL by taking its file name (without the path) and dropping the extension.

HTTP Service

Routinator can provide an HTTP service that allows fetching the Validated ROA Payload in various formats. The service does not support HTTPS and should only be used within the local network.

The service only supports GET requests with the following paths:

/metrics
Returns a set of monitoring metrics in the format used by Prometheus.
/status
Returns the current status of the Routinator instance. This is similar to the output of the /metrics endpoint but in a more human friendly format.
/version
Returns the version of the Routinator instance.
/api/v1/validity/as-number/prefix
Returns a JSON object describing whether the route announcement given by its origin AS number and address prefix is RPKI valid, invalid, or not found. The returned object is compatible with that provided by the RIPE NCC RPKI Validator. For more information, see https://ripe.net/support/documentation/developer-documentation/rpki-validator-api
/validity?asn=as-number&prefix=prefix
Same as above but with a more form-friendly calling convention.

In addition, the current set of VRPs is available for each output format at a path with the same name as the output format. E.g., the CSV output is available at /csv.

These paths accept filter expressions to limit the VRPs returned in the form of a query string. The field filter-asn can be used to filter for ASNs and the field filter-prefix can be used to filter for prefixes. The fields can be repeated multiple times.

This works in the same way as the options of the same name to the vrps command.
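Building such a filtered request URL can be sketched with Python's standard library. The listening address below is hypothetical; note that repeated fields combine as "or":

```python
from urllib.parse import urlencode

# Hypothetical listening address; the /json path serves the json
# output format as described above.
base = "http://localhost:8323/json"

# Repeated filter fields combine as "or", just like the equivalent
# vrps command line options.
params = [
    ("filter-asn", "AS64511"),
    ("filter-prefix", "192.0.2.0/24"),
    ("filter-prefix", "2001:db8::/32"),
]
url = base + "?" + urlencode(params)
```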

Relaxed Validation

The documents defining RPKI include a number of very strict rules regarding the formatting of the objects published in the RPKI repository. However, because RPKI reuses existing technology, real-world applications produce objects that do not follow these strict requirements.

As a consequence, a significant portion of the RPKI repository is actually invalid if the rules are followed. We therefore introduce two validation modes: strict and relaxed. Strict mode rejects any object that does not pass all checks laid out by the relevant RFCs. Relaxed mode ignores a number of these checks.

This memo documents the violations we encountered and are dealing with in relaxed validation mode.

Resource Certificates (RFC 6487)

Resource certificates are defined as a profile on the more general Internet PKI certificates defined in RFC 5280.

Subject and Issuer

The RFC restricts the type used for CommonName attributes to PrintableString, allowing only a subset of ASCII characters, while RFC 5280 allows a number of additional string types. At least one CA produces resource certificates with Utf8Strings.

In relaxed mode, we will only check that the general structure of the issuer and subject fields are correct and allow any number and types of attributes. This seems justified since RPKI explicitly does not use these fields.
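For illustration, the character set permitted by PrintableString (per X.680) is small enough to check directly; this sketch shows why a subject containing non-ASCII characters fails the strict check:

```python
# Character set permitted by the PrintableString type (X.680): upper-
# and lowercase letters, digits, space, and a few punctuation marks.
PRINTABLE_STRING_CHARS = set(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz"
    "0123456789"
    " '()+,-./:=?"
)

def is_printable_string(value: str) -> bool:
    """True if every character is allowed in a PrintableString."""
    return all(ch in PRINTABLE_STRING_CHARS for ch in value)
```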

Signed Objects (RFC 6488)

Signed objects are defined as a profile on CMS messages defined in RFC 5652.

DER Encoding

RFC 6488 demands all signed objects to be DER encoded while the more general CMS format allows any BER encoding -- DER is a stricter subset of the more general BER. At least one CA does indeed produce BER encoded signed objects.

In relaxed mode, we will allow BER encoding.

Note that this isn't just nit-picking. In BER encoding, octet strings can be broken up into a sequence of sub-strings. Since those strings are in some places used to carry encoded content themselves, such an encoding does make parsing significantly more difficult. At least one CA does produce such broken-up strings.

Signals

SIGUSR1: Reload TALs and restart validation
When receiving SIGUSR1, Routinator will attempt to reload the TALs and, if that succeeds, restart validation. If loading the TALs fails, Routinator will exit.

Exit Status

Upon success, the exit status 0 is returned. If any fatal error happens, the exit status will be 1. Some commands provide a --complete option which will cause the exit status to be 2 if any of the rsync commands to update the repository fail.

RTRlib

This is the user handbook of the RTRlib. It provides guidance on how to use the library for development and gives an overview of some command line tools that are based on RTRlib. Further information can be found on the RTRlib website [1] and its source code repository on Github [2].

About

In a Nutshell

RTRlib is a C library that implements the client side of the RPKI-RTR protocol as well as route origin validation. Basically, it maintains data from an RPKI cache server (e.g., Routinator) and allows verifying whether an autonomous system (AS) is the legitimate origin AS, based on the fetched valid ROA data. It is prepared for BGPsec path validation.

RTRlib powers RPKI in BGP software routers such as FRR and is the base for monitoring tools. A Python binding is available. The basic RTRlib package includes the library and lightweight command line tools.

Why do I need the RTRlib?

RTRlib gives easy and highly efficient access to cryptographically valid RPKI data without relying on a specific cache server or RPKI validator implementation. To prevent a single point of failure, it handles failover between multiple cache servers.

Not only developers of routing software but also network operators benefit from RTRlib. Developers can integrate the RTRlib into their BGP daemon to extend their implementation towards RPKI. Network operators may use the RTRlib to develop monitoring tools (e.g., to evaluate the performance of caches or to validate BGP data).

License

This software is free, open source and licensed under MIT.

Supported RFCs

The current version implements RFC 6811 and RFC 8210.

Community

If you run into a problem with RTRlib or you have a feature request, please create an issue on Github. We are also happy to accept your pull requests. For general discussion and exchanging operational experiences we provide a mailing list. More details about RTRlib are available on the project website.

Installation

Most Linux distributions as well as Apple macOS support RTRlib. The RTRlib software package includes the library and basic ready-to-use command line tools that show some of the RTRlib features.

Apple macOS

For macOS we provide a Homebrew tap to easily install the RTRlib. First, set up Homebrew [1] and then install the RTRlib package:

brew tap rtrlib/pils
brew install rtrlib

Footnotes

[1]Homebrew -- http://brew.sh

Archlinux

For Archlinux we maintain two PKGBUILDs in the Archlinux User Repository, rtrlib [2] and rtrlib-git [3]. rtrlib includes the latest official RTRlib release, rtrlib-git includes the current git master.

You can either use your favourite AUR helper or execute the following commands:

sudo pacman -S --needed base-devel

# for the latest release
wget https://aur.archlinux.org/cgit/aur.git/snapshot/rtrlib.tar.gz
tar xf rtrlib.tar.gz
cd rtrlib

# for the git version
wget https://aur.archlinux.org/cgit/aur.git/snapshot/rtrlib-git.tar.gz
tar xf rtrlib-git.tar.gz
cd rtrlib-git

# for both
makepkg -sci

Footnotes

[2]https://aur.archlinux.org/packages/rtrlib/
[3]https://aur.archlinux.org/packages/rtrlib-git/

Debian

RTRlib is part of the official Debian package repository since Buster [4] and can be installed using apt. The following packages are available:

librtr0: includes the base library.
librtr-dev: includes header files etc. for developers.
rtr-tools: includes basic command line tools based on RTRlib.
librtr0-dbgsym: includes debugging symbols.
librtr-doc: includes offline documentation.

To install the minimal set of packages required for development, execute the following command:

apt install librtr0 librtr-dev

If you just want to use the RTRlib command line tools, run

apt install librtr0 rtr-tools

Footnotes

[4]Buster is currently in testing and scheduled for release mid-2019.

Gentoo

The FRR routing project maintains a Gentoo overlay [5] that contains an ebuild for the RTRlib. First, set up layman [6], then install rtrlib with the following commands:

# If this does not work, try layman -f
layman -a frr-gentoo
emerge rtrlib

Footnotes

[5]https://github.com/FRRouting/gentoo-overlay
[6]https://wiki.gentoo.org/wiki/Layman

From Source

The source code repository of RTRlib includes everything that you need to implement or run applications based on the RTRlib, and to use the RTRlib command line tools.

The RTRlib source code consists of the following subdirectories:

  • cmake/ CMake modules
  • doxygen/ Example code and graphics used in the Doxygen documentation
  • rtrlib/ Header and source code files of the RTRlib
  • tests/ Function tests and unit tests
  • tools/ Contains rtrclient and rpki-rov
Getting Started

To build and install the RTRlib from source, you need the following common software:

cmake version >= 2.6: to build the system.
libssh version >= 0.5.0: to establish SSH transport connections (optional but highly recommended).

Additional optional requirements are:

cmocka: to run RTRlib unit tests
doxygen: to build the RTRlib API documentation
Building

The easiest way to get the source code is to download either the latest RTRlib release from https://github.com/rtrlib/rtrlib/releases/latest or the current master from https://github.com/rtrlib/rtrlib/archive/master.zip, and then unpack:

unzip rtrlib-master.zip
cd rtrlib-master
# or alternatively, clone the current git master
git clone https://github.com/rtrlib/rtrlib/
cd rtrlib

Then, build the library and command line tools using cmake. We recommend an out-of-source build:

# inside the main RTRlib source code directory
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release ../
make
sudo make install

To enable debug symbols and messages, change the cmake command to:

cmake -D CMAKE_BUILD_TYPE=Debug ../

If the build command fails with any error, please consult the RTRlib README [7] and Wiki [8]; you may also join our mailing list [9] or open an issue on Github [10].

Additional cmake Options and Targets

If you did not install libssh in the default directories, you can run cmake with the following parameters:

-D LIBSSH_LIBRARY=<path-to-libssh.so>
-D LIBSSH_INCLUDE=<include-directory>

To explicitly configure the directory where the RTRlib will be placed during installation, you can pass the following argument to cmake:

-D CMAKE_INSTALL_PREFIX=<path>

For developers, we provide pre-built API documentation online [11], which documents the API of the latest release. Alternatively, if Doxygen is available on your system, you can build the documentation locally as follows:

make doc

To execute the built-in tests provided by the RTRlib package, run:

make test

Footnotes

[7]README -- https://github.com/rtrlib/rtrlib/blob/master/README
[8]Wiki -- https://github.com/rtrlib/rtrlib/wiki
[9]Mailing list -- https://groups.google.com/forum/#!forum/rtrlib
[10]Issue tracker -- https://github.com/rtrlib/rtrlib/issues
[11]API reference -- https://rtrlib.realmv6.org/doxygen/latest

RTRlib Command Line Tools

The RTRlib software package includes two lightweight command line tools to showcase some of the RTRlib features. rtrclient connects to an RPKI cache server, fetches and maintains the valid ROA payloads, and prints the received data. rpki-rov allows verifying whether an autonomous system is the legitimate origin AS of an IP prefix, based on RPKI data.

If you want to use these command line tools, you need an RPKI-RTR connection to an RPKI cache server (e.g., Routinator). For those who do not have access to a cache server, we provide a public cache with hostname rpki-validator.realmv6.org and port 8282.

RTRlib RTR Client

rtrclient is part of the default RTRlib software package. This command line tool connects to an RPKI cache server and prints the received valid ROA payloads to standard out.

To establish a connection to RPKI cache servers, the client can use TCP or SSH transport sockets. To run the program you have to specify the transport protocol as well as the hostname and port of the RPKI cache server; additionally, you can set several options. For a complete reference of all options, simply run rtrclient in a shell.

Listing 1 shows how to connect the rtrclient to a cache server as well as 10 lines of the resulting output. It shows IPv4 and IPv6 prefixes secured by ROAs, the allowed prefix lengths, and the legitimate origin AS numbers. Each line represents a ROA that was either added (+) to or removed (-) from the selected RPKI cache server. The RTRlib client will receive and print such updates until the program is terminated, e.g., by pressing Ctrl+C.

Output of the rtrclient tool.
rtrclient tcp -k -p rpki-validator.realmv6.org 8282
Prefix                                     Prefix Length         ASN
+ 89.185.224.0                                19 -  19        24971
+ 180.234.81.0                                24 -  24        45951
+ 37.32.128.0                                 17 -  17       197121
+ 161.234.0.0                                 16 -  24         6306
+ 85.187.243.0                                24 -  24        29694
+ 2a02:5d8::                                  32 -  32         8596
+ 2a03:2260::                                 30 -  30       201701
+ 2001:13c7:6f08::                            48 -  48        27814
+ 2a07:7cc3::                                 32 -  32        61232
+ 2a05:b480:fc00::                            48 -  48        39126

RTRlib ROV Validator

rpki-rov is also part of the RTRlib software package. This simple command line interface allows you to verify whether an autonomous system is allowed to announce a specific IP prefix, based on data received from an RPKI cache server.

To run the program, you must provide two parameters, hostname and port of a known RPKI cache server. Then, you can interactively validate IP prefixes by typing prefix, prefix length, and origin ASN separated by spaces. Press ENTER to run the validation. The result will be shown instantly below the input.

Note

rpki-rov can validate IPv4 and IPv6 prefixes by default.

Listing 2 shows the validation results of all RPKI-enabled RIPE RIS beacons. The output consists of three columns, which are separated by pipes (|):

<input query> | <ROAs> | <validation result>.

The validation results are 0 for valid, 1 for not found, and 2 for invalid.

In case of a valid and invalid prefix-AS pair, the output shows the matching ROAs for the given prefix and AS number. If multiple ROAs for a prefix exist, they are listed in a row separated by commas (,).

Output of rpki-rov showing validation results of multiple prefixes.
rpki-rov rpki-validator.realmv6.org 8282
93.175.146.0 24 12654
93.175.146.0 24 12654|12654 93.175.146.0 24 24|0
2001:7fb:fd02:: 48 12654
2001:7fb:fd02:: 48 12654|12654 2001:7fb:fd02:: 48 48|0
93.175.147.0 24 12654
93.175.147.0 24 12654|196615 93.175.147.0 24 24|2
2001:7fb:fd03:: 48 12654
2001:7fb:fd03:: 48 12654|196615 2001:7fb:fd03:: 48 48|2
84.205.83.0 24 12654
84.205.83.0 24 12654||1
2001:7fb:ff03:: 48 12654
2001:7fb:ff03:: 48 12654||1
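Parsing these pipe-separated columns is straightforward; a minimal sketch using lines from the listing above:

```python
# Map the numeric validation result codes to their meanings.
STATES = {"0": "valid", "1": "not found", "2": "invalid"}

def parse_rov_line(line: str):
    """Split one rpki-rov output line into query, ROAs, and state."""
    query, roas, code = line.split("|")
    return query, roas, STATES[code]

query, roas, state = parse_rov_line(
    "93.175.146.0 24 12654|12654 93.175.146.0 24 24|0"
)
```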

Footnotes

[1]Project website -- https://rtrlib.realmv6.org
[2]Source code on Github -- https://github.com/rtrlib/rtrlib

RIPE NCC RPKI Validator 3.1

A fully-featured RPKI relying party software, written by the RIPE NCC in Java. This application allows operators to download and validate the global RPKI data set for use in their BGP decision making process and router configuration.

The project consists of two separate deployable units called the RPKI Validator and RPKI-RTR Server.

Installation

RIPE NCC provides a total of four installation options:

CentOS

We have set up a repository with CentOS 7 RPMs for Prod builds. You can add the repository to your system as follows:

sudo yum-config-manager --add-repo https://ftp.ripe.net/tools/rpki/validator3/prod/centos7/ripencc-rpki-prod.repo

Note that yum-config-manager is provided by 'yum-utils', so you might have to install that first:

sudo yum install yum-utils

Install the RPKI Validator:

sudo yum install rpki-validator

Install the RPKI-RTR Server:

sudo yum install rpki-rtr-server

Run and enable the services:

sudo systemctl enable rpki-validator-3
sudo systemctl start rpki-validator-3
sudo systemctl enable rpki-rtr-server
sudo systemctl start rpki-rtr-server

To monitor the logs:

sudo journalctl -f -u rpki-validator-3
sudo journalctl -f -u rpki-rtr-server

The RPKI Validator 3.1 will be running on http://localhost:8080/

The RPKI-RTR Server will be running on http://localhost:8081/

You can also explore the API at http://localhost:8080/swagger-ui.html

Debian

The Debian packages for the RPKI Validator and RPKI-RTR Server can be found at: https://ftp.ripe.net/ripe/tools/rpki/validator3/prod/deb/

Download the suitable package and proceed with the installation:

Install the RPKI Validator:

sudo apt install ./rpki-validator-3-latest.deb

Install the RPKI-RTR Server:

sudo apt install ./rpki-rtr-server-latest.deb

Run and enable the services:

sudo systemctl enable rpki-validator-3
sudo systemctl start rpki-validator-3
sudo systemctl enable rpki-rtr-server
sudo systemctl start rpki-rtr-server

To monitor the logs:

sudo journalctl -f -u rpki-validator-3
sudo journalctl -f -u rpki-rtr-server

The RPKI Validator 3.1 will be running on http://localhost:8080/

The RPKI-RTR Server will be running on http://localhost:8081/

You can also explore the API at http://localhost:8080/swagger-ui.html

Generic build

You can find generic production builds at: https://ftp.ripe.net/tools/rpki/validator3/prod/generic/ Download the suitable package and unpack it.

To run the RPKI Validator generic build:

./rpki-validator-3.sh

To run the RPKI-RTR generic build:

./rpki-rtr-server.sh

The RPKI Validator 3.1 will be running on http://localhost:8080/

The RPKI-RTR Server will be running on http://localhost:8081/

You can also explore the API at http://localhost:8080/swagger-ui.html

Docker

To run the CentOS/RPM based image with systemd:

docker pull ripencc/rpki-validator-3-docker:latest
docker run --privileged --name rpkival -p 8080:8080 -d ripencc/rpki-validator-3-docker:latest

To run the generic Alpine based image:

docker pull ripencc/rpki-validator-3-docker:alpine
docker run --name validator-3-alpine -p 8080:8080 -d ripencc/rpki-validator-3-docker:alpine

The RPKI Validator 3.1 will be running on: http://localhost:8080/

More info can be found at https://hub.docker.com/r/ripencc/rpki-validator-3-docker

Extra TALs

By default, the Validator will have Trust Anchor Locators (TALs) installed for AFRINIC, APNIC, LACNIC, RIPE NCC, but not ARIN.

You can download the ARIN TAL at https://www.arin.net/resources/manage/rpki/tal/

Any of the formats will work, but the "RIPE NCC RPKI Validator format" will ensure that the TAL will have a friendly name like "ARIN".

You can use the following script to upload it:

./upload-tal.sh arin-ripevalidator.tal http://localhost:8080/

The script should be in the root folder if you unpacked the generic build, or in /usr/bin if you installed it using RPM/Debian package.

Alternatively, you can put extra TAL files into the preconfigured-tals directory of the RPKI Validator installation. This directory is scanned at startup, and all parseable TALs are picked up for validation. For the RPM/Debian package installation the directory is /var/lib/rpki-validator-3/preconfigured-tals/.
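As a concrete sketch of that alternative (the directory is the RPM/Debian path given above; the TAL filename follows the earlier upload example, and the TAL_DIR variable is just for illustration):

```shell
# Stage an extra TAL in the directory that is scanned at startup.
# TAL_DIR defaults to the RPM/Debian layout; override it if you
# unpacked a generic build elsewhere.
TAL_DIR="${TAL_DIR:-/var/lib/rpki-validator-3/preconfigured-tals}"
cp arin-ripevalidator.tal "$TAL_DIR/"
# Then restart the service so the directory is rescanned, e.g. on
# an RPM/Debian install: sudo systemctl restart rpki-validator-3
```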

RPKI Validator

The RPKI Validator is set up to run as a daemon and has the following features:

  • Supports all current RPKI objects: certificates, manifests, CRLs, ROAs, router certificates, and ghostbuster records
  • Supports the RRDP delta protocol
  • Supports caching RPKI data in case a repository is unavailable
  • Uses an asynchronous strategy to retrieve (often delegated) repositories, so that unavailable repositories do not block validation
  • Features an API
  • Has a full UI
  • Supports exceptions through local filters and assertions

RPKI-RTR Server

A separate daemon that implements the RPKI to Router protocol (RPKI-RTR), allowing validated prefix origin data to be delivered to routers. The RPKI-RTR Server is set up as a separate daemon because not everyone needs to run it. More importantly, a separate daemon allows you to start multiple instances for redundancy.

For more information, check the release notes. You can also contribute to the project on GitHub.

System Requirements

You will need a UNIX-like system with OpenJDK 8 or higher and rsync. You will also need at least 1.5GB of RAM available on your server (2GB in total if you also run the RPKI-RTR server). One (virtual) CPU should be enough. The repository objects are stored in a file-based database, rather than in memory, for which we recommend at least 10GB of available disk space.

Index