diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..ed021d207e99e62eeb964c50447b549c8824bab6 --- /dev/null +++ b/.gitignore @@ -0,0 +1,46 @@ +.idea +.git +# Byte-compiled / optimized / DLL files +__pycache__/ +./libraries/__pycache__/*.py[cod] +*.py[cod] +*$py.class +*.DS_Store +*.key +*.csr +*.pem +*.crt +*.zip +*.srl +services/nginx/certs/sign_req_body.json +services/easy_rsa/certs/pki +services/easy_rsa/certs/*EasyRSA* +services/easy_rsa/certs/*.profile +services/easy_rsa/certs/*.csr + + +monitoring/grafana/*grafana_db* +monitoring/prometheus/*prometheus_db +monitoring/prometheus/*.rules +monitoring/tempo/*tempo-data + + + + + + + +!docs/testing_with_curl/exposer.key +!docs/testing_with_curl/invoker.key + + +services/docker-elk/elasticsearch/_state +services/docker-elk/elasticsearch/snapshot_cache +services/docker-elk/elasticsearch/indices +services/docker-elk/elasticsearch/node.lock +services/docker-elk/elasticsearch/nodes + +results + +helm/capif/*.lock +helm/capif/charts \ No newline at end of file diff --git a/CITATION.cff b/CITATION.cff new file mode 100644 index 0000000000000000000000000000000000000000..ef7f8ff7765f6d33008f868f696db3f7e15b25aa --- /dev/null +++ b/CITATION.cff @@ -0,0 +1,9 @@ +cff-version: 2.1.0 +message: "If you use this software, please cite it as below." +authors: +- family-names: "EVOLVED-5g" + given-names: "EVOLVED-5g" +title: "CAPIF_API_Services" +version: 2.1 +date-released: 2022-01-30 +url: "https://github.com/EVOLVED-5G/CAPIF_API_Services" diff --git a/FAQ.md b/FAQ.md new file mode 100644 index 0000000000000000000000000000000000000000..14a88ff4fbd25be37448551307c9f4773f9587b9 --- /dev/null +++ b/FAQ.md @@ -0,0 +1,88 @@ +[**[Return To Main]**] + +# FAQ + +### Does the user have to develop the 3 elements of the provider (AEF, AMF and APF)? +No, you only have to make the request to the "/onboarding" endpoint. In it you must specify a CSR for the AEF, APF and AMF, and you will receive the certificates for each of them in the response. + +### There is one party that publishes the API and another that exposes it, what is the difference? +They are different services: the APF is intended for publishing the APIs, and the AEF is intended to expose them so that the invoker can call them. The APF is what connects to the CAPIF Core Function to publish the service, and once the service is up, you need the AEF service so that invokers can connect to it. + + +### Before publishing an API, do you have to be registered in CAPIF? +Yes, before publishing an API you must register using the POST /register endpoint. + + +### Where is the registration done? +Registration is done through a REST API that we have implemented outside of the CAPIF specification. + + +### Is the username and password chosen by the user when registering or is it assigned when requesting registration to CAPIF public instance? +When you make the request to the "/register" endpoint, you will be returned a username and a password determined by CAPIF. + + +### What is a CSR? +A CSR is a Certificate Signing Request. It is a block of encoded data generated on the server where the certificate will be installed; it contains key information such as the public key, organization, and location, and is used to request a certificate from a certificate authority (CA). In CAPIF, 3 CSRs are necessary to register a provider: one each for the AEF, APF and AMF. + + +### When doing the register_provider where can I find the CSRs that are generated?
+When using the "register_provider" command, if you add the "debug" option, it shows you a json with the data used to register the provider. There we can find in the body a list of 3 elements corresponding to AEF, APF and AMF. IN each of them, the apiProbPubKey field corresponds to the CSR. + + +### How to use the example client (CAPIF_INVOKER_GUI)? +First you have to make a "./run.sh host:port" indicating the address of the public CAPIF. Once the Docker containers are up, you have to do a "./terminal_to_py_netapp.sh" and then a "python main.py". At this point we will find ourselves in a console with some predefined commands to use the Client. If we press tab twice it will bring up the list of available commands. + + +### Where is the CAPIF public instance located? +The CAPIF public instance can be found at the following URLs: +- capif.mobilesandbox.cloud:37211 (HTTPS) +- capif.mobilesandbox.cloud:37212 (HTTP) + + +### Do you have to publish 3 APIs? one for each instance? +No, you only have to publish a single API but each component is responsible for a specific service, whether publishing or exposing. + + +### Once the API is published, is it always active? Or do you have to republish it every time you want to use it? +It is better to unsubscribe the API every time you exit the application since otherwise it could be republished and it would be double. + + +### Would the same username and password be valid for different invokers? +Yes, a user can have multiple invokers at the same time, and as such, the username and password would be the same. + + +### What is the notfication destination field in the register_invoker request? +This is the callback URL used to notify events. CAPIF has an Event service to subscribe to that notifies actions such as a subscription to an API, a change in the state of an API... + + +### Is the notification_destination a required field in the register_invoker +No, it is not mandatory, but if you do not enter it you will not receive any CAPIF events. For example, the APF may delete the API, you will not be notified that the API is no longer available. + + +### What is the purpose of the "discover_service" function in the invoker client? +The discover_service returns a json with all the services that exist exposed in CAPIF at that moment. + + +### What is the purpose of the "get_security_auth" function in the invoker client? +Sirve para pedir el token o para refrescarlo en caso de que haya caducado. You have to use that token to call the API from the invoker. + + +### What is the purpose of the "register_security_context" function in the invoker client? +To consume the API it is necessary to have a Security Context registered with the data and the authentication method. + + +### Is a user the same as an exposer? +No, a user registers in CAPIF and once done can have the role of invoker, provider or both. + + +### Where can I put my endpoint? +You have to set your endpoint when doing the "publish_service" functionality: + ``` + publish_service capif_ops/config_files/service_api_description_hello.json + ``` + +In the file "service_api_description_hello.json" you configure the service that is going to be exposed and by developing one to suit you, you expose your API. 
+ + + + [Return To Main]: ./README.md#faq-documentation \ No newline at end of file diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..94a9ed024d3859793618152ea559a168bbcbb5e2 --- /dev/null +++ b/LICENSE @@ -0,0 +1,674 @@ + GNU GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The GNU General Public License is a free, copyleft license for +software and other kinds of works. + + The licenses for most software and other practical works are designed +to take away your freedom to share and change the works. By contrast, +the GNU General Public License is intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. We, the Free Software Foundation, use the +GNU General Public License for most of our software; it applies also to +any other work released this way by its authors. You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + To protect your rights, we need to prevent others from denying you +these rights or asking you to surrender the rights. Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. + + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. + + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. 
+States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. To prevent this, the GPL assures that +patents cannot be used to render the program non-free. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. 
+ + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. 
+ + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. 
+ + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. 
+ + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. 
+ + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. 
If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). 
To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. 
+ + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. 
+ + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +state the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. + + + Copyright (C) + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . + +Also add information on how to contact you by electronic and paper mail. + + If the program does terminal interaction, make it output a short +notice like this when it starts in an interactive mode: + + Copyright (C) + This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. Of course, your program's commands +might be different; for a GUI interface, you would use an "about box". + + You should also get your employer (if you work as a programmer) or school, +if any, to sign a "copyright disclaimer" for the program, if necessary. +For more information on this, and how to apply and follow the GNU GPL, see +. + + The GNU General Public License does not permit incorporating your program +into proprietary programs. If your program is a subroutine library, you +may consider it more useful to permit linking proprietary applications with +the library. If this is what you want to do, use the GNU Lesser General +Public License instead of this License. But first, please read +. 
diff --git a/README.md b/README.md index c150cd3fb3767ff92119cc46c56ea2ff19629dbf..27dc1bdc8fe55f13d31bb386232b10c1a16805ef 100644 --- a/README.md +++ b/README.md @@ -1,92 +1,176 @@ -# capif +# Common API Framework (CAPIF) + +- [Common API Framework (CAPIF)](#common-api-framework-capif) +- [Repository structure](#repository-structure) +- [CAPIF\_API\_Services](#capif_api_services) + - [How to run CAPIF services in this Repository](#how-to-run-capif-services-in-this-repository) + - [Run All CAPIF Services locally with Docker images](#run-all-capif-services-locally-with-docker-images) + - [Run All CAPIF Services locally with Docker images and deploy monitoring stack](#run-all-capif-services-locally-with-docker-images-and-deploy-monitoring-stack) + - [Run each service using Docker](#run-each-service-using-docker) + - [Run each service using Python](#run-each-service-using-python) + - [How to test CAPIF APIs](#how-to-test-capif-apis) + - [Test Plan Documentation](#test-plan-documentation) + - [Robot Framework](#robot-framework) + - [Using Curl](#using-curl) + - [Using PostMan](#using-postman) +- [Important URLs](#important-urls) + - [Mongo DB Dashboard](#mongo-db-dashboard) +- [FAQ Documentation](#faq-documentation) +- [CAPIF Release 1.0](#capif-release-10) + + +# Repository structure +``` +CAPIF_API_Services +└───docs +│ └───test_plan +│ └───testing_with_postman +└───services +└───tests +└───tools + └───robot + └───open_api_script +``` +* **services**: Services developed following the CAPIF API specifications. Also, other complementary services (e.g., NGINX and JWTauth services for the authentication of API consuming entities). +* **tools**: Auxiliary tools. Robot Framework related code and OpenAPI scripts. +* **tests**: Tests developed using Robot Framework. +* **docs**: Documents related to the code in the repository. + * images: images used in the repository + * test_plan: test plan descriptions for each API service, referring to the tests that are executed with the Robot Framework. + * testing_with_postman: auxiliary JSON file needed for the Postman-based examples. -## Getting started +# CAPIF_API_Services +This repository contains the python-flask mock-up servers created with openapi-generator for the CAPIF APIs defined here: +[Open API Descriptions of 3GPP 5G APIs] -To make it easy for you to get started with GitLab, here's a list of recommended next steps. +## How to run CAPIF services in this Repository +CAPIF services are developed under the services/ folder. -Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)! +### Run All CAPIF Services locally with Docker images +To run the services using Docker and Docker Compose (version 2.10 or higher), you must ensure both tools are installed on your machine. Also, to simplify the process, there are 3 scripts to control the Docker images: deploy, check and cleanup.
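+As a quick sanity check before using these scripts (assuming the Docker Compose v2 CLI plugin is installed), you can verify the versions available on your machine:
+```
+# Print the Docker and Docker Compose versions; Compose should report v2.10 or higher
+docker --version
+docker compose version
+```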
-## Add your files +To run all CAPIF APIs locally using Docker and Docker Compose, you can execute: +``` +cd services/ -- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files -- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command: +./run.sh +``` +This will build and run all services using Docker images (including MongoDB and NGINX) locally and in the background, and will import ca.crt into NGINX. +By default, the deployed NGINX uses the **capifcore** hostname, but you can add a parameter when run.sh is executed to set a different hostname, for example: ``` -cd existing_repo -git remote add origin https://labs.etsi.org/rep/ocf/capif.git -git branch -M main -git push -uf origin main +./run.sh openshift.evolved-5g.eu ``` -## Integrate with your tools +If you want to check whether all CAPIF services are running properly on the local machine after executing run.sh, you can use: +``` +./check_services_are_running.sh +``` +This shell script will return 0 if all services are running properly. -- [ ] [Set up project integrations](https://labs.etsi.org/rep/ocf/capif/-/settings/integrations) +When you need to stop the CAPIF services, you can use the following bash script: +``` +./clean_capif_docker_services.sh +``` -## Collaborate with your team +or, alternatively, the following script: +``` +./stop.sh +``` -- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/) -- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html) -- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically) -- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/) -- [ ] [Automatically merge when pipeline succeeds](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html) +This shell script will remove and clean up all CAPIF services started previously with run.sh. -## Test and Deploy +### Run All CAPIF Services locally with Docker images and deploy monitoring stack +It is now possible to deploy a monitoring stack for CAPIF with Grafana, Prometheus, FluentBit, Loki, Cadvisor, Tempo and Opentelemetry. -Use the built-in continuous integration in GitLab. +To deploy CAPIF together with the monitoring stack, it is only necessary to execute the following:
-- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html) -- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing(SAST)](https://docs.gitlab.com/ee/user/application_security/sast/) -- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html) -- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/) -- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html) +``` -*** +./run.sh --m true +``` -# Editing this README +After the services have been built, the different panels can be consulted in Grafana at the URL: -When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thank you to [makeareadme.com](https://www.makeareadme.com/) for this template. +``` +http://0.0.0.0:3000 +``` -## Suggestions for a good README -Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information. +By default, the monitoring option is set to false. Once up, all data sources and dashboards are automatically provisioned. -## Name -Choose a self-explaining name for your project. +### Run each service using Docker + -## Description -Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors. +You can also run each service individually using Docker: +``` +cd <service_folder> +docker build -t capif_security . +docker run -p 8080:8080 capif_security +``` + +### Run each service using Python + +Run using Python: +``` +cd <service_folder> +pip3 install -r requirements.txt +python3 -m <service_name> +``` + +# How to test CAPIF APIs +The above APIs can be tested either with the "curl" command, with the Postman tool, or by running the developed tests with Robot Framework. +## Test Plan Documentation + +Complete documentation of the tests is here: [Test Plan Directory] +## Robot Framework -## Badges -On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge. +In order to ensure that modifications to the CAPIF services still provide the required functionality, the Robot Framework test suite must pass. -## Visuals -Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method. +The implemented test suite covers the requirements described in the test plan under the [Test Plan Directory] folder. + +Please go to the [Testing with Robot Framework] section. -## Installation -Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew.
However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people to using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection. +## Using Curl -## Usage -Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README. +Please go to the [Testing Using Curl] section. -## Support -Tell people where they can go to for help. It can be any combination of an issue tracker, a chat room, an email address, etc. +## Using PostMan +You can test the CAPIF flow using the Postman tool. To do this, we have created a collection with some examples of CAPIF requests with everything necessary to carry them out. + +For more information on how to test the APIs with Postman, follow this [Document](docs/testing_with_postman/README.md). +You can also find the [POSTMAN Collection](docs/testing_with_postman/CAPIF.postman_collection.json) here. + +# Important URLs + +## Mongo DB Dashboard +``` +http://0.0.0.0:8082/ (if accessed from localhost) + +or + +http://<host_ip>:8082/ (if accessed from another host) +``` +# FAQ Documentation -## Roadmap -If you have ideas for releases in the future, it is a good idea to list them in the README. -## Contributing -State if you are open to contributions and what your requirements are for accepting them. +Frequently asked questions can be found here: [FAQ Directory] -For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self. +# CAPIF Release 1.0 -You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser. +The APIs included in release 1.0 are: +- JWT Authentication APIs +- CAPIF Invoker Management API +- CAPIF Publish API +- CAPIF Discover API +- CAPIF Security API +- CAPIF Events API +- CAPIF Provider Management API -## Authors and acknowledgment -Show your appreciation to those who have contributed to the project. +A testing suite covering all services is also included. -## License -For open source projects, say how it is licensed. -## Project status -If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
+[Open API Descriptions of 3GPP 5G APIs]: https://forge.3gpp.org/rep/all/5G_APIs "Open API Descriptions of 3GPP 5G APIs" +[Test Plan Directory]: ./docs/test_plan/README.md "Test Plan Directory" +[Testing Using Curl]: ./docs/testing_with_curl/README.md "Testing Using Curl" +[Testing with Robot Framework]: ./docs/testing_with_robot/README.md "Testing with Robot Framework" +[FAQ Directory]: ./FAQ.md "FAQ directory" \ No newline at end of file diff --git a/cicd/exclusions b/cicd/exclusions new file mode 100644 index 0000000000000000000000000000000000000000..84a40a8a92a6c882240ee99fa3dbb01a54520e1a --- /dev/null +++ b/cicd/exclusions @@ -0,0 +1,5 @@ +../helm/capif/README.md +../helm/capif/values.yaml +../services/docker-compose-capif.yml +../docs/ +../monitoring/ \ No newline at end of file diff --git a/cicd/ruff.toml b/cicd/ruff.toml new file mode 100644 index 0000000000000000000000000000000000000000..bff59ea8f304e0f1172c8077fdc4a4d161dce2cb --- /dev/null +++ b/cicd/ruff.toml @@ -0,0 +1,3 @@ +line-length = 120 +target-version = "py39" +select = ["E", "W"] \ No newline at end of file diff --git a/docs/images/flows/01 - Register del AEF.png b/docs/images/flows/01 - Register del AEF.png new file mode 100644 index 0000000000000000000000000000000000000000..f1454c8bffe50c74247e2ddbd9e3ae4a4da77579 Binary files /dev/null and b/docs/images/flows/01 - Register del AEF.png differ diff --git a/docs/images/flows/02 - AEF API Provider registration.png b/docs/images/flows/02 - AEF API Provider registration.png new file mode 100644 index 0000000000000000000000000000000000000000..3b42e21185ae36c97f3c6504624757a80e282ef6 Binary files /dev/null and b/docs/images/flows/02 - AEF API Provider registration.png differ diff --git a/docs/images/flows/03 - AEF Publish.png b/docs/images/flows/03 - AEF Publish.png new file mode 100644 index 0000000000000000000000000000000000000000..dfe3a3e09ff3183fa93eab95e1df107a2dfa9638 Binary files /dev/null and b/docs/images/flows/03 - AEF Publish.png differ diff --git a/docs/images/flows/04 - Invoker Register.png b/docs/images/flows/04 - Invoker Register.png new file mode 100644 index 0000000000000000000000000000000000000000..571f2af8098ba390a31cf5676ba0cd180b55d983 Binary files /dev/null and b/docs/images/flows/04 - Invoker Register.png differ diff --git a/docs/images/flows/05 - Invoker Onboarding.png b/docs/images/flows/05 - Invoker Onboarding.png new file mode 100644 index 0000000000000000000000000000000000000000..9cd4b2d9fc99daa0f285a0f9e5cf77d9c51ea17c Binary files /dev/null and b/docs/images/flows/05 - Invoker Onboarding.png differ diff --git a/docs/images/flows/06 - Invoker Discover AEF.png b/docs/images/flows/06 - Invoker Discover AEF.png new file mode 100644 index 0000000000000000000000000000000000000000..20b2f04f94444833bbea04832620434593f89bc0 Binary files /dev/null and b/docs/images/flows/06 - Invoker Discover AEF.png differ diff --git a/docs/images/flows/07 - Invoker Create Security Context.png b/docs/images/flows/07 - Invoker Create Security Context.png new file mode 100644 index 0000000000000000000000000000000000000000..bb655e0e38e8394b3aaa024ada0f13c2f21c2b4c Binary files /dev/null and b/docs/images/flows/07 - Invoker Create Security Context.png differ diff --git a/docs/images/flows/08 - Invoker Get Token.png b/docs/images/flows/08 - Invoker Get Token.png new file mode 100644 index 0000000000000000000000000000000000000000..2e39f52a3d12efa8c19ea3032957e9f48d890d0e Binary files /dev/null and b/docs/images/flows/08 - Invoker Get Token.png differ diff --git 
a/docs/images/flows/09 - Invoker Send Request to AEF Service API.png b/docs/images/flows/09 - Invoker Send Request to AEF Service API.png new file mode 100644 index 0000000000000000000000000000000000000000..1e4a87c54b04d2d6524da2617fa615dab7318dc7 Binary files /dev/null and b/docs/images/flows/09 - Invoker Send Request to AEF Service API.png differ diff --git a/docs/images/robot_log_example.png b/docs/images/robot_log_example.png new file mode 100644 index 0000000000000000000000000000000000000000..6c15a031e26eae47fed53b21a1e69e2f7bfa89db Binary files /dev/null and b/docs/images/robot_log_example.png differ diff --git a/docs/images/robot_report_example.png b/docs/images/robot_report_example.png new file mode 100644 index 0000000000000000000000000000000000000000..1cf36d8766753d08938be73c87e958f4c8d56068 Binary files /dev/null and b/docs/images/robot_report_example.png differ diff --git a/docs/test_plan/README.md b/docs/test_plan/README.md new file mode 100644 index 0000000000000000000000000000000000000000..d25099756a27c799848098e6e343564298a2f0c2 --- /dev/null +++ b/docs/test_plan/README.md @@ -0,0 +1,16 @@ +[**[Return To Main]**] + +# Testplans +List of Common API Services implemented: +* [Api Invoker Management](./api_invoker_management/README.md) +* [Api Provider Management](./api_provider_management/README.md) +* [Api Publish Service](./api_publish_service/README.md) +* [Api Discover Service](./api_discover_service/README.md) +* [Api Events Service](./api_events_service/README.md) +* [Api Security Service](./api_security_service/README.md) +* [Api Logging Service](./api_logging_service/README.md) +* [Api Auditing Service](./api_auditing_service/README.md) +* [Api Access Control Policy](./api_access_control_policy/README.md) + + + [Return To Main]: ../../README.md#test-plan-documentation \ No newline at end of file diff --git a/docs/test_plan/api_access_control_policy/README.md b/docs/test_plan/api_access_control_policy/README.md new file mode 100644 index 0000000000000000000000000000000000000000..05a9e63227702c7972790a6ea248d6f9a1dea864 --- /dev/null +++ b/docs/test_plan/api_access_control_policy/README.md @@ -0,0 +1,813 @@ +[**[Return To All Test Plans]**] + +- [Test Plan for CAPIF Api Access Control Policy](#test-plan-for-capif-api-access-control-policy) +- [Tests](#tests) + - [Test Case 1: Retrieve ACL](#test-case-1-retrieve-acl) + - [Test Case 2: Retrieve ACL with 2 Service APIs published](#test-case-2-retrieve-acl-with-2-service-apis-published) + - [Test Case 3: Retrieve ACL with security context created by two different Invokers](#test-case-3-retrieve-acl-with-security-context-created-by-two-different-invokers) + - [Test Case 4: Retrieve ACL filtered by api-invoker-id](#test-case-4-retrieve-acl-filtered-by-api-invoker-id) + - [Test Case 5: Retrieve ACL filtered by supported-features](#test-case-5-retrieve-acl-filtered-by-supported-features) + - [Test Case 6: Retrieve ACL with aef-id not valid](#test-case-6-retrieve-acl-with-aef-id-not-valid) + - [Test Case 7: Retrieve ACL with service-id not valid](#test-case-7-retrieve-acl-with-service-id-not-valid) + - [Test Case 8: Retrieve ACL with service-api-id and aef-id not valid](#test-case-8-retrieve-acl-with-service-api-id-and-aef-id-not-valid) + - [Test Case 9: Retrieve ACL without SecurityContext created previously by Invoker](#test-case-9-retrieve-acl-without-securitycontext-created-previously-by-invoker) + - [Test Case 10: Retrieve ACL filtered by api-invoker-id not 
present](#test-case-10-retrieve-acl-filtered-by-api-invoker-id-not-present) + - [Test Case 11: Retrieve ACL with APF Certificate](#test-case-11-retrieve-acl-with-apf-certificate) + - [Test Case 12: Retrieve ACL with AMF Certificate](#test-case-12-retrieve-acl-with-amf-certificate) + - [Test Case 13: Retrieve ACL with Invoker Certificate](#test-case-13-retrieve-acl-with-invoker-certificate) + - [Test Case 14: No ACL for invoker after be removed](#test-case-14-no-acl-for-invoker-after-be-removed) + + + + +# Test Plan for CAPIF Api Access Control Policy +At this documentation you will have all information and related files and examples of test plan for this API. + +# Tests + +## Test Case 1: Retrieve ACL +* **Test ID**: ***capif_api_acl-1*** +* **Description**: + + This test case will check that an API Provider can retrieve ACL from CAPIF +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * API Invoker had a Security Context for Service API published. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + 5. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 6. Provider Retrieve ACL + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId}?aef-id=${aef_id}* + * Use *serviceApiId* and *aefId* + * Use AEF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information. + +* **Expected Result**: + + 1. ACL Response: + 1. **200 OK** Response. + 2. body returned must accomplish **AccessControlPolicyList** data structure. + 3. apiInvokerPolicies must: + 1. contain only one object. + 2. apiInvokerId must match apiInvokerId registered previously. + + +## Test Case 2: Retrieve ACL with 2 Service APIs published +* **Test ID**: ***capif_api_acl-2*** +* **Description**: + + This test case will check that an API Provider can retrieve ACL from CAPIF for 2 different serviceApis published. +* **Pre-Conditions**: + + * API Provider had two Service API Published on CAPIF + * API Invoker had a Security Context for both Service APIs published. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_2 + * Store *serviceApiId* + * Use APF Certificate + + 4. Perform [Invoker Onboarding] store apiInvokerId + 5. Discover published APIs + 6. Create Security Context for this Invoker for both published APIs + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 7. 
Provider Retrieve ACL for serviceApiId1 + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId1}?aef-id=${aef_id}* + * Use *serviceApiId* and *aefId* + * Use AEF Provider Certificate + + 8. Provider Retrieve ACL for serviceApiId2 + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId2}?aef-id=${aef_id}* + * Use *serviceApiId* and *aefId* + * Use AEF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 and service_2 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information for service_1. + 7. Provider Get ACL information for service_2. + +* **Expected Result**: + + 1. ACL Response: + 1. **200 OK** Response. + 2. body returned must accomplish **AccessControlPolicyList** data structure. + 3. apiInvokerPolicies must: + 1. contain one object. + 2. apiInvokerId must match apiInvokerId registered previously. + +## Test Case 3: Retrieve ACL with security context created by two different Invokers +* **Test ID**: ***capif_api_acl-3*** +* **Description**: + + This test case will check that an API Provider can retrieve ACL from CAPIF containing 2 objects. +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * Two API Invokers had a Security Context for same Service API published by provider. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + 5. Create Security Context for this Invoker for both published APIs + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 6. Repeat previous 3 steps in order to have a new Invoker. + + 7. Provider Retrieve ACL for serviceApiId + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId1}?aef-id=${aef_id}* + * Use *serviceApiId* and *aefId* + * Use AEF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 and service_2 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information. + +* **Expected Result**: + + 1. ACL Response: + 1. **200 OK** Response. + 2. body returned must accomplish **AccessControlPolicyList** data structure. + 3. apiInvokerPolicies must: + 1. Contain two objects. + 2. One object must match with apiInvokerId1 and the other one with apiInvokerId2 an registered previously. + +## Test Case 4: Retrieve ACL filtered by api-invoker-id +* **Test ID**: ***capif_api_acl-4*** +* **Description**: + + This test case will check that an API Provider can retrieve ACL filtering by apiInvokerId from CAPIF containing 1 objects. +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * Two API Invokers had a Security Context for same Service API published by provider. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. 
Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 4. Perform [Invoker Onboarding] store apiInvokerId + 6. Discover published APIs + 7. Create Security Context for this Invoker for both published APIs + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 8. Repeat previous 3 steps in order to have a new Invoker. + + 9. Provider Retrieve ACL for serviceApiId + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId1}?aef-id=${aef_id}&api-invoker-id={apiInvokerId1}* + * Use *serviceApiId*, *aefId* and apiInvokerId1 + * Use AEF Provider Certificate + + 10. Provider Retrieve ACL for serviceApiId + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId1}?aef-id=${aef_id}&api-invoker-id={apiInvokerId2}* + * Use *serviceApiId*, *aefId* and apiInvokerId2 + * Use AEF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 and service_2 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information with query parameter indicating first api-invoker-id. + 7. Provider Get ACL information with query parameter indicating second api-invoker-id. + +* **Expected Result**: + + 1. ACL Response: + 1. **200 OK** Response. + 2. body returned must accomplish **AccessControlPolicyList** data structure. + 3. apiInvokerPolicies must: + 1. Contain one objects. + 2. Object must match with apiInvokerId1. + + 2. ACL Response: + 1. **200 OK** Response. + 2. body returned must accomplish **AccessControlPolicyList** data structure. + 3. apiInvokerPolicies must: + 1. Contain one objects. + 2. Object must match with apiInvokerId2. + +## Test Case 5: Retrieve ACL filtered by supported-features +* **Test ID**: ***capif_api_acl-5*** +* **Description**: + **CURRENTLY NOT SUPPORTED FEATURE** + + This test case will check that an API Provider can retrieve ACL filtering by supportedFeatures from CAPIF containing 1 objects. + +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * Two API Invokers had a Security Context for same Service API published by provider. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + 5. Create Security Context for this Invoker for both published APIs + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 6. Repeat previous 3 steps in order to have a new Invoker. + + 7. Provider Retrieve ACL for serviceApiId + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId1}?aef-id=${aef_id}&supported-features={apiInvokerId1}* + * Use *serviceApiId*, *aefId* and apiInvokerId1 + * Use AEF Provider Certificate + + 8. 
Provider Retrieve ACL for serviceApiId + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId1}?aef-id=${aef_id}&supported-features={apiInvokerId2}* + * Use *serviceApiId*, *aefId* and apiInvokerId2 + * Use AEF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 and service_2 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information with query parameter indicating first supported-features. + 7. Provider Get ACL information with query parameter indicating second supported-features. + +* **Expected Result**: + + 1. ACL Response: + 1. **200 OK** Response. + 2. body returned must accomplish **AccessControlPolicyList** data structure. + 3. apiInvokerPolicies must: + 1. Contain one objects. + 2. Object must match with supportedFeatures1. + + 2. ACL Response: + 1. **200 OK** Response. + 2. body returned must accomplish **AccessControlPolicyList** data structure. + 3. apiInvokerPolicies must: + 1. Contain one objects. + 2. Object must match with supportedFeatures1. + + +## Test Case 6: Retrieve ACL with aef-id not valid +* **Test ID**: ***capif_api_acl-6*** +* **Description**: + + This test case will check that an API Provider can't retrieve ACL from CAPIF if aef-id is not valid +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * API Invoker had a Security Context for Service API published. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + 5. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 6. Provider Retrieve ACL + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId}?aef-id=${AEF_ID_NOT_VALID}* + * Use *serviceApiId* and *AEF_ID_NOT_VALID* + * Use AEF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information. + +* **Expected Result**: + + 1. ACL Response: + 1. **404 Not Found** Response. + 2. body returned must accomplish **Problem Details** data structure. + 3. apiInvokerPolicies must: + * status **404** + * title with message "Not Found" + * detail with message "No ACLs found for the requested service: {service_api_id}, aef_id: {aef_id}, invoker: {api_invoker_id} and supportedFeatures: {supported_features}". + * cause with message "Wrong id". + + +## Test Case 7: Retrieve ACL with service-id not valid +* **Test ID**: ***capif_api_acl-7*** +* **Description**: + + This test case will check that an API Provider can't retrieve ACL from CAPIF if service-api-id is not valid +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * API Invoker had a Security Context for Service API published. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. 
Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + 5. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 6. Provider Retrieve ACL + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${NOT_VALID_SERVICE_API_ID}?aef-id=${aef_id}* + * Use *NOT_VALID_SERVICE_API_ID* and *aef_id* + * Use AEF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information. + +* **Expected Result**: + + 1. ACL Response: + 1. **404 Not Found** Response. + 2. body returned must accomplish **Problem Details** data structure. + 3. apiInvokerPolicies must: + * status **404** + * title with message "Not Found" + * detail with message "No ACLs found for the requested service: {service_api_id}, aef_id: {aef_id}, invoker: {api_invoker_id} and supportedFeatures: {supported_features}". + * cause with message "Wrong id". + +## Test Case 8: Retrieve ACL with service-api-id and aef-id not valid +* **Test ID**: ***capif_api_acl-8*** +* **Description**: + + This test case will check that an API Provider can't retrieve ACL from CAPIF if service-api-id and aef-id are not valid +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * API Invoker had a Security Context for Service API published. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + 5. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 6. Provider Retrieve ACL + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${NOT_VALID_SERVICE_API_ID}?aef-id=${AEF_ID_NOT_VALID}* + * Use *NOT_VALID_SERVICE_API_ID* and *aef_id* + * Use AEF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information. + +* **Expected Result**: + + 1. ACL Response: + 1. **404 Not Found** Response. + 2. body returned must accomplish **Problem Details** data structure. + 3. apiInvokerPolicies must: + * status **404** + * title with message "Not Found" + * detail with message "No ACLs found for the requested service: {NOT_VALID_SERVICE_API_ID}, aef_id: {AEF_ID_NOT_VALID}, invoker: {api_invoker_id} and supportedFeatures: {supported_features}". + * cause with message "Wrong id". 
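To complement the steps above, the following is a minimal sketch of the ACL retrieval request that Test Cases 1-8 build on, written with the Python `requests` library. The hostname, certificate paths and identifiers are placeholders (assumptions); mutual TLS with the AEF certificate is used as described in the test steps.

```python
import requests

CAPIF_HOSTNAME = "capif-hostname"        # assumption: your CCF host
AEF_CERT = ("aef.crt", "aef.key")        # assumption: AEF certificate/key obtained at provider registration
CA_BUNDLE = "ca.crt"                     # assumption: CAPIF CA certificate used to verify the CCF

service_api_id = "your-service-api-id"   # stored when publishing service_1
aef_id = "your-aef-id"                   # AEF identifier of the provider

# Provider retrieves the ACL for a published service API.
response = requests.get(
    f"https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/{service_api_id}",
    params={"aef-id": aef_id},           # api-invoker-id may be added here to filter by invoker
    cert=AEF_CERT,                       # mutual TLS with the AEF certificate
    verify=CA_BUNDLE,
)

print(response.status_code)   # 200 when a security context exists, 404 otherwise
print(response.json())        # AccessControlPolicyList (or ProblemDetails on error)
```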
+ + +## Test Case 9: Retrieve ACL without SecurityContext created previously by Invoker +* **Test ID**: ***capif_api_acl-9*** +* **Description**: + + This test case will check that an API Provider can't retrieve ACL if no invoker had requested Security Context to CAPIF +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * API Invoker created but no Security Context for Service API published had been requested. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + + 5. Provider Retrieve ACL + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId}?aef-id=${aef_id}* + * Use *serviceApiId* and *aefId* + * Use AEF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information. + +* **Expected Result**: + + 1. ACL Response: + 1. **404 Not Found** Response. + 2. body returned must accomplish **Problem Details** data structure. + 3. apiInvokerPolicies must: + * status **404** + * title with message "Not Found" + * detail with message "No ACLs found for the requested service: {NOT_VALID_SERVICE_API_ID}, aef_id: {AEF_ID_NOT_VALID}, invoker: {api_invoker_id} and supportedFeatures: {supported_features}". + * cause with message "Wrong id". + +## Test Case 10: Retrieve ACL filtered by api-invoker-id not present +* **Test ID**: ***capif_api_acl-10*** +* **Description**: + + This test case will check that an API Provider get not found response if filter by not valid api-invoker-id doesn't match any registered ACL. +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * API Invoker had a Security Context for Service API published. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + 5. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 6. Provider Retrieve ACL + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId}?aef-id=${aef_id}&api-invoker-id={NOT_VALID_API_INVOKER_ID}* + * Use *serviceApiId*, *aefId* and *NOT_VALID_API_INVOKER_ID* + * Use AEF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information. + +* **Expected Result**: + + 1. ACL Response: + 1. **404 Not Found** Response. + 2. body returned must accomplish **Problem Details** data structure. + 3. 
apiInvokerPolicies must: + * status **404** + * title with message "Not Found" + * detail with message "No ACLs found for the requested service: {NOT_VALID_SERVICE_API_ID}, aef_id: {AEF_ID_NOT_VALID}, invoker: {api_invoker_id} and supportedFeatures: {supported_features}". + * cause with message "Wrong id". + +## Test Case 11: Retrieve ACL with APF Certificate +* **Test ID**: ***capif_api_acl-11*** +* **Description**: + + This test case will check that an API Provider can't retrieve ACL from CAPIF using APF Certificate +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * API Invoker had a Security Context for Service API published. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + 5. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 6. Provider Retrieve ACL + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId}?aef-id=${aef_id}* + * Use *serviceApiId* and *aefId* + * Use APF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information. + +* **Expected Result**: + + 1. Response to Logging Service must accomplish: + 1. **401 Unauthorized** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 401 + * title with message "Unauthorized" + * detail with message "Role not authorized for this API route". + * cause with message "Certificate not authorized". + +## Test Case 12: Retrieve ACL with AMF Certificate +* **Test ID**: ***capif_api_acl-12*** +* **Description**: + + This test case will check that an API Provider can't retrieve ACL from CAPIF using AMF Certificate +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * API Invoker had a Security Context for Service API published. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + 5. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 6. Provider Retrieve ACL + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId}?aef-id=${aef_id}* + * Use *serviceApiId* and *aefId* + * Use AMF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information. + +* **Expected Result**: + + 1. 
Response to Logging Service must accomplish: + 1. **401 Unauthorized** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 401 + * title with message "Unauthorized" + * detail with message "Role not authorized for this API route". + * cause with message "Certificate not authorized". + +## Test Case 13: Retrieve ACL with Invoker Certificate +* **Test ID**: ***capif_api_acl-13*** +* **Description**: + + This test case will check that an API Provider can't retrieve ACL from CAPIF using Invoker Certificate +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * API Invoker had a Security Context for Service API published. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + 5. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 6. Provider Retrieve ACL + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId}?aef-id=${aef_id}* + * Use *serviceApiId* and *aefId* + * Use Invoker Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information. + +* **Expected Result**: + + 1. Response to Logging Service must accomplish: + 1. **401 Unauthorized** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 401 + * title with message "Unauthorized" + * detail with message "Role not authorized for this API route". + * cause with message "Certificate not authorized". + +## Test Case 14: No ACL for invoker after be removed +* **Test ID**: ***capif_api_acl-14*** +* **Description**: + + This test case will check that ACLs are removed after invoker is removed. +* **Pre-Conditions**: + + * API Provider had a Service API Published on CAPIF + * API Invoker had a Security Context for Service API published and ACL is present + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Store *serviceApiId* + * Use APF Certificate + + 3. Perform [Invoker Onboarding] store apiInvokerId + 4. Discover published APIs + 5. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + + 6. Provider Retrieve ACL + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId}?aef-id=${aef_id}&api-invoker-id={api-invoker-id}* + * Use *serviceApiId*, *aefId* and *api-invoker-id* + * Use AEF Provider Certificate + 7. Remove Invoker from CAPIF + 8. 
Provider Retrieve ACL + * Send GET *https://{CAPIF_HOSTNAME}/access-control-policy/v1/accessControlPolicyList/${serviceApiId}?aef-id=${aef_id}&api-invoker-id={api-invoker-id}* + * Use *serviceApiId*, *aefId* and *api-invoker-id* + * Use AEF Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Provider at CCF. + 2. Publish a provider API with name service_1 + 3. Register and onboard Invoker at CCF + 4. Store signed Certificate + 5. Create Security Context + 6. Provider Get ACL information of invoker. + 7. Remove Invoker from CAPIF. + 8. Provider Get ACL information of invoker. + +* **Expected Result**: + 1. ACL Response: + 1. **200 OK** Response. + 2. body returned must accomplish **AccessControlPolicyList** data structure. + 3. apiInvokerPolicies must: + 1. contain only one object. + 2. apiInvokerId must match apiInvokerId registered previously. + + 2. ACL Response: + 1. **404 Not Found** Response. + 2. body returned must accomplish **Problem Details** data structure. + 3. apiInvokerPolicies must: + * status **404** + * title with message "Not Found" + * detail with message "No ACLs found for the requested service: {NOT_VALID_SERVICE_API_ID}, aef_id: {AEF_ID_NOT_VALID}, invoker: None and supportedFeatures: None". + * cause with message "Wrong id". + + + +[Return To All Test Plans]: ../README.md + +[service api description]: ../api_publish_service/service_api_description_post_example.json "Service API Description Request" +[publisher register body]: ../api_publish_service/publisher_register_body.json "Publish register Body" +[service security body]: ./service_security.json "Service Security Request" +[security notification body]: ./security_notification.json "Security Notification Request" +[access token req body]: ./access_token_req.json "Access Token Request" +[example]: ./access_token_req.json "Access Token Request Example" +[invoker onboarding]: ../common_operations/README.md#register-an-invoker "Invoker Onboarding" +[provider registration]: ../common_operations/README.md#register-a-provider "Provider Registration" diff --git a/docs/test_plan/api_access_control_policy/service_api_description_post_example.json b/docs/test_plan/api_access_control_policy/service_api_description_post_example.json new file mode 100644 index 0000000000000000000000000000000000000000..b725b428629509bf39a79c030f1bf93f4b6f18f6 --- /dev/null +++ b/docs/test_plan/api_access_control_policy/service_api_description_post_example.json @@ -0,0 +1,113 @@ +{ + "apiName": "service_1", + "aefProfiles": [ + { + "aefId": "string", + "versions": [ + { + "apiVersion": "v1", + "expiry": "2021-11-30T10:32:02.004Z", + "resources": [ + { + "resourceName": "string", + "commType": "REQUEST_RESPONSE", + "uri": "string", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ], + "custOperations": [ + { + "commType": "REQUEST_RESPONSE", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ] + } + ], + "protocol": "HTTP_1_1", + "dataFormat": "JSON", + "securityMethods": ["PSK"], + "interfaceDescriptions": [ + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + }, + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + } + ] + }, + { + "aefId": "string", + "versions": [ + { + "apiVersion": "v1", + "expiry": "2021-11-30T10:32:02.004Z", + "resources": [ + { + "resourceName": "string", + "commType": "REQUEST_RESPONSE", + "uri": "string", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": 
"string" + } + ], + "custOperations": [ + { + "commType": "REQUEST_RESPONSE", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ] + } + ], + "protocol": "HTTP_1_1", + "dataFormat": "JSON", + "securityMethods": ["PSK"], + "interfaceDescriptions": [ + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + }, + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + } + ] + } + ], + "description": "string", + "supportedFeatures": "fffff", + "shareableInfo": { + "isShareable": true, + "capifProvDoms": [ + "string" + ] + }, + "serviceAPICategory": "string", + "apiSuppFeats": "fffff", + "pubApiPath": { + "ccfIds": [ + "string" + ] + }, + "ccfId": "string" +} \ No newline at end of file diff --git a/docs/test_plan/api_auditing_service/README.md b/docs/test_plan/api_auditing_service/README.md new file mode 100644 index 0000000000000000000000000000000000000000..bd3204c0aa3cc0eeb63b3f85f6f6629914957aa4 --- /dev/null +++ b/docs/test_plan/api_auditing_service/README.md @@ -0,0 +1,244 @@ +[**[Return To All Test Plans]**] + +- [Test Plan for CAPIF Api Auditing Service](#test-plan-for-capif-api-auditing-service) +- [Tests](#tests) + - [Test Case 1: Get a CAPIF Log Entry.](#test-case-1-creates-a-new-individual-capif-log-entry) + + +# Test Plan for CAPIF Api Auditing Service +At this documentation you will have all information and related files and examples of test plan for this API. + +# Tests + +## Test Case 1: Get CAPIF Log Entry. +* Test ID: ***capif_api_auditing-1*** +* Description: + + This test case will check that a CAPIF AMF can get log entry to Logging Service +* Pre-Conditions: + + * CAPIF provider is pre-authorised (has valid AMF cert from CAPIF Authority) + * Service exist in CAPIF + * Invoker exist in CAPIF + * Log Entry exist in CAPIF + +* Information of Test: + + 1. Perform [provider onboarding], [invoker onboarding] + + 2. Publish Service API at CCF: + - Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + - body [service api description] with apiName service_1 + - Use APF Certificate + + 3. Create Log Entry: + - Send POST to *https://{CAPIF_HOSTNAME}/api-invocation-logs/v1/{aefId}/logs* + - body [log entry request body] + - Use AEF Certificate + + 4. Get Log: + 1. Send GET to *https://{CAPIF_HOSTNAME}/logs/v1/apiInvocationLogs?aef-id={aefId}&api-invoker-id={api-invoker-id}* + 2. Use AMF Certificate + +* Execution Steps: + 1. Register Provider and Invoker CCF + 2. Publish Service + 3. Create Log Entry + 4. Get Log Entry + +* Expected Result: + + 1. Response to Logging Service must accomplish: + 1. **200 OK** + 2. Response Body must follow **InvocationLog** data structure with: + * aefId + * apiInvokerId + * logs + +## Test Case 2: Get CAPIF Log Entry With no Log entry in CAPIF. +* Test ID: ***capif_api_auditing-2*** +* Description: + + This test case will check that a CAPIF AEF can create log entry to Logging Service +* Pre-Conditions: + + * CAPIF provider is pre-authorised (has valid AMF cert from CAPIF Authority) + * Service exist in CAPIF + * Invoker exist in CAPIF + + +* Information of Test: + + 1. Perform [provider onboarding], [invoker onboarding] + + 2. Publish Service API at CCF: + - Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + - body [service api description] with apiName service_1 + - Use APF Certificate + + 4. Get Log: + 1. 
Send GET to *https://{CAPIF_HOSTNAME}/logs/v1/apiInvocationLogs?aef-id={aefId}&api-invoker-id={api-invoker-id}* + 2. Use AMF Certificate + +* Execution Steps: + 1. Register Provider and Invoker CCF + 2. Publish Service + 3. Get Log Entry + +* Expected Result: + + 1. Response to Logging Service must accomplish: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found Log Entry in CAPIF". + * cause with message "Not Exist Logs with the filters applied". + + +## Test Case 3: Get CAPIF Log Entry without aef-id and api-invoker-id. +* Test ID: ***capif_api_auditing-3*** +* Description: + + This test case will check that a CAPIF AEF can create log entry to Logging Service +* Pre-Conditions: + + * CAPIF provider is no pre-authorised (has no valid AMF cert from CAPIF Authority) + * Service exist in CAPIF + * Invoker exist in CAPIF + * Log Entry exist in CAPIF + +* Information of Test: + + 1. Perform [provider onboarding], [invoker onboarding] + + 2. Publish Service API at CCF: + - Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + - body [service api description] with apiName service_1 + - Use APF Certificate + + 3. Create Log Entry: + - Send POST to *https://{CAPIF_HOSTNAME}/api-invocation-logs/v1/{aefId}/logs* + - body [log entry request body] + - Use AEF Certificate + + 4. Get Log: + 1. Send GET to *https://{CAPIF_HOSTNAME}/logs/v1/apiInvocationLogs + 2. Use AMF Certificate + +* Execution Steps: + 1. Register Provider and Invoker CCF + 2. Publish Service + 3. Create Log Entry + 4. Get Log Entry + +* Expected Result: + + 1. Response to Logging Service must accomplish: + 1. **400 Bad Request** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 400 + * title with message "Bad Request" + * detail with message "aef_id and api_invoker_id parameters are mandatory". + * cause with message "Mandatory parameters missing". + + +## Test Case 4: Get CAPIF Log Entry with filtter api-version. +* Test ID: ***capif_api_auditing-4*** +* Description: + + This test case will check that a CAPIF AMF can get log entry to Logging Service +* Pre-Conditions: + + * CAPIF provider is pre-authorised (has valid AMF cert from CAPIF Authority) + * Service exist in CAPIF + * Invoker exist in CAPIF + * Log Entry exist in CAPIF + +* Information of Test: + + 1. Perform [provider onboarding], [invoker onboarding] + + 2. Publish Service API at CCF: + - Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + - body [service api description] with apiName service_1 + - Use APF Certificate + + 3. Create Log Entry: + - Send POST to *https://{CAPIF_HOSTNAME}/api-invocation-logs/v1/{aefId}/logs* + - body [log entry request body] + - Use AEF Certificate + + 4. Get Log: + 1. Send GET to *https://{CAPIF_HOSTNAME}/logs/v1/apiInvocationLogs?aef-id={aefId}&api-invoker-id={api-invoker-id}&api-version={v1}* + 2. Use AMF Certificate + +* Execution Steps: + 1. Register Provider and Invoker CCF + 2. Publish Service + 3. Create Log Entry + 4. Get Log Entry + +* Expected Result: + + 1. Response to Logging Service must accomplish: + 1. **200 OK** + 2. Response Body must follow **InvocationLog** data structure with: + * aefId + * apiInvokerId + * logs + + +## Test Case 5: Get CAPIF Log Entry with filter api-version but not exist in log entry. 
+* Test ID: ***capif_api_auditing-4*** +* Description: + + This test case will check that a CAPIF AMF can get log entry to Logging Service +* Pre-Conditions: + + * CAPIF provider is pre-authorised (has valid AMF cert from CAPIF Authority) + * Service exist in CAPIF + * Invoker exist in CAPIF + * Log Entry exist in CAPIF + +* Information of Test: + + 1. Perform [provider onboarding], [invoker onboarding] + + 2. Publish Service API at CCF: + - Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + - body [service api description] with apiName service_1 + - Use APF Certificate + + 3. Create Log Entry: + - Send POST to *https://{CAPIF_HOSTNAME}/api-invocation-logs/v1/{aefId}/logs* + - body [log entry request body] + - Use AEF Certificate + + 4. Get Log: + 1. Send GET to *https://{CAPIF_HOSTNAME}/logs/v1/apiInvocationLogs?aef-id={aefId}&api-invoker-id={api-invoker-id}&api-version={v58}* + 2. Use AMF Certificate + +* Execution Steps: + 1. Register Provider and Invoker CCF + 2. Publish Service + 3. Create Log Entry + 4. Get Log Entry + +* Expected Result: + + 1. Response to Logging Service must accomplish: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * detail with message "Parameters do not match any log entry" + * cause with message "No logs found". + + + +[log entry request body]: ../api_logging_service/invocation_log.json "Log Request Body" + +[invoker onboarding]: ../common_operations/README.md#register-an-invoker "Invoker Onboarding" + +[provider onboarding]: ../common_operations/README.md#register-a-provider "Provider Onboarding" + +[Return To All Test Plans]: ../README.md \ No newline at end of file diff --git a/docs/test_plan/api_discover_service/README.md b/docs/test_plan/api_discover_service/README.md new file mode 100644 index 0000000000000000000000000000000000000000..3125c888c96eadcbda6a7b6a688aa721f2714000 --- /dev/null +++ b/docs/test_plan/api_discover_service/README.md @@ -0,0 +1,336 @@ +[**[Return To All Test Plans]**] + +- [Test Plan for CAPIF Discover Service](#test-plan-for-capif-discover-service) +- [Tests](#tests) + - [Test Case 1: Discover Published service APIs by Authorised API Invoker](#test-case-1-discover-published-service-apis-by-authorised-api-invoker) + - [Test Case 2: Discover Published service APIs by Non Authorised API Invoker](#test-case-2-discover-published-service-apis-by-non-authorised-api-invoker) + - [Test Case 3: Discover Published service APIs by not registered API Invoker](#test-case-3-discover-published-service-apis-by-not-registered-api-invoker) + - [Test Case 4: Discover Published service APIs by registered API Invoker with 1 result filtered](#test-case-4-discover-published-service-apis-by-registered-api-invoker-with-1-result-filtered) + - [Test Case 5: Discover Published service APIs by registered API Invoker filtered with no match](#test-case-5-discover-published-service-apis-by-registered-api-invoker-filtered-with-no-match) + - [Test Case 6: Discover Published service APIs by registered API Invoker not filtered](#test-case-6-discover-published-service-apis-by-registered-api-invoker-not-filtered) + + +# Test Plan for CAPIF Discover Service +At this documentation you will have all information and related files and examples of test plan for this API. 
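As orientation for the test cases below, this is a minimal sketch of the discovery request they exercise, using the Python `requests` library. The hostname, certificate paths and the api-invoker-id value are assumptions.

```python
import requests

CAPIF_HOSTNAME = "capif-hostname"                 # assumption: your CCF host
INVOKER_CERT = ("invoker.crt", "invoker.key")     # assumption: invoker certificate/key obtained at onboarding
CA_BUNDLE = "ca.crt"                              # assumption: CAPIF CA certificate

api_invoker_id = "your-api-invoker-id"            # returned in the onboarding response

# Invoker discovers published service APIs; api-invoker-id is mandatory,
# api-name is an optional filter (e.g. service_1).
response = requests.get(
    f"https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs",
    params={"api-invoker-id": api_invoker_id, "api-name": "service_1"},
    cert=INVOKER_CERT,        # mutual TLS with the invoker certificate
    verify=CA_BUNDLE,
)

print(response.status_code)   # 200 when authorised, 404 when no published API matches the filters
print(response.json())        # DiscoveredAPIs data structure
```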
+ +# Tests + +## Test Case 1: Discover Published service APIs by Authorised API Invoker +* **Test ID**: ***capif_api_discover_service-1*** +* **Description**: + + This test case will check if NetApp (Invoker) can discover published service APIs. +* **Pre-Conditions**: + * Service APIs are published. + * NetApp was registered previously + * NetApp was onboarded previously with {onboardingId} + +* **Information of Test**: + 1. Perform [Provider Registration] and [Invoker Onboarding] + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Use APF Certificate + 3. Request Discover Published APIs: + * Send GET to *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Use Invoker Certificate + +* **Execution Steps**: + + 1. Register Provider at CCF, store certificates and Publish Service API at CCF + 2. Register Invoker and Onboard Invoker at CCF + 3. Discover Service APIs by Invoker + +* **Expected Result**: + + 1. Response to Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + + 2. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + 3. Response to Discover Request By Invoker: + 1. **200 OK** response. + 2. Response body must follow **DiscoveredAPIs** data structure: + * Check if DiscoveredAPIs contains the API Published previously + + +## Test Case 2: Discover Published service APIs by Non Authorised API Invoker +* **Test ID**: ***capif_api_discover_service-2*** +* **Description**: + + This test case will check that an API Publisher can't discover published APIs because is not authorized. + +* **Pre-Conditions**: + * Service APIs are published. + +* **Information of Test**: + 1. Perform [Provider Registration] and [Invoker Onboarding] + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Use APF Certificate + 3. Request Discover Published APIs by no invoker entity: + * Send GET to *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Use not Invoker Certificate + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API at CCF + 2. Register Invoker and Onboard Invoker at CCF + 3. Discover Service APIs by no invoker entity + +* **Expected Result**: + + 1. Response to Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. 
Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + + 2. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + + 3. Response to Discover Request By no invoker entity: + 1. **401 Unauthorized** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 401 + * title with message "Unauthorized" + * detail with message "User not authorized". + * cause with message "Certificate not authorized". + + +## Test Case 3: Discover Published service APIs by not registered API Invoker +* **Test ID**: ***capif_api_discover_service-3*** +* **Description**: + + This test case will check that a not registered invoker is forbidden to discover published APIs. + +* **Pre-Conditions**: + * Service APIs are published. + +* **Information of Test**: + 1. Perform [Provider Registration] and [Invoker Onboarding] + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Use APF Certificate + 3. Request Discover Published APIs with not valid apiInvoker: + * Send GET to *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={INVOKER_NOT_REGISTERED}* + * Param api-invoker-id is mandatory + * Using invoker certificate + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API at CCF + 2. Register Invoker and Onboard Invoker at CCF + 3. Discover Service APIs by Publisher + +* **Expected Result**: + 1. Response to Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + + 2. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + + 3. Response to Discover Request By Invoker: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found" + * detail with message "API Invoker does not exist". + * cause with message "API Invoker id not found". + + +## Test Case 4: Discover Published service APIs by registered API Invoker with 1 result filtered +* **Test ID**: ***capif_api_discover_service-4*** +* **Description**: + + This test case will check if NetApp (Invoker) can discover published service APIs. +* **Pre-Conditions**: + * At least 2 Service APIs are published. 
+ * NetApp was registered previously + * NetApp was onboarded previously with {onboardingId} + +* **Information of Test**: + 1. Perform [Provider Registration] and [Invoker Onboarding] + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Use APF Certificate + 3. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_2 + * Use APF Certificate + 4. Request Discover Published APIs filtering by api-name: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}&api-name=service_1* + * Param api-invoker-id is mandatory + * Using invoker certificate + * filter by api-name service_1 + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API service_1 and service_2 at CCF + 2. Register Invoker and Onboard Invoker at CCF + 3. Discover Service APIs by Invoker. + 4. Discover filtered by api-name service_1 Service APIs by Invoker + +* **Expected Result**: + 1. Response to Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + 2. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + 3. Response to Discover Request By Invoker: + 1. **200 OK** response. + 2. Response body must follow **DiscoveredAPIs** data structure: + * Check if DiscoveredAPIs contains previously registered Service APIs published. + 4. Response to Discover Request By Invoker: + 1. **200 OK** response. + 2. Response body must follow **DiscoveredAPIs** data structure: + * Check if DiscoveredAPIs contains only Service API published with api-name service_1 + + +## Test Case 5: Discover Published service APIs by registered API Invoker filtered with no match +* **Test ID**: ***capif_api_discover_service-5*** +* **Description**: + This test case will check if NetApp (Invoker) can discover published service APIs. +* **Pre-Conditions**: + * At least 2 Service APIs are published. + * NetApp was registered previously + * NetApp was onboarded previously with {onboardingId} + +* **Information of Test**: + 1. Perform [Provider Registration] and [Invoker Onboarding] + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Use APF Certificate + 3. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_2 + * Use APF Certificate + 4. 
Request Discover Published APIs filtering by api-name not published: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}&api-name=NOT_VALID_NAME* + * Param api-invoker-id is mandatory + * Using invoker certificate + * filter by api-name NOT_VALID_NAME + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API service_1 and service_2 at CCF + 2. Register Invoker and Onboard Invoker at CCF + 3. Discover Service APIs by Invoker. + 4. Discover filtered by api-name not published Service APIs by Invoker + +* **Expected Result**: + 1. Response to Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + 2. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + 3. Response to Discover Request By Invoker: + 1. **200 OK** response. + 2. Response body must follow **DiscoveredAPIs** data structure: + * Check if DiscoveredAPIs contains previously registered Service APIs published. + 4. Response to Discover Request By Invoker: + 1. **404 Not Found** response. + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found" + * detail with message "API Invoker {api_invoker_id} has no API Published that accomplish filter conditions". + * cause with message "No API Published accomplish filter conditions". + + +## Test Case 6: Discover Published service APIs by registered API Invoker not filtered +* **Test ID**: ***capif_api_discover_service-6*** +* **Description**: + + This test case will check if NetApp (Invoker) can discover published service APIs. +* **Pre-Conditions**: + * 2 Service APIs are published. + * NetApp was registered previously + * NetApp was onboarded previously with {onboardingId} + +* **Information of Test**: + 1. Perform [Provider Registration] and [Invoker Onboarding] + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Use APF Certificate + 3. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_2 + * Use APF Certificate + 4. Request Discover Published APIs not filtered: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Using invoker certificate + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API service_1 and service_2 at CCF + 2. Register Invoker and Onboard Invoker at CCF + 3. Discover Service APIs by Invoker. + 4. Discover without filter by Invoker + +* **Expected Result**: + + 1. Response to Publish request must accomplish: + 1. 
**201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + + 2. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + + 3. Response to Discover Request By Invoker: + 1. **200 OK** response. + 2. Response body must follow **DiscoveredAPIs** data structure: + * Check if DiscoveredAPIs contains the 2 previously registered Service APIs published. + + + + [service api description]: ./api_publish_service/service_api_description_post_example.json "Service API **Description** Request" + [publisher register body]: ./api_publish_service/publisher_register_body.json "Publish register Body" + [invoker onboarding body]: ../api_invoker_management/invoker_details_post_example.json "API Invoker Request" + [invoker register body]: ../api_invoker_management/invoker_register_body.json "Invoker Register Body" + [provider request body]: ../api_provider_management/provider_details_post_example.json "API Provider Enrolment Request" + [provider request patch body]: ../api_provider_management/provider_details_enrolment_details_patch_example.json "API Provider Enrolment Patch Request" + [provider getauth body]: ../api_provider_management/provider_getauth_example.json "Get Auth Example" + [invoker onboarding]: ../common_operations/README.md#register-an-invoker "Invoker Onboarding" + [provider registration]: ../common_operations/README.md#register-a-provider "Provider Registration" + + +[Return To All Test Plans]: ../README.md diff --git a/docs/test_plan/api_events_service/README.md b/docs/test_plan/api_events_service/README.md new file mode 100644 index 0000000000000000000000000000000000000000..417c1aa623671fde54b4e894b8e359ee03dd9d36 --- /dev/null +++ b/docs/test_plan/api_events_service/README.md @@ -0,0 +1,265 @@ +[**[Return To All Test Plans]**] + +- [Test Plan for CAPIF Api Events Service](#test-plan-for-capif-api-events-service) +- [Tests](#tests) + - [Test Case 1: Creates a new individual CAPIF Event Subscription.](#test-case-1-creates-a-new-individual-capif-event-subscription) + - [Test Case 2: Creates a new individual CAPIF Event Subscription with Invalid SubscriberId](#test-case-2-creates-a-new-individual-capif-event-subscription-with-invalid-subscriberid) + - [Test Case 3: Deletes an individual CAPIF Event Subscription](#test-case-3-deletes-an-individual-capif-event-subscription) + - [Test Case 4: Deletes an individual CAPIF Event Subscription with invalid SubscriberId](#test-case-4-deletes-an-individual-capif-event-subscription-with-invalid-subscriberid) + - [Test Case 5: Deletes an individual CAPIF Event Subscription with invalid SubscriptionId](#test-case-5-deletes-an-individual-capif-event-subscription-with-invalid-subscriptionid) + + + +# Test Plan for CAPIF Api Events Service +At this documentation you will have all information and related files and examples of test plan for this API. + +# Tests + +## Test Case 1: Creates a new individual CAPIF Event Subscription. 
+* Test ID: ***capif_api_events-1*** +* Description: + + This test case will check that a CAPIF subscriber (Invoker or Publisher) can Subscribe to Events +* Pre-Conditions: + + * CAPIF subscriber is pre-authorised (has valid InvokerId or apfId from CAPIF Authority) + +* Information of Test: + + 1. Perform [Invoker Onboarding] + + 2. Event Subscription: + 1. Send POST to *https://{CAPIF_HOSTNAME}/capif-events/v1/{subscriberId}/subscriptions* + 2. body [event subscription request body] + 3. Use Invoker Certificate + +* Execution Steps: + + 1. Register Invoker and Onboard Invoker at CCF + 2. Subscribe to Events + 3. Retrieve {subscriberId} and {subscriptionId} from Location Header + +* Expected Result: + + 1. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + + 2. Response to Event Subscription must accomplish: + 1. **201 Created** + 2. The URI of the created resource shall be returned in the "Location" HTTP header, following this structure: *{apiRoot}/capif-events/{apiVersion}/{subscriberId}/subscriptions/{subscriptionId} + 3. Response Body must follow **EventSubscription** data structure. + + 3. Event Subscriptions are stored in CAPIF Database + + +## Test Case 2: Creates a new individual CAPIF Event Subscription with Invalid SubscriberId +* Test ID: ***capif_api_events-2*** +* Description: + + This test case will check that a CAPIF subscriber (Invoker or Publisher) cannot Subscribe to Events without valid SubcriberId +* Pre-Conditions: + + * CAPIF subscriber is not pre-authorised (has invalid InvokerId or apfId) + +* Information of Test: + + 1. Perform [Invoker Onboarding] + + 2. Event Subscription: + 1. Send POST to *https://{CAPIF_HOSTNAME}/capif-events/v1/{SUBSCRIBER_NOT_REGISTERED}/subscriptions* + 2. body [event subscription request body] + 3. Use Invoker Certificate + +* Execution Steps: + + 1. Register Invoker and Onboard Invoker at CCF + 2. Subscribe to Events + +* Expected Result: + + 1. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + + 2. Response to Event Subscription must accomplish: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found" + * detail with message "Invoker or APF or AEF or AMF Not found". + * cause with message "Subscriber Not Found". + + 3. Event Subscriptions are not stored in CAPIF Database + + +## Test Case 3: Deletes an individual CAPIF Event Subscription +* Test ID: ***capif_api_events-3*** +* Description: + + This test case will check that a CAPIF subscriber (Invoker or Publisher) can Delete an Event Subscription +* Pre-Conditions: + + * CAPIF subscriber is pre-authorised (has valid InvokerId or apfId from CAPIF Authority) + +* Information of Test: + + 1. 
Perform [Invoker Onboarding] + + 2. Event Subscription: + 1. Send POST to *https://{CAPIF_HOSTNAME}/capif-events/v1/{subscriberId}/subscriptions* + 2. body [event subscription request body] + 3. Use Invoker Certificate + + 3. Remove Event Subscription: + 1. Send DELETE to *https://{CAPIF_HOSTNAME}/capif-events/v1/{subscriberId}/subscriptions* + 2. Use Invoker Certificate + +* Execution Steps: + + 1. Register Invoker and Onboard Invoker at CCF + 2. Subscribe to Events + 3. Retrieve {subscriberId} and {subscriptionId} from Location Header + 4. Remove Event Subscription + +* Expected Result: + + 1. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + + 2. Response to Event Subscription must accomplish: + 1. **201 Created** + 2. The URI of the created resource shall be returned in the "Location" HTTP header, following this structure: *{apiRoot}/capif-events/{apiVersion}/{subscriberId}/subscriptions/{subscriptionId} + 3. Response Body must follow **EventSubscription** data structure. + + 3. Event Subscriptions are stored in CAPIF Database + 4. Remove Event Subscription: + 1. **204 No Content** + + 5. Event Subscription is not present at CAPIF Database. + + +## Test Case 4: Deletes an individual CAPIF Event Subscription with invalid SubscriberId +* Test ID: ***capif_api_events-4*** +* Description: + + This test case will check that a CAPIF subscriber (Invoker or Publisher) cannot Delete to Events without valid SubcriberId +* Pre-Conditions: + + * CAPIF subscriber is pre-authorised (has valid InvokerId or apfId). + * CAPIF subscriber is subscribed to Events. + +* Information of Test: + + 1. Perform [Invoker Onboarding] + + 2. Event Subscription: + 1. Send POST to https://{CAPIF_HOSTNAME}/capif-events/v1/{subscriberId}/subscriptions + 2. body [event subscription request body] + 3. Use Invoker Certificate + + 3. Remove Event Subcription with not valid subscriber: + 1. Send DELETE to to https://{CAPIF_HOSTNAME}/capif-events/v1/{SUBSCRIBER_ID_NOT_VALID}/subscriptions/{subcriptionId} + 2. Use Invoker Certificate + +* Execution Steps: + + 1. Register Invoker and Onboard Invoker at CCF + 2. Subscribe to Events + 3. Retrieve Location Header with subscriptionId. + 4. Remove Event Subscribed with not valid Subscriber. + +* Expected Result: + + 1. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + + 2. Response to Event Subscription must accomplish: + 1. 201 Created + 2. The URI of the created resource shall be returned in the "Location" HTTP header, following this structure: *{apiRoot}/capif-events/{apiVersion}/{subscriberId}/subscriptions/{subscriptionId} + 3. Response Body must follow **EventSubscription** data structure. + + 3. Event Subscriptions are stored in CAPIF Database + 4. 
Remove Event Subscription with not valid subscriber: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found" + * detail with message "Invoker or APF or AEF or AMF Not found". + * cause with message "Subscriber Not Found". + + +## Test Case 5: Deletes an individual CAPIF Event Subscription with invalid SubscriptionId +* Test ID: ***capif_api_events-5*** +* Description: + + This test case will check that a CAPIF subscriber (Invoker or Publisher) cannot Delete an Event Subscription without a valid SubscriptionId +* Pre-Conditions: + + * CAPIF subscriber is pre-authorised (has valid InvokerId or apfId). + * CAPIF subscriber is subscribed to Events. + +* Information of Test: + + 1. Perform [Invoker Onboarding] + + 2. Event Subscription: + 1. Send POST to https://{CAPIF_HOSTNAME}/capif-events/v1/{subscriberId}/subscriptions + 2. body [event subscription request body] + 3. Use Invoker Certificate + + 3. Remove Event Subscription with not valid subscriptionId: + 1. Send DELETE to https://{CAPIF_HOSTNAME}/capif-events/v1/{subscriberId}/subscriptions/{SUBSCRIPTION_ID_NOT_VALID} + 2. Use Invoker Certificate + +* Execution Steps: + + 1. Register Invoker and Onboard Invoker at CCF + 2. Subscribe to Events + 3. Retrieve Location Header with subscriptionId. + 4. Remove Event Subscription with not valid subscriptionId. + +* Expected Result: + + 1. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + + 2. Response to Event Subscription must accomplish: + 1. **201 Created** + 2. The URI of the created resource shall be returned in the "Location" HTTP header, following this structure: *{apiRoot}/capif-events/{apiVersion}/{subscriberId}/subscriptions/{subscriptionId}* + 3. Response Body must follow **EventSubscription** data structure. + + 3. Event Subscriptions are stored in CAPIF Database + 4. Remove Event Subscription with not valid subscriptionId: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * detail with message "Service API not existing". + * cause with message "Event API subscription id not found". 
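+
+The create/delete flow exercised above can be reproduced with a short script. The snippet below is an informative sketch only, not part of the test suite: it assumes an Invoker that is already onboarded and holds its signed certificate, and the hostname, certificate/key/CA paths and subscriber id are placeholders rather than values defined by this plan.
+
+```python
+# Informative sketch of Test Cases 1 and 3: create an Event Subscription and
+# delete it again. All constants below are assumptions for a concrete setup.
+import json
+import requests
+
+CAPIF_HOSTNAME = "capif.example.org"            # assumed CCF hostname
+INVOKER_CERT = ("invoker.crt", "invoker.key")   # signed invoker cert + key (assumed paths)
+CA_BUNDLE = "ca.crt"                            # CAPIF CA certificate (assumed path)
+subscriber_id = "API_INVOKER_ID"                # apiInvokerId obtained at onboarding
+
+base = f"https://{CAPIF_HOSTNAME}/capif-events/v1/{subscriber_id}/subscriptions"
+
+# Create the subscription using the example body shipped with this test plan.
+with open("event_subscription.json") as f:
+    body = json.load(f)
+resp = requests.post(base, json=body, cert=INVOKER_CERT, verify=CA_BUNDLE)
+assert resp.status_code == 201                  # Expected Result: 201 Created
+
+# The subscriptionId is the last segment of the Location header.
+subscription_id = resp.headers["Location"].rstrip("/").split("/")[-1]
+
+# Delete the individual subscription (Test Case 3) and expect 204 No Content.
+resp = requests.delete(f"{base}/{subscription_id}", cert=INVOKER_CERT, verify=CA_BUNDLE)
+assert resp.status_code == 204
+```
+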
+ + + + +[invoker register body]: ../api_invoker_management/invoker_register_body.json "Invoker Register Body" +[invoker onboard request body]: ../api_invoker_management/invoker_details_post_example.json "API Invoker Request" +[event subscription request body]: ./event_subscription.json "Event Subscription Request" +[invoker onboarding]: ../common_operations/README.md#register-an-invoker "Invoker Onboarding" + + +[Return To All Test Plans]: ../README.md diff --git a/docs/test_plan/api_events_service/event_subscription.json b/docs/test_plan/api_events_service/event_subscription.json new file mode 100644 index 0000000000000000000000000000000000000000..40dc09bb1ca5236fa9cb23ff1a25ad5dccd28844 --- /dev/null +++ b/docs/test_plan/api_events_service/event_subscription.json @@ -0,0 +1,31 @@ +{ + "eventFilters": [ + { + "aefIds": ["aefIds", "aefIds"], + "apiIds": ["apiIds", "apiIds"], + "apiInvokerIds": ["apiInvokerIds", "apiInvokerIds"] + }, + { + "aefIds": ["aefIds", "aefIds"], + "apiIds": ["apiIds", "apiIds"], + "apiInvokerIds": ["apiInvokerIds", "apiInvokerIds"] + } + ], + "eventReq": { + "grpRepTime": 5, + "immRep": true, + "maxReportNbr": 0, + "monDur": "2000-01-23T04:56:07+00:00", + "partitionCriteria": ["string1", "string2"], + "repPeriod": 6, + "sampRatio": 15 + }, + "events": ["SERVICE_API_AVAILABLE", "API_INVOKER_ONBOARDED"], + "notificationDestination": "http://robot.testing", + "requestTestNotification": true, + "supportedFeatures": "aaa", + "websockNotifConfig": { + "requestWebsocketUri": true, + "websocketUri": "websocketUri" + } +} diff --git a/docs/test_plan/api_invoker_management/README.md b/docs/test_plan/api_invoker_management/README.md new file mode 100644 index 0000000000000000000000000000000000000000..9846c960d9e8fda5d84676a2242f820aa54f53cf --- /dev/null +++ b/docs/test_plan/api_invoker_management/README.md @@ -0,0 +1,306 @@ +[**[Return To All Test Plans]**] + +- [Test Plan for CAPIF Api Invoker Management](#test-plan-for-capif-api-invoker-management) +- [Tests](#tests) + - [Test Case 1: Onboard NetApp](#test-case-1-onboard-netapp) + - [Test Case 2: Onboard NetApp Already onboarded](#test-case-2-onboard-netapp-already-onboarded) + - [Test Case 3: Update Onboarded NetApp](#test-case-3-update-onboarded-netapp) + - [Test Case 4: Update Not Onboarded NetApp](#test-case-4-update-not-onboarded-netapp) + - [Test Case 5: Offboard NetApp](#test-case-5-offboard-netapp) + - [Test Case 6: Offboard Not previsouly Onboarded NetApp](#test-case-6-offboard-not-previsouly-onboarded-netapp) + - [Test Case 7: Update Onboarded NetApp Certificate](#test-case-7-update-onboarded-netapp-certificate) + + +# Test Plan for CAPIF Api Invoker Management +At this documentation you will have all information and related files and examples of test plan for this API. + +# Tests + +## Test Case 1: Onboard NetApp +* **Test ID**: ***capif_api_invoker_management-1*** +* **Description**: + + This test will try to register new NetApp at CAPIF Core. +* **Pre-Conditions**: + + * NetApp was not registered previously + * NetApp was not onboarded previously + +* **Information of Test**: + + 1. Create public and private key at invoker + + 2. Register of Invoker at CCF: + * Send POST to *http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register* + * body [invoker register body] + + 3. Obtain Access Token: + * Send POST to *http://{CAPIF_HOSTNAME}/getauth* + * Body [invoker getauth body] + + 4. 
Onboard Invoker: + * Send POST to *https://{CAPIF_HOSTNAME}/api-invoker-management/v1/onboardedInvokers* + * Reference Request Body: [invoker onboarding body] + * "onboardingInformation"->"apiInvokerPublicKey": must contain public key generated by Invoker. + * Send at Authorization Header the Bearer access_token obtained previously (Authorization:Bearer ${access_token}) + +* **Execution Steps**: + 1. Register Invoker at CCF + 2. Onboard Invoker at CCF + 3. Store signed Certificate + +* **Expected Result**: + 1. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + + +## Test Case 2: Onboard NetApp Already onboarded + +* **Test ID**: ***capif_api_invoker_management-2*** +* **Description**: + + This test will check second onboard of same NetApp is not allowed. + +* **Pre-Conditions**: + + * NetApp was registered previously + * NetApp was onboarded previously + +* **Information of Test**: + + 1. Perform [Invoker Onboarding] + + 2. Repeat Onboard Invoker: + * Send POST to *https://{CAPIF_HOSTNAME}/api-invoker-management/v1/onboardedInvokers* + * Reference Request Body: [invoker onboarding body] + * "onboardingInformation"->"apiInvokerPublicKey": must contain public key generated by Invoker. + * Send at Authorization Header the Bearer access_token obtained previously (Authorization:Bearer ${access_token}) + +* **Execution Steps**: + 1. Register NetApp at CCF + 2. Onboard NetApp at CCF + 3. Store signed Certificate at NetApp + 4. Onboard Again the NetApp at CCF + +* **Expected Result**: + 1. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + 2. Response to Second Onboard of NetApp must accomplish: + 1. **403 Forbidden** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 403 + * title with message "Forbidden" + * detail with message "Invoker Already registered". + * cause with message "Identical invoker public key". + + +## Test Case 3: Update Onboarded NetApp +* **Test ID**: ***capif_api_invoker_management-3*** +* **Description**: + + This test will try to update information of previous onboard NetApp at CAPIF Core. +* **Pre-Conditions**: + + * NetApp was registered previously + * NetApp was onboarded previously with {onboardingId} + +* **Information of Test**: + + 1. Perform [Invoker Onboarding] + + 2. Update information of previously onboarded Invoker: + * Send PUT to *https://{CAPIF_HOSTNAME}/api-invoker-management/v1/onboardedInvokers/{onboardingId}* + * Reference Request Body is: [put invoker onboarding body] + * "notificationDestination": "*http://host.docker.internal:8086/netapp_new_callback*", + +* **Execution Steps**: + + 1. Register Invoker at CCF + 2. Onboard Invoker at CCF + 3. Store signed Certificate + 4. 
Update Onboarding Information at CCF with a minor change on "notificationDestination" + +* **Expected Result**: + + 1. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + 2. Response to Update Request (PUT) with minor change must contain: + 1. **200 OK** response. + 2. notificationDestination on response must contain the new value + + +## Test Case 4: Update Not Onboarded NetApp +* **Test ID**: ***capif_api_invoker_management-4*** +* **Description**: + + This test will try to update information of not onboarded NetApp at CAPIF Core. +* **Pre-Conditions**: + + * NetApp was registered previously + * NetApp was not onboarded previously + +* **Information of Test**: + + 1. Perform [Invoker Onboarding] + + 2. Update information of not onboarded Invoker: + * Send PUT to *https://{CAPIF_HOSTNAME}/api-invoker-management/v1/onboardedInvokers/{INVOKER_NOT_REGISTERED}* + * Reference Request Body is: [put invoker onboarding body] + +* **Execution Steps**: + + 1. Register Invoker at CCF + 2. Onboard Invoker at CCF + 3. Update Onboarding Information at CCF of not onboarded + +* **Expected Result**: + + 1. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response to Update Request (PUT) must contain: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found" + * detail with message "Please provide an existing Netapp ID". + * cause with message "Not exist NetappID". + + + +## Test Case 5: Offboard NetApp +* **Test ID**: ***capif_api_invoker_management-5*** +* **Description**: + + This test case will check that a Registered NetApp can be deleted. +* **Pre-Conditions**: + + * NetApp was registered previously + * NetApp was onboarded previously + +* **Information of Test**: + + 1. Perform [Invoker Onboarding] + + 2. Offboard: + * Send Delete to *https://{CAPIF_HOSTNAME}/api-invoker-management/v1/onboardedInvokers/{onboardingId}* + +* **Execution Steps**: + + 1. Register Invoker at CCF + 2. Onboard Invoker at CCF + 3. Offboard Invoker at CCF + +* **Expected Result**: + + 1. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response to Offboard Request (DELETE) must contain: + 1. **204 No Content** + + +## Test Case 6: Offboard Not previsouly Onboarded NetApp +* **Test ID**: ***capif_api_invoker_management-6*** +* **Description**: + + This test case will check that a Non-Registered NetApp cannot be deleted +* **Pre-Conditions**: + + * NetApp was registered previously + * NetApp was not onboarded previously + +* **Information of Test**: + + 1. Perform [Invoker Onboarding] + + 2. Offboard: + * Send Delete to *https://{CAPIF_HOSTNAME}/api-invoker-management/v1/onboardedInvokers/{INVOKER_NOT_REGISTERED}* + +* **Execution Steps**: + + 1. Register Invoker at CCF + 2. Offboard Invoker at CCF + +* **Expected Result**: + + 1. Response to Offboard Request (DELETE) must contain: + 1. **404 Not Found** + 2. 
Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found" + * detail with message "Please provide an existing Netapp ID". + * cause with message "Not exist NetappID". + +## Test Case 7: Update Onboarded NetApp Certificate +* **Test ID**: ***capif_api_invoker_management-7*** +* **Description**: + + This test will try to update public key and get a new signed certificate by CAPIF Core. +* **Pre-Conditions**: + + * NetApp was registered previously + * NetApp was onboarded previously with {onboardingId} and {public_key_1} + +* **Information of Test**: + + 1. Perform [Invoker Onboarding] with public_key_1. + + 2. Create {public_key_2} + + 3. Update information of previously onboarded Invoker: + * Send PUT to *https://{CAPIF_HOSTNAME}/api-invoker-management/v1/onboardedInvokers/{onboardingId}* + * Reference Request Body is: [put invoker onboarding body] + * ["onboardingInformation"]["apiInvokerPublicKey"]: {public_key_2}, + * Store new certificate. + + 4. Update information of previously onboarded Invoker Using new certificate: + * Send PUT to *https://{CAPIF_HOSTNAME}/api-invoker-management/v1/onboardedInvokers/{onboardingId}* + * Reference Request Body is: [put invoker onboarding body] + * "notificationDestination": "*http://host.docker.internal:8086/netapp_new_callback*", + * Use new invoker certificate + +* **Execution Steps**: + + 1. Register Invoker at CCF + 2. Onboard Invoker at CCF + 3. Store signed Certificate + 4. Update Onboarding Information at CCF with new public key + 5. Update Onboarding Information at CCF with minor change + +* **Expected Result**: + + 1. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + 2. Response to Update Request (PUT) with new public key: + 1. **200 OK** response. + 2. apiInvokerCertificate with new certificate on response -> store to use. + 3. Response to Update Request (PUT) with minor change must contain: + 1. **200 OK** response. + 2. 
notificationDestination on response must contain the new value + + + + +[invoker onboarding body]: ./invoker_details_post_example.json "API Invoker Request" +[invoker register body]: ./invoker_register_body.json "Invoker Register Body" +[put register body]: ./invoker_details_put_example.json "API Invoker Update Request" +[invoker getauth body]: ./invoker_getauth_example.json "Get Auth Example" + +[invoker onboarding]: ../common_operations/README.md#register-an-invoker "Invoker Onboarding" + +[Return To All Test Plans]: ../README.md \ No newline at end of file diff --git a/docs/test_plan/api_invoker_management/invoker_details_post_example.json b/docs/test_plan/api_invoker_management/invoker_details_post_example.json new file mode 100644 index 0000000000000000000000000000000000000000..c306a17e2d04f55da35a5b3638775af9d63e769f --- /dev/null +++ b/docs/test_plan/api_invoker_management/invoker_details_post_example.json @@ -0,0 +1,15 @@ +{ + "notificationDestination": "http://host.docker.internal:8086/netapp_callback", + "supportedFeatures": "fffffff", + "apiInvokerInformation": "ROBOT_TESTING_INVOKER", + "websockNotifConfig": { + "requestWebsocketUri": true, + "websocketUri": "websocketUri" + }, + "onboardingInformation": { + "apiInvokerPublicKey": "{PUBLIC_KEY}", + "onboardingSecret": "onboardingSecret", + "apiInvokerCertificate": "apiInvokerCertificate" + }, + "requestTestNotification": true +} diff --git a/docs/test_plan/api_invoker_management/invoker_details_put_example.json b/docs/test_plan/api_invoker_management/invoker_details_put_example.json new file mode 100644 index 0000000000000000000000000000000000000000..37a1eefbb05a2df1058b20429477cbf17f412cb8 --- /dev/null +++ b/docs/test_plan/api_invoker_management/invoker_details_put_example.json @@ -0,0 +1,393 @@ +{ + "notificationDestination": "http://host.docker.internal:8086/netapp_new_callback", + "supportedFeatures": "fffffff", + "apiInvokerInformation": "ROBOT_TESTING_INVOKER", + "websockNotifConfig": { + "requestWebsocketUri": true, + "websocketUri": "websocketUri" + }, + "onboardingInformation": { + "apiInvokerPublicKey": "{PUBLIC_KEY}", + "onboardingSecret": "onboardingSecret", + "apiInvokerCertificate": "apiInvokerCertificate" + }, + "requestTestNotification": true, + "apiList": [ + { + "serviceAPICategory": "serviceAPICategory", + "ccfId": "ccfId", + "apiName": "apiName", + "shareableInfo": { + "capifProvDoms": ["capifProvDoms", "capifProvDoms"], + "isShareable": true + }, + "supportedFeatures": "fffffff", + "description": "description", + "apiSuppFeats": "fffffff", + "apiId": "apiId", + "aefProfiles": [ + { + "securityMethods": ["PSK"], + "versions": [ + { + "apiVersion": "apiVersion", + "resources": [ + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + } + ], + "custOperations": [ + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + } + ], + "expiry": "2000-01-23T04:56:07.000+00:00" + }, + { + "apiVersion": "apiVersion", + "resources": [ + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + 
"custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + } + ], + "custOperations": [ + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + } + ], + "expiry": "2000-01-23T04:56:07.000+00:00" + } + ], + "aefId": "aefId", + "interfaceDescriptions": [ + { + "securityMethods": ["PSK"], + "port": 5248, + "ipv4Addr": "ipv4Addr" + }, + { "securityMethods": ["PSK"], "port": 5248, "ipv4Addr": "ipv4Addr" } + ] + }, + { + "securityMethods": ["PSK"], + "versions": [ + { + "apiVersion": "apiVersion", + "resources": [ + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + } + ], + "custOperations": [ + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + } + ], + "expiry": "2000-01-23T04:56:07.000+00:00" + }, + { + "apiVersion": "apiVersion", + "resources": [ + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + } + ], + "custOperations": [ + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + } + ], + "expiry": "2000-01-23T04:56:07.000+00:00" + } + ], + "aefId": "aefId", + "interfaceDescriptions": [ + { + "securityMethods": ["PSK"], + "port": 5248, + "ipv4Addr": "ipv4Addr" + }, + { "securityMethods": ["PSK"], "port": 5248, "ipv4Addr": "ipv4Addr" } + ] + } + ], + "pubApiPath": { "ccfIds": ["ccfIds", "ccfIds"] } + }, + { + "serviceAPICategory": "serviceAPICategory", + "ccfId": "ccfId", + "apiName": "apiName2", + "shareableInfo": { + "capifProvDoms": ["capifProvDoms", "capifProvDoms"], + "isShareable": true + }, + "supportedFeatures": "fffffff", + "description": "description", + "apiSuppFeats": "fffffff", + "apiId": "apiId", + "aefProfiles": [ + { + "securityMethods": ["PSK"], + "versions": [ + { + "apiVersion": "apiVersion", + "resources": [ + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + } + ], + "custOperations": [ + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": 
"REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + } + ], + "expiry": "2000-01-23T04:56:07.000+00:00" + }, + { + "apiVersion": "apiVersion", + "resources": [ + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + } + ], + "custOperations": [ + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + } + ], + "expiry": "2000-01-23T04:56:07.000+00:00" + } + ], + "aefId": "aefId", + "interfaceDescriptions": [ + { + "securityMethods": ["PSK"], + "port": 5248, + "ipv4Addr": "ipv4Addr" + }, + { "securityMethods": ["PSK"], "port": 5248, "ipv4Addr": "ipv4Addr" } + ] + }, + { + "securityMethods": ["PSK"], + "versions": [ + { + "apiVersion": "apiVersion", + "resources": [ + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + } + ], + "custOperations": [ + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + } + ], + "expiry": "2000-01-23T04:56:07.000+00:00" + }, + { + "apiVersion": "apiVersion", + "resources": [ + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "resourceName": "resourceName", + "custOpName": "custOpName", + "uri": "uri", + "commType": "REQUEST_RESPONSE" + } + ], + "custOperations": [ + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + }, + { + "operations": ["GET"], + "description": "description", + "custOpName": "custOpName", + "commType": "REQUEST_RESPONSE" + } + ], + "expiry": "2000-01-23T04:56:07.000+00:00" + } + ], + "aefId": "aefId", + "interfaceDescriptions": [ + { + "securityMethods": ["PSK"], + "port": 5248, + "ipv4Addr": "ipv4Addr" + }, + { "securityMethods": ["PSK"], "port": 5248, "ipv4Addr": "ipv4Addr" } + ] + } + ], + "pubApiPath": { "ccfIds": ["ccfIds", "ccfIds"] } + } + ] +} diff --git a/docs/test_plan/api_invoker_management/invoker_getauth_example.json b/docs/test_plan/api_invoker_management/invoker_getauth_example.json new file mode 100644 index 0000000000000000000000000000000000000000..a66dad58adb1894b70b802193164301a429abdc1 --- /dev/null +++ b/docs/test_plan/api_invoker_management/invoker_getauth_example.json @@ -0,0 +1,4 @@ +{ + "username": "ROBOT_TESTING_INVOKER", + "password": "password" +} diff --git a/docs/test_plan/api_invoker_management/invoker_register_body.json 
b/docs/test_plan/api_invoker_management/invoker_register_body.json new file mode 100644 index 0000000000000000000000000000000000000000..e5bf1fc5b89682c56416c62530a95a5a86037885 --- /dev/null +++ b/docs/test_plan/api_invoker_management/invoker_register_body.json @@ -0,0 +1,7 @@ +{ + "password": "password", + "username": "ROBOT_TESTING_INVOKER", + "role": "invoker", + "description": "Testing", + "cn": "ROBOT_TESTING_INVOKER" +} diff --git a/docs/test_plan/api_logging_service/README.md b/docs/test_plan/api_logging_service/README.md new file mode 100644 index 0000000000000000000000000000000000000000..913a652ba779bc66dcae07c64502a5312bf3fdce --- /dev/null +++ b/docs/test_plan/api_logging_service/README.md @@ -0,0 +1,241 @@ +[**[Return To All Test Plans]**] + +- [Test Plan for CAPIF Api Logging Service](#test-plan-for-capif-api-logging-service) +- [Tests](#tests) + - [Test Case 1: Creates a new individual CAPIF Log Entry.](#test-case-1-creates-a-new-individual-capif-log-entry) + - [Test Case 2: Creates a new individual CAPIF Log Entry with Invalid aefID](#test-case-2-creates-a-new-individual-capif-log-entry-with-invalid-aefid) + - [Test Case 3: Creates a new individual CAPIF Log Entry with Invalid serviceAPI](#test-case-3-creates-a-new-individual-capif-log-entry-with-invalid-serviceapi) + - [Test Case 4: Creates a new individual CAPIF Log Entry with Invalid apiInvokerId](#test-case-4-creates-a-new-individual-capif-log-entry-with-invalid-apiinvokerid) + + - [Test Case 5: Creates a new individual CAPIF Log Entry with differnted aef_id in body and request](#test-case-5-creates-a-new-individual-capif-log-entry-with-invalid-aefid-in-body) + + +# Test Plan for CAPIF Api Logging Service +At this documentation you will have all information and related files and examples of test plan for this API. + +# Tests + +## Test Case 1: Creates a new individual CAPIF Log Entry. +* Test ID: ***capif_api_logging-1*** +* Description: + + This test case will check that a CAPIF AEF can create log entry to Logging Service +* Pre-Conditions: + + * CAPIF provider is pre-authorised (has valid aefId from CAPIF Authority) + * Service exist in CAPIF + * Invoker exist in CAPIF + +* Information of Test: + + 1. Perform [provider onboarding] and [invoker onboarding] + + 2. Publish Service API at CCF: + - Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + - body [service api description] with apiName service_1 + - Use APF Certificate + + 3. Log Entry: + 1. Send POST to *https://{CAPIF_HOSTNAME}/api-invocation-logs/v1/{aefId}/logs* + 2. body [log entry request body] + 3. Use AEF Certificate + +* Execution Steps: + 1. Register Provider and Invoker CCF + 2. Publish Service + 3. Create Log Entry + +* Expected Result: + + 1. Response to Logging Service must accomplish: + 1. **201 Created** + 2. Response Body must follow **InvocationLog** data structure with: + * aefId + * apiInvokerId + * logs + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invocation-logs/v1/{aefId}/logs/{logId}* + + + + +## Test Case 2: Creates a new individual CAPIF Log Entry with Invalid aefId +* Test ID: ***capif_api_logging-2*** +* Description: + + This test case will check that a CAPIF subscriber (AEF) cannot create Log Entry without valid aefId +* Pre-Conditions: + + * CAPIF provider is not pre-authorised (has not valid aefId from CAPIF Authority) + * Service exist in CAPIF + * Invoker exist in CAPIF + +* Information of Test: + + 1. 
Perform [provider onboarding] and [invoker onboarding] + + 2. Publish Service API at CCF: + - Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + - body [service api description] with apiName service_1 + - Use APF Certificate + + 3. Log Entry: + 1. Send POST to *https://{CAPIF_HOSTNAME}/api-invocation-logs/v1/{not-valid-aefId}/logs* + 2. body [log entry request body] + 3. Use AEF Certificate + +* Execution Steps: + 1. Register Provider and Invoker CCF + 2. Publish Service + 3. Create Log Entry + +* Expected Result: + + 1. Response to Logging Service must accomplish: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found" + * detail with message "Exposer not exist". + * cause with message "Exposer id not found". + +## Test Case 3: Creates a new individual CAPIF Log Entry with Invalid serviceAPI +* Test ID: ***capif_api_logging-3*** +* Description: + + This test case will check that a CAPIF subscriber (AEF) cannot create Log Entry without valid aefId +* Pre-Conditions: + + * CAPIF subscriber is pre-authorised (has valid aefId from CAPIF Authority) + +* Information of Test: + + 1. Perform [provider onboarding] and [invoker onboarding] + + 2. Publish Service API at CCF: + - Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + - body [service api description] with apiName service_1 + - Use APF Certificate + + 3. Log Entry: + 1. Send POST to *https://{CAPIF_HOSTNAME}/api-invocation-logs/v1/{aefId}/logs* + 2. body [log entry request body with serviceAPI apiName apiId not valid] + 3. Use AEF Certificate + +* Execution Steps: + 1. Register Provider and Invoker CCF + 2. Publish Service + 3. Create Log Entry + +* Expected Result: + + 1. Response to Logging Service must accomplish: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found" + * detail with message "Invoker not exist". + * cause with message "Invoker id not found". + + + +## Test Case 4: Creates a new individual CAPIF Log Entry with Invalid apiInvokerId +* Test ID: ***capif_api_logging-4*** +* Description: + + This test case will check that a CAPIF subscriber (AEF) cannot create Log Entry without valid aefId +* Pre-Conditions: + + * CAPIF subscriber is pre-authorised (has valid aefId from CAPIF Authority) + +* Information of Test: + + 1. Perform [provider onboarding] and [invoker onboarding] + + 2. Publish Service API at CCF: + - Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + - body [service api description] with apiName service_1 + - Use APF Certificate + + 3. Log Entry: + 1. Send POST to *https://{CAPIF_HOSTNAME}/api-invocation-logs/v1/{aefId}/logs* + 2. body [log entry request body with invokerId not valid] + 3. Use AEF Certificate + +* Execution Steps: + 1. Register Provider and Invoker CCF + 2. Publish Service + 3. Create Log Entry + +* Expected Result: + + 1. Response to Onboard request must accomplish: + 1. **201 Created** response. + 2. body returned must accomplish **APIProviderEnrolmentDetails** data structure. + 3. For each **apiProvFuncs**, we must check: + 1. **apiProvFuncId** is set + 2. **apiProvCert** under **regInfo** is set properly + 5. Location Header must contain the new resource URL *{apiRoot}/api-provider-management/v1/registrations/{registrationId}* + + 2. 
Response to Logging Service must accomplish: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found" + * detail with message "Invoker not exist". + * cause with message "Invoker id not found". + + 3. Log Entry are not stored in CAPIF Database + + +## Test Case 5: Creates a new individual CAPIF Log Entry with Invalid aefId in body +* Test ID: ***capif_api_logging-5*** +* Description: + + This test case will check that a CAPIF subscriber (AEF) cannot create Log Entry without valid aefId in body +* Pre-Conditions: + + * CAPIF provider is pre-authorised (has valid apfId from CAPIF Authority) + * Service exist in CAPIF + * Invoker exist in CAPIF + +* Information of Test: + + 1. Perform [provider onboarding] and [invoker onboarding] + + 2. Publish Service API at CCF: + - Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + - body [service api description] with apiName service_1 + - Use APF Certificate + + 3. Log Entry: + 1. Send POST to *https://{CAPIF_HOSTNAME}/api-invocation-logs/v1/{aefId}/logs* + 2. body [log entry request body with bad aefId] + 3. Use AEF Certificate + +* Execution Steps: + 1. Register Provider and Invoker CCF + 2. Publish Service + 3. Create Log Entry + +* Expected Result: + + 1. Response to Logging Service must accomplish: + 1. **401 Unauthorized** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 401 + * title with message "Unauthorized" + * detail with message "AEF id not matching in request and body". + * cause with message "Not identical AEF id". + + + + + + +[log entry request body]: ./invocation_log.json "Log Request Body" + +[invoker onboarding]: ../common_operations/README.md#register-an-invoker "Invoker Onboarding" + +[provider onboarding]: ../common_operations/README.md#register-a-provider "Provider Onboarding" + +[Return To All Test Plans]: ../README.md diff --git a/docs/test_plan/api_logging_service/invocation_log.json b/docs/test_plan/api_logging_service/invocation_log.json new file mode 100644 index 0000000000000000000000000000000000000000..ceabcf02016e116566cd2b13c99a87a6adcef5d3 --- /dev/null +++ b/docs/test_plan/api_logging_service/invocation_log.json @@ -0,0 +1,45 @@ +{ + "aefId": "aefId", + "apiInvokerId": "apiInvokerId", + "logs": [ + { + "apiId": "apiId", + "apiName": "apiName", + "apiVersion": "string", + "resourceName": "string", + "uri": "string", + "protocol": "HTTP_1_1", + "operation": "GET", + "result": "string", + "invocationTime": "2023-03-30T10:30:21.404Z", + "invocationLatency": 0, + "inputParameters": "string", + "outputParameters": "string", + "srcInterface": { + "ipv4Addr": "string", + "ipv6Addr": "string", + "fqdn": "string", + "port": 65535, + "apiPrefix": "string", + "securityMethods": [ + "PSK", + "Oauth" + ] + }, + "destInterface": { + "ipv4Addr": "string", + "ipv6Addr": "string", + "fqdn": "string", + "port": 65535, + "apiPrefix": "string", + "securityMethods": [ + "PSK", + "string" + ] + }, + "fwdInterface": "string" + } + ], + "supportedFeatures": "string" + } + \ No newline at end of file diff --git a/docs/test_plan/api_provider_management/README.md b/docs/test_plan/api_provider_management/README.md new file mode 100644 index 0000000000000000000000000000000000000000..547d654d69308c7dbbcc2b89f168df911e9e75dc --- /dev/null +++ b/docs/test_plan/api_provider_management/README.md @@ -0,0 +1,398 @@ +[**[Return To All Test Plans]**] + +- [Test Plan 
for CAPIF Api Provider Management](#test-plan-for-capif-api-provider-management) +- [Tests](#tests) + - [Test Case 1: Register Api Provider](#test-case-1-register-api-provider) + - [Test Case 2: Register Api Provider Already registered](#test-case-2-register-api-provider-already-registered) + - [Test Case 3: Update Registered Api Provider](#test-case-3-update-registered-api-provider) + - [Test Case 4: Update Not Registered Api Provider](#test-case-4-update-not-registered-api-provider) + - [Test Case 5: Partially Update Registered Api Provider](#test-case-5-partially-update-registered-api-provider) + - [Test Case 6: Partially Update Not Registered Api Provider](#test-case-6-partially-update-not-registered-api-provider) + - [Test Case 7: Delete Registered Api Provider](#test-case-7-delete-registered-api-provider) + - [Test Case 8: Delete Not Registered Api Provider](#test-case-8-delete-not-registered-api-provider) + + +# Test Plan for CAPIF Api Provider Management +At this documentation you will have all information and related files and examples of test plan for this API. + +# Tests + +## Test Case 1: Register Api Provider +* **Test ID**: ***capif_api_provider_management-1*** +* **Description**: + + This test case will check that Api Provider can be registered con CCF +* **Pre-Conditions**: + + * Provider is pre-authorised (has valid certificate from CAPIF Authority) + +* **Information of Test**: + + 1. Create public and private key at provider for provider itself and each function (apf, aef and amf) + + 2. Register of Provider at CCF: + * Send POST to *http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register* + * body [provider register body] + + 3. Obtain Access Token: + * Send POST to *http://{CAPIF_HOSTNAME}/getauth* + * Body [provider getauth body] + + 4. Register Provider: + * Send POST *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations* + * body [provider request body] + * Authentication Bearer with access_token + * Store each cert in a file with according name. + +* **Execution Steps**: + + 1. Create private and public key for provider and each function to register. + 2. Register Provider. + +* **Expected Result**: + + 1. Register Provider at Provider Management: + 1. **201 Created** response. + 2. body returned must accomplish **APIProviderEnrolmentDetails** data structure. + 3. For each **apiProvFuncs**, we must check: + 1. **apiProvFuncId** is set + 2. **apiProvCert** under **regInfo** is set properly + 5. Location Header must contain the new resource URL *{apiRoot}/api-provider-management/v1/registrations/{registrationId}* + +## Test Case 2: Register Api Provider Already registered +* **Test ID**: ***capif_api_provider_management-2*** +* **Description**: + + This test case will check that a Api Provider previously registered cannot be re-registered +* **Pre-Conditions**: + + * Api Provider was registered previously and there is a {registerId} for his Api Provider in the DB + +* **Information of Test**: + + 1. Create public and private key at provider for provider itself and each function (apf, aef and amf) + + 2. Register of Provider at CCF: + * Send POST to *http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register* + * body [provider register body] + + 3. Obtain Access Token: + * Send POST to *http://{CAPIF_HOSTNAME}/getauth* + * Body [provider getauth body] + + 4. 
Register Provider: + * Send POST *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations* + * body [provider request body] + * Authentication Bearer with access_token + * Store each cert in a file with according name. + + 5. Re-Register Provider: + * Same regSec than Previous registration + +* **Execution Steps**: + + 1. Create private and public key for provider and each function to register. + 2. Register Provider. + 3. Re-Register Provider. + +* **Expected Result**: + + 1. Re-Register Provider: + 1. **403 Forbidden** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status 403 + * title with message "Forbidden" + * detail with message "Provider already registered". + * cause with message "Identical provider reg sec". + +## Test Case 3: Update Registered Api Provider +* **Test ID**: ***capif_api_provider_management-3*** +* **Description**: + + This test case will check that a Registered Api Provider can be updated +* **Pre-Conditions**: + + * Api Provider was registered previously and there is a {registerId} for his Api Provider in the DB + +* **Information of Test**: + + 1. Create public and private key at provider for provider itself and each function (apf, aef and amf) + + 2. Register of Provider at CCF: + * Send POST to *http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register* + * body [provider register body] + + 3. Obtain Access Token: + * Send POST to *http://{CAPIF_HOSTNAME}/getauth* + * Body [provider getauth body] + + 4. Register Provider: + * Send POST *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations* + * body [provider request body] + * Authentication Bearer with access_token + * Get Resource URL from Location + + 5. Update Provider: + * Send PUT to Resource URL returned at registration *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations/{registrationId}* + * body [provider request body] with apiProvDomInfo set to ROBOT_TESTING_MOD + * Use AMF Certificate. + + +* **Execution Steps**: + + 1. Create private and public key for provider and each function to register. + 2. Register Provider + 3. Update Provider + +* **Expected Result**: + 1. Register Provider: + 1. **201 Created** response. + 2. body returned must accomplish **APIProviderEnrolmentDetails** data structure. + 3. Location Header must contain the new resource URL *{apiRoot}/api-provider-management/v1/registrations/{registrationId}* + + + 2. Update Provider: + 1. **200 OK** response. + 2. body returned must accomplish **APIProviderEnrolmentDetails** data structure, with: + * apiProvDomInfo set to ROBOT_TESTING_MOD + + +## Test Case 4: Update Not Registered Api Provider +* **Test ID**: ***capif_api_provider_management-4*** +* **Description**: + + This test case will check that a Non-Registered Api Provider cannot be updated +* **Pre-Conditions**: + + * Api Provider was not registered previously + +* **Information of Test**: + + 1. Create public and private key at provider for provider itself and each function (apf, aef and amf) + + 2. Register of Provider at CCF: + * Send POST to *http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register* + * body [provider register body] + + 3. Obtain Access Token: + * Send POST to *http://{CAPIF_HOSTNAME}/getauth* + * Body [provider getauth body] + + 4. Register Provider: + * Send POST *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations* + * body [provider request body] + * Authentication Bearer with access_token + * Store each cert in a file with according name. + + 5. 
Update Not Registered Provider: + * Send PUT *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations/{API_PROVIDER_NOT_REGISTERED}* + * body [provider request body] + * Use AMF Certificate. + +* **Execution Steps**: + + 1. Register Provider at CCF + 3. Update Not Registered Provider + +* **Expected Result**: + + 1. Update Not Registered Provider: + 1. **404 Not Found** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status 404 + * title with message "Not Found" + * detail with message "Not Exist Provider Enrolment Details". + * cause with message "Not found registrations to send this api provider details". + +## Test Case 5: Partially Update Registered Api Provider +* **Test ID**: ***capif_api_provider_management-5*** +* **Description**: + + This test case will check that a Registered Api Provider can be partially updated +* **Pre-Conditions**: + + * Api Provider was registered previously and there is a {registerId} for his Api Provider in the DB + +* **Information of Test**: + + 1. Create public and private key at provider for provider itself and each function (apf, aef and amf) + + 2. Register of Provider at CCF: + * Send POST to *http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register* + * body [provider register body] + + 3. Obtain Access Token: + * Send POST to *http://{CAPIF_HOSTNAME}/getauth* + * Body [provider getauth body] + + 4. Register Provider: + * Send POST *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations* + * body [provider request body] + * Authentication Bearer with access_token + * Store each cert in a file with according name. + + 5. Partial update provider: + * Send PATCH *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations/{registrationId}* + * body [provider request patch body] + * Use AMF Certificate. + +* **Execution Steps**: + + 1. Register Provider at CCF + 2. Register Provider + 3. Partial update provider + +* **Expected Result**: + + 1. Partial update provider at Provider Management: + 1. **200 OK** response. + 2. body returned must accomplish **APIProviderEnrolmentDetails** data structure, with: + * apiProvDomInfo with "ROBOT_TESTING_MOD" + +## Test Case 6: Partially Update Not Registered Api Provider +* **Test ID**: ***capif_api_provider_management-6*** +* **Description**: + + This test case will check that a Non-Registered Api Provider cannot be partially updated + +* **Pre-Conditions**: + + * Api Provider was not registered previously + +* **Information of Test**: + + 1. Create public and private key at provider for provider itself and each function (apf, aef and amf) + + 2. Register of Provider at CCF: + * Send POST to *http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register* + * body [provider register body] + + 3. Obtain Access Token: + * Send POST to *http://{CAPIF_HOSTNAME}/getauth* + * Body [provider getauth body] + + 4. Register Provider: + * Send POST *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations* + * body [provider request body] + * Authentication Bearer with access_token + * Store each cert in a file with according name. + + 5. Partial update Provider: + * Send PATCH *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations/{API_API_PROVIDER_NOT_REGISTERED}* + * body [provider request patch body] + * Use AMF Certificate. + + +* **Execution Steps**: + + 1. Register Provider at CCF + 2. Register Provider + 3. Partial update provider + +* **Expected Result**: + + 1. Partial update provider: + 1. **404 Not Found** response. + 2. 
body returned must accomplish **ProblemDetails** data structure, with: + * status 404 + * title with message "Not Found" + * detail with message "Not Exist Provider Enrolment Details". + * cause with message "Not found registrations to send this api provider details". + +## Test Case 7: Delete Registered Api Provider +* **Test ID**: ***capif_api_provider_management-7*** +* **Description**: + + This test case will check that a Registered Api Provider can be deleted +* **Pre-Conditions**: + + * Api Provider was registered previously + +* **Information of Test**: + + 1. Create public and private key at provider for provider itself and each function (apf, aef and amf) + + 2. Register of Provider at CCF: + * Send POST to *http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register* + * body [provider register body] + + 3. Obtain Access Token: + * Send POST to *http://{CAPIF_HOSTNAME}/getauth* + * Body [provider getauth body] + + 4. Register Provider: + * Send POST *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations* + * body [provider request body] + * Authentication Bearer with access_token + * Store each cert in a file with the corresponding name. + + 5. Delete registered provider: + * Send DELETE *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations/{registrationId}* + * Use AMF Certificate. + +* **Execution Steps**: + + 1. Register Provider at CCF + 2. Register Provider + 3. Delete Provider + +* **Expected Result**: + + 1. Delete Provider: + 1. **204 No Content** response. + +## Test Case 8: Delete Not Registered Api Provider +* **Test ID**: ***capif_api_provider_management-8*** +* **Description**: + + This test case will check that a Non-Registered Api Provider cannot be deleted +* **Pre-Conditions**: + + * Api Provider was not registered previously + +* **Information of Test**: + + 1. Create public and private key at provider for provider itself and each function (apf, aef and amf) + + 2. Register of Provider at CCF: + * Send POST to *http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register* + * body [provider register body] + + 3. Obtain Access Token: + * Send POST to *http://{CAPIF_HOSTNAME}/getauth* + * Body [provider getauth body] + + 4. Register Provider: + * Send POST *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations* + * body [provider request body] + * Authentication Bearer with access_token + * Store each cert in a file with the corresponding name. + + 5. Delete not registered provider at Provider Management: + * Send DELETE *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations/{API_PROVIDER_NOT_REGISTERED}* + * Use AMF Certificate. + +* **Execution Steps**: + + 1. Register Provider at CCF + 2. Delete Provider + +* **Expected Result**: + + 1. Delete Provider: + 1. **404 Not Found** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status 404 + * title with message "Not Found" + * detail with message "Not Exist Provider Enrolment Details". + * cause with message "Not found registrations to send this api provider details". 
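+
+All of the test cases above share the same registration flow (user register, token retrieval, provider enrolment). The sketch below illustrates that flow with the Python `requests` library; the hostname, port, CSR placeholders, certificate paths and the `access_token` response field name are illustrative assumptions, not part of the test plan.
+
+```python
+import requests
+
+CAPIF_HOSTNAME = "capif.example.org"   # placeholder, stands in for {CAPIF_HOSTNAME}
+CAPIF_HTTP_PORT = 8080                 # placeholder, stands in for {CAPIF_HTTP_PORT}
+
+# 1. Register the provider user (REST API outside the CAPIF specification).
+register_body = {
+    "username": "ROBOT_TESTING_PROVIDER",
+    "password": "password",
+    "role": "provider",
+    "description": "Testing",
+    "cn": "ROBOT_TESTING_PROVIDER",
+}
+requests.post(f"http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register", json=register_body)
+
+# 2. Obtain an access token.
+resp = requests.post(f"http://{CAPIF_HOSTNAME}/getauth",
+                     json={"username": "ROBOT_TESTING_PROVIDER", "password": "password"})
+access_token = resp.json()["access_token"]  # assumed response field name
+
+# 3. Enrol the provider: APIProviderEnrolmentDetails with one CSR per function.
+enrolment_body = {
+    "regSec": "<registration security value>",
+    "apiProvFuncs": [
+        {"regInfo": {"apiProvPubKey": "<APF CSR>"}, "apiProvFuncRole": "APF"},
+        {"regInfo": {"apiProvPubKey": "<AEF CSR>"}, "apiProvFuncRole": "AEF"},
+        {"regInfo": {"apiProvPubKey": "<AMF CSR>"}, "apiProvFuncRole": "AMF"},
+    ],
+    "apiProvDomInfo": "ROBOT_TESTING",
+}
+resp = requests.post(
+    f"https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations",
+    json=enrolment_body,
+    headers={"Authorization": f"Bearer {access_token}"},
+    verify="ca.crt",  # CAPIF CA bundle, placeholder path
+)
+# Expect 201 Created, one signed certificate per function in the body and the
+# new resource URL in the Location header.
+print(resp.status_code, resp.headers.get("Location"))
+```
+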
+ +[provider register body]: ./provider_details_post_example.json "API Provider Enrolment Request" + +[provider request body]: ./provider_details_post_example.json "API Provider Enrolment Request" + +[provider request patch body]: ./provider_details_enrolment_details_patch_example.json "API Provider Enrolment Patch Request" + +[provider getauth body]: ./provider_getauth_example.json "Get Auth Example" + +[Return To All Test Plans]: ../README.md diff --git a/docs/test_plan/api_provider_management/provider_details_enrolment_details_patch_example.json b/docs/test_plan/api_provider_management/provider_details_enrolment_details_patch_example.json new file mode 100644 index 0000000000000000000000000000000000000000..4dac4f409eccfc7201c4eaf615b55f70c38b54e4 --- /dev/null +++ b/docs/test_plan/api_provider_management/provider_details_enrolment_details_patch_example.json @@ -0,0 +1,29 @@ +{ + "regSec": "", + "apiProvFuncs": [ + { + "regInfo": { + "apiProvPubKey": "" + }, + "apiProvFuncRole": "APF", + "apiProvFuncInfo": "APF_ROBOT_TESTING_PROVIDER" + }, + { + "regInfo": { + "apiProvPubKey": "" + }, + "apiProvFuncRole": "AEF", + "apiProvFuncInfo": "AEF_ROBOT_TESTING_PROVIDER" + }, + { + "regInfo": { + "apiProvPubKey": "" + }, + "apiProvFuncRole": "AMF", + "apiProvFuncInfo": "AMF_ROBOT_TESTING_PROVIDER" + } + ], + "apiProvDomInfo": "ROBOT_TESTING", + "suppFeat": "string", + "failReason": "string" +} \ No newline at end of file diff --git a/docs/test_plan/api_provider_management/provider_details_post_example.json b/docs/test_plan/api_provider_management/provider_details_post_example.json new file mode 100644 index 0000000000000000000000000000000000000000..48e91bacf24899e55221babf610598b9d4132b61 --- /dev/null +++ b/docs/test_plan/api_provider_management/provider_details_post_example.json @@ -0,0 +1,17 @@ +{ + "regSec": "string", + "apiProvFuncs": [ + { + "apiProvFuncId": "string", + "regInfo": { + "apiProvPubKey": "string", + "apiProvCert": "string" + }, + "apiProvFuncRole": "AEF", + "apiProvFuncInfo": "string" + } + ], + "apiProvDomInfo": "string", + "suppFeat": "string", + "failReason": "string" +} \ No newline at end of file diff --git a/docs/test_plan/api_provider_management/provider_getauth_example.json b/docs/test_plan/api_provider_management/provider_getauth_example.json new file mode 100644 index 0000000000000000000000000000000000000000..8fc82aeebfc9531651fa746bf10be11c6aa347f3 --- /dev/null +++ b/docs/test_plan/api_provider_management/provider_getauth_example.json @@ -0,0 +1,4 @@ +{ + "username": "ROBOT_TESTING_PROVIDER", + "password": "password" +} diff --git a/docs/test_plan/api_provider_management/provider_register_body.json b/docs/test_plan/api_provider_management/provider_register_body.json new file mode 100644 index 0000000000000000000000000000000000000000..fc26db2141eab904b1f2f8d96e963f2ec0efcbe1 --- /dev/null +++ b/docs/test_plan/api_provider_management/provider_register_body.json @@ -0,0 +1,7 @@ +{ + "password": "password", + "username": "ROBOT_TESTING_PUBLISHER", + "role": "provider", + "description": "Testing", + "cn": "ROBOT_TESTING_PUBLISHER" +} diff --git a/docs/test_plan/api_publish_service/README.md b/docs/test_plan/api_publish_service/README.md new file mode 100644 index 0000000000000000000000000000000000000000..8487f5d5cc582ab65c393c5e437267e51b23a089 --- /dev/null +++ b/docs/test_plan/api_publish_service/README.md @@ -0,0 +1,599 @@ +[**[Return To All Test Plans]**] + +- [Test Plan for CAPIF Api Publish Service](#test-plan-for-capif-api-publish-service) +- [Tests](#tests) + - 
[Test Case 1: Publish API by Authorised API Publisher](#test-case-1-publish-api-by-authorised-api-publisher) + - [Test Case 2: Publish API by NON Authorised API Publisher](#test-case-2-publish-api-by-non-authorised-api-publisher) + - [Test Case 3: Retrieve all APIs Published by Authorised apfId](#test-case-3-retrieve-all-apis-published-by-authorised-apfid) + - [Test Case 4: Retrieve all APIs Published by NON Authorised apfId](#test-case-4-retrieve-all-apis-published-by-non-authorised-apfid) + - [Test Case 5: Retrieve single APIs Published by Authorised apfId](#test-case-5-retrieve-single-apis-published-by-authorised-apfid) + - [Test Case 6: Retrieve single APIs non Published by Authorised apfId](#test-case-6-retrieve-single-apis-non-published-by-authorised-apfid) + - [Test Case 7: Retrieve single APIs Published by NON Authorised apfId](#test-case-7-retrieve-single-apis-published-by-non-authorised-apfid) + - [Test Case 8: Update API Published by Authorised apfId with valid serviceApiId](#test-case-8-update-api-published-by-authorised-apfid-with-valid-serviceapiid) + - [Test Case 9: Update APIs Published by Authorised apfId with invalid serviceApiId](#test-case-9-update-apis-published-by-authorised-apfid-with-invalid-serviceapiid) + - [Test Case 10: Update APIs Published by NON Authorised apfId](#test-case-10-update-apis-published-by-non-authorised-apfid) + - [Test Case 11: Delete API Published by Authorised apfId with valid serviceApiId](#test-case-11-delete-api-published-by-authorised-apfid-with-valid-serviceapiid) + - [Test Case 12: Delete APIs Published by Authorised apfId with invalid serviceApiId](#test-case-12-delete-apis-published-by-authorised-apfid-with-invalid-serviceapiid) + - [Test Case 13: Delete APIs Published by NON Authorised apfId](#test-case-13-delete-apis-published-by-non-authorised-apfid) + + +# Test Plan for CAPIF Api Publish Service +At this documentation you will have all information and related files and examples of test plan for this API. + +# Tests + +## Test Case 1: Publish API by Authorised API Publisher +* **Test ID**: ***capif_api_publish_service-1*** +* **Description**: + + This test case will check that an API Publisher can Publish an API +* **Pre-Conditions**: + + * CAPIF subscriber is pre-authorised (has valid apfId from CAPIF Authority) + +* **Information of Test**: + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Use APF Certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Publish Service API + 3. Retrieve {apiId} from body and Location header with new resource created from response + +* **Expected Result**: + 1. Response to Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + + 2. 
Published Service API is stored in CAPIF Database + +## Test Case 2: Publish API by NON Authorised API Publisher +* **Test ID**: ***capif_api_publish_service-2*** +* **Description**: + + This test case will check that an API Publisher cannot Publish an API without a valid apfId +* **Pre-Conditions**: + + * CAPIF subscriber is NOT pre-authorised (has invalid apfId from CAPIF Authority) + +* **Information of Test**: + 1. Perform [Provider Registration] + + 2. Publish Service API with invalid APF ID at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{APF_ID_NOT_VALID}/service-apis* + * body [service api description] with apiName service_1 + * Use APF Certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Publish Service API with invalid APF ID + +* **Expected Result**: + 1. Response to Publish request must accomplish: + 1. **401 Unauthorized** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status **401** + * title with message "Unauthorized" + * detail with message "Publisher not existing". + * cause with message "Publisher id not found". + + 2. Service API is NOT stored in CAPIF Database + + +## Test Case 3: Retrieve all APIs Published by Authorised apfId +* **Test ID**: ***capif_api_publish_service-3*** +* **Description**: + + This test case will check that an API Publisher can Retrieve all published APIs +* **Pre-Conditions**: + + * CAPIF subscriber is pre-authorised (has valid apfId from CAPIF Authority) + * At least 2 service APIs are published. + +* **Information of Test**: + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Get apiId + * Use APF Certificate + + 3. Publish Other Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_2 + * Get apiId + * Use APF Certificate + + 4. Retrieve all published APIs: + * Send Get to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * Use APF Certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Publish Service API service_1 + 3. Retrieve {apiId1} from body and Location header with new resource created from response + 4. Publish Service API service_2 + 5. Retrieve {apiId2} from body and Location header with new resource created from response + 6. Retrieve All published APIs and check if both are present. + +* **Expected Result**: + 1. Response to service 1 Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId1}* + + 2. Response to service 2 Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId2}* + + 3. Published Service APIs are stored in CAPIF Database + + 4. Response to Retrieve all published APIs: + 1. **200 OK** + 2. 
Response body must return an array of **ServiceAPIDescription** data. + 3. Array must contain all previously published APIs. + +## Test Case 4: Retrieve all APIs Published by NON Authorised apfId +* **Test ID**: ***capif_api_publish_service-4*** +* **Description**: + + This test case will check that an API Publisher cannot Retrieve published APIs when the apfId is not authorised +* **Pre-Conditions**: + + * CAPIF subscriber is NOT pre-authorised (has invalid apfId from CAPIF Authority) + +* **Information of Test**: + 1. Perform [Provider Registration] + + 2. Retrieve all published APIs: + * Send Get to *https://{CAPIF_HOSTNAME}/published-apis/v1/{APF_ID_NOT_VALID}/service-apis* + * Use APF Certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Retrieve All published APIs + +* **Expected Result**: + 1. Response to Retrieve request must accomplish: + 1. **401 Unauthorized** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status **401** + * title with message "Unauthorized" + * detail with message "Provider not existing". + * cause with message "Provider id not found". + + 2. Service API is NOT stored in CAPIF Database + +## Test Case 5: Retrieve single APIs Published by Authorised apfId +* **Test ID**: ***capif_api_publish_service-5*** +* **Description**: + + This test case will check that an API Publisher can Retrieve published APIs one by one +* **Pre-Conditions**: + + * CAPIF subscriber is pre-authorised (has valid apfId from CAPIF Authority) + * At least 2 service APIs are published. + +* **Information of Test**: + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Get apiId + * Use APF Certificate + + 3. Publish Other Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_2 + * Get apiId + * Use APF Certificate + + 4. Retrieve service_1 published APIs detail: + * Send Get to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{apiId1}* + * Use APF Certificate + + 5. Retrieve service_2 published APIs detail: + * Send Get to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{apiId2}* + * Use APF Certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Publish Service API service_1. + 3. Retrieve {apiId1} from body and Location header with new resource created from response. + 4. Publish Service API service_2. + 5. Retrieve {apiId2} from body and Location header with new resource created from response. + 6. Retrieve service_1 API Detail. + 7. Retrieve service_2 API Detail. + +* **Expected Result**: + 1. Response to service 1 Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId1}* + + 2. Response to service 2 Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. 
Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId2}* + + 3. Published Service APIs are stored in CAPIF Database + + 4. Response to Retrieve service_1 published API using apiId1: + 1. **200 OK** + 2. Response body must return a **ServiceAPIDescription** data. + 3. Body must contain the same information as the service_1 published registration response. + + 5. Response to Retrieve service_2 published API using apiId2: + 1. **200 OK** + 2. Response body must return a **ServiceAPIDescription** data. + 3. Body must contain the same information as the service_2 published registration response. + + +## Test Case 6: Retrieve single APIs non Published by Authorised apfId +* **Test ID**: ***capif_api_publish_service-6*** +* **Description**: + + This test case will check the response when an API Publisher tries to get the detail of a non-published API. +* **Pre-Conditions**: + + * CAPIF subscriber is pre-authorised (has valid apfId from CAPIF Authority) + * No published APIs + +* **Information of Test**: + 1. Perform [Provider Registration] + 2. Retrieve not published APIs detail: + * Send Get to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{SERVICE_API_ID_NOT_VALID}* + * Use APF Certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Retrieve not published API Detail. + +* **Expected Result**: + 1. Response to Retrieve for NOT published API must accomplish: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status **404** + * title with message "Not Found" + * detail with message "Service API not found". + * cause with message "No Service with specific credentials exists". + + +## Test Case 7: Retrieve single APIs Published by NON Authorised apfId +* **Test ID**: ***capif_api_publish_service-7*** +* **Description**: + + This test case will check that an API Publisher cannot Retrieve detailed API published when apfId is not authorised +* **Pre-Conditions**: + + * CAPIF subscriber is NOT pre-authorised (has invalid apfId from CAPIF Authority) + +* **Information of Test**: + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Get apiId + * Use APF Certificate + + 3. Retrieve detailed published APIs: + * Send Get to *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{apiId}* + * Use Invoker certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Publish Service API at CCF + 3. Retrieve {apiId} from body and Location header with new resource created from response. + 4. Register and onboard Invoker at CCF + 5. Store signed Invoker Certificate + 6. Retrieve detailed published API acting as Invoker + +* **Expected Result**: + 1. Response to Retrieve Detailed published API acting as Invoker must accomplish: + 1. **401 Unauthorized** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status **401** + * title with message "Unauthorized" + * detail with message "User not authorized". + * cause with message "Certificate not authorized". + + 2. 
Service API is NOT stored in CAPIF Database + + +## Test Case 8: Update API Published by Authorised apfId with valid serviceApiId +* **Test ID**: ***capif_api_publish_service-8*** +* **Description**: + + This test case will check that an API Publisher can Update published API with a valid serviceApiId +* **Pre-Conditions**: + + * CAPIF subscriber is pre-authorised (has valid apfId from CAPIF Authority) + * A service API is published. + +* **Information of Test**: + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Get apiId + * Get resource URL from Location header. + * Use APF Certificate + + 3. Update published API at CCF: + * Send PUT to resource URL *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + * body [service api description] with apiName overridden to service_1_modified + * Use APF Certificate + + 4. Retrieve detail of service API: + * Send Get to resource URL *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + * check apiName is service_1_modified + * Use APF Certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Publish Service API + 3. Retrieve {apiId} from body and Location header with new resource url created from response + 4. Update published Service API. + 5. Retrieve detail of Service API + +* **Expected Result**: + 1. Response to Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + + 2. Response to Update Published Service API: + 1. **200 OK** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiName service_1_modified + + 3. Response to Retrieve detail of Service API: + 1. **200 OK** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiName service_1_modified. + + +## Test Case 9: Update APIs Published by Authorised apfId with invalid serviceApiId +* **Test ID**: ***capif_api_publish_service-9*** +* **Description**: + + This test case will check that an API Publisher cannot Update published API with an invalid serviceApiId +* **Pre-Conditions**: + + * CAPIF subscriber is pre-authorised (has valid apfId from CAPIF Authority) + +* **Information of Test**: + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Get apiId + * Use APF Certificate + + 3. Update published API at CCF: + * Send PUT to resource URL *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{SERVICE_API_ID_NOT_VALID}* + * body [service api description] with apiName overridden to ***service_1_modified*** + * Use APF Certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Update published Service API. + +* **Expected Result**: + 1. Response to Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. 
Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + + 2. Response to Update Published Service API: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status **404** + * title with message "Not Found" + * detail with message "Service API not found". + * cause with message "Service API id not found". + +## Test Case 10: Update APIs Published by NON Authorised apfId +* **Test ID**: ***capif_api_publish_service-10*** +* **Description**: + + This test case will check that an API Publisher cannot Update API published when apfId is not authorised +* **Pre-Conditions**: + + * CAPIF subscriber is NOT pre-authorised (has invalid apfId from CAPIF Authority) + +* **Information of Test**: + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Get apiId + * Use APF Certificate + + 3. Update published API at CCF: + * Send PUT to resource URL *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + * body [service api description] with apiName overridden to ***service_1_modified*** + * Use invoker certificate + + 4. Retrieve detail of service API: + * Send Get to resource URL *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + * check apiName is service_1 + * Use APF Certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Publish Service API at CCF + 3. Retrieve {apiId} from body and Location header with new resource created from response. + 4. Register and onboard Invoker at CCF + 5. Store signed Invoker Certificate + 6. Update published API at CCF as Invoker + 7. Retrieve detail of Service API as publisher + +* **Expected Result**: + 1. Response to Update published API acting as Invoker must accomplish: + 1. **401 Unauthorized** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status **401** + * title with message "Unauthorized" + * detail with message "User not authorized". + * cause with message "Certificate not authorized". + + 2. Response to Retrieve Detail of Service API: + 1. **200 OK** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiName service_1. + + +## Test Case 11: Delete API Published by Authorised apfId with valid serviceApiId +* **Test ID**: ***capif_api_publish_service-11*** +* **Description**: + + This test case will check that an API Publisher can Delete published API with a valid serviceApiId +* **Pre-Conditions**: + + * CAPIF subscriber is pre-authorised (has valid apfId from CAPIF Authority). + * A service API is published. + +* **Information of Test**: + 1. Perform [Provider Registration] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Get apiId + * Use APF Certificate + + 3. Remove published Service API at CCF: + * Send DELETE to resource URL *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + * Use APF Certificate + 4. 
Retrieve detail of service API: + * Send Get to resource URL *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + * Use APF Certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Publish Service API + 3. Retrieve {apiId} from body and Location header with new resource created from response + 4. Remove published API at CCF + 5. Try to retrieve the deleted service API from CCF + +* **Expected Result**: + 1. Response to Publish request must accomplish: + 1. **201 Created** + 2. Response Body must follow **ServiceAPIDescription** data structure with: + * apiId + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/published-apis/v1/{apfId}/service-apis/{serviceApiId}* + + 2. Published Service API is stored in CAPIF Database + + 3. Response to Remove published Service API at CCF: + 1. **204 No Content** + + 4. Response to Retrieve for DELETED published API must accomplish: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found" + * detail with message "Service API not found". + * cause with message "No Service with specific credentials exists". + + +## Test Case 12: Delete APIs Published by Authorised apfId with invalid serviceApiId +* **Test ID**: ***capif_api_publish_service-12*** +* **Description**: + + This test case will check that an API Publisher cannot Delete an API with an invalid serviceApiId +* **Pre-Conditions**: + + * CAPIF subscriber is pre-authorised (has valid apfId from CAPIF Authority). + +* **Information of Test**: + 1. Perform [Provider Registration] + + 2. Remove published Service API at CCF with invalid serviceId: + * Send DELETE to resource URL *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{SERVICE_API_ID_NOT_VALID}* + * Use APF Certificate + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Remove published API at CCF with invalid serviceId + +* **Expected Result**: + 1. Response to Remove published Service API at CCF: + 1. **404 Not Found** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status 404 + * title with message "Not Found" + * detail with message "Service API not found". + * cause with message "Service API id not found". + + +## Test Case 13: Delete APIs Published by NON Authorised apfId +* **Test ID**: ***capif_api_publish_service-13*** +* **Description**: + + This test case will check that an API Publisher cannot Delete API published when apfId is not authorised +* **Pre-Conditions**: + + * CAPIF subscriber is pre-authorised (has valid apfId from CAPIF Authority). + +* **Information of Test**: + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis* + * body [service api description] with apiName service_1 + * Get apiId + * Use APF Certificate + + 3. Remove published Service API at CCF with invalid serviceId as Invoker: + * Send DELETE to resource URL *https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis/{SERVICE_API_ID_NOT_VALID}* + * Use invoker certificate. + +* **Execution Steps**: + 1. Register Provider at CCF and store certificates. + 2. Register Invoker and onboard Invoker at CCF + 3. Remove published API at CCF with invalid serviceId as Invoker + +* **Expected Result**: + 1. 
Response to Remove published Service API at CCF: + 1. **401 Unauthorized** + 2. Error Response Body must accomplish with **ProblemDetails** data structure with: + * status **401** + * title with message "Unauthorized" + * detail with message "User not authorized". + * cause with message "Certificate not authorized". + + + [service api description]: ./service_api_description_post_example.json "Service API Description Request" + [publisher register body]: ./publisher_register_body.json "Publish register Body" + [invoker onboarding body]: ../api_invoker_management/invoker_details_post_example.json "API Invoker Request" + [invoker register body]: ../api_invoker_management/invoker_register_body.json "Invoker Register Body" + [provider request body]: ../api_provider_management/provider_details_post_example.json "API Provider Enrolment Request" + [provider request patch body]: ../api_provider_management/provider_details_enrolment_details_patch_example.json "API Provider Enrolment Patch Request" + [provider getauth body]: ../api_provider_management/provider_getauth_example.json "Get Auth Example" + + [invoker onboarding]: ../common_operations/README.md#register-an-invoker "Invoker Onboarding" + [provider registration]: ../common_operations/README.md#register-a-provider "Provider Registration" + + + [Return To All Test Plans]: ../README.md \ No newline at end of file diff --git a/docs/test_plan/api_publish_service/publisher_register_body.json b/docs/test_plan/api_publish_service/publisher_register_body.json new file mode 100644 index 0000000000000000000000000000000000000000..fc26db2141eab904b1f2f8d96e963f2ec0efcbe1 --- /dev/null +++ b/docs/test_plan/api_publish_service/publisher_register_body.json @@ -0,0 +1,7 @@ +{ + "password": "password", + "username": "ROBOT_TESTING_PUBLISHER", + "role": "provider", + "description": "Testing", + "cn": "ROBOT_TESTING_PUBLISHER" +} diff --git a/docs/test_plan/api_publish_service/service_api_description_post_example.json b/docs/test_plan/api_publish_service/service_api_description_post_example.json new file mode 100644 index 0000000000000000000000000000000000000000..b725b428629509bf39a79c030f1bf93f4b6f18f6 --- /dev/null +++ b/docs/test_plan/api_publish_service/service_api_description_post_example.json @@ -0,0 +1,113 @@ +{ + "apiName": "service_1", + "aefProfiles": [ + { + "aefId": "string", + "versions": [ + { + "apiVersion": "v1", + "expiry": "2021-11-30T10:32:02.004Z", + "resources": [ + { + "resourceName": "string", + "commType": "REQUEST_RESPONSE", + "uri": "string", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ], + "custOperations": [ + { + "commType": "REQUEST_RESPONSE", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ] + } + ], + "protocol": "HTTP_1_1", + "dataFormat": "JSON", + "securityMethods": ["PSK"], + "interfaceDescriptions": [ + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + }, + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + } + ] + }, + { + "aefId": "string", + "versions": [ + { + "apiVersion": "v1", + "expiry": "2021-11-30T10:32:02.004Z", + "resources": [ + { + "resourceName": "string", + "commType": "REQUEST_RESPONSE", + "uri": "string", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ], + "custOperations": [ + { + "commType": "REQUEST_RESPONSE", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ] + } + ], + "protocol": 
"HTTP_1_1", + "dataFormat": "JSON", + "securityMethods": ["PSK"], + "interfaceDescriptions": [ + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + }, + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + } + ] + } + ], + "description": "string", + "supportedFeatures": "fffff", + "shareableInfo": { + "isShareable": true, + "capifProvDoms": [ + "string" + ] + }, + "serviceAPICategory": "string", + "apiSuppFeats": "fffff", + "pubApiPath": { + "ccfIds": [ + "string" + ] + }, + "ccfId": "string" +} \ No newline at end of file diff --git a/docs/test_plan/api_security_service/README.md b/docs/test_plan/api_security_service/README.md new file mode 100644 index 0000000000000000000000000000000000000000..c0d3e71b06da6e40e9fdaeff50064ad230b76c57 --- /dev/null +++ b/docs/test_plan/api_security_service/README.md @@ -0,0 +1,1244 @@ +[**[Return To All Test Plans]**] + +- [Test Plan for CAPIF Api Security Service](#test-plan-for-capif-api-security-service) +- [Tests](#tests) + - [Test Case 1: Create a security context for an API invoker](#test-case-1-create-a-security-context-for-an-api-invoker) + - [Test Case 2: Create a security context for an API invoker with Provider role](#test-case-2-create-a-security-context-for-an-api-invoker-with-provider-role) + - [Test Case 3: Create a security context for an API invoker with Provider entity role and invalid apiInvokerId](#test-case-3-create-a-security-context-for-an-api-invoker-with-provider-entity-role-and-invalid-apiinvokerid) + - [Test Case 4: Create a security context for an API invoker with Invoker entity role and invalid apiInvokerId](#test-case-4-create-a-security-context-for-an-api-invoker-with-invoker-entity-role-and-invalid-apiinvokerid) + - [Test Case 5: Retrieve the Security Context of an API Invoker](#test-case-5-retrieve-the-security-context-of-an-api-invoker) + - [Test Case 6: Retrieve the Security Context of an API Invoker with invalid apiInvokerID](#test-case-6-retrieve-the-security-context-of-an-api-invoker-with-invalid-apiinvokerid) + - [Test Case 7: Retrieve the Security Context of an API Invoker with invalid apfId](#test-case-7-retrieve-the-security-context-of-an-api-invoker-with-invalid-apfid) + - [Test Case 8: Delete the Security Context of an API Invoker](#test-case-8-delete-the-security-context-of-an-api-invoker) + - [Test Case 9: Delete the Security Context of an API Invoker with Invoker entity role](#test-case-9-delete-the-security-context-of-an-api-invoker-with-invoker-entity-role) + - [Test Case 10: Delete the Security Context of an API Invoker with Invoker entity role and invalid apiInvokerID](#test-case-10-delete-the-security-context-of-an-api-invoker-with-invoker-entity-role-and-invalid-apiinvokerid) + - [Test Case 11: Delete the Security Context of an API Invoker with invalid apiInvokerID](#test-case-11-delete-the-security-context-of-an-api-invoker-with-invalid-apiinvokerid) + - [Test Case 12: Update the Security Context of an API Invoker](#test-case-12-update-the-security-context-of-an-api-invoker) + - [Test Case 13: Update the Security Context of an API Invoker with Provider entity role](#test-case-13-update-the-security-context-of-an-api-invoker-with-provider-entity-role) + - [Test Case 14: Update the Security Context of an API Invoker with AEF entity role and invalid apiInvokerId](#test-case-14-update-the-security-context-of-an-api-invoker-with-aef-entity-role-and-invalid-apiinvokerid) + - [Test Case 15: Update the Security Context of an API Invoker with invalid 
apiInvokerID](#test-case-15-update-the-security-context-of-an-api-invoker-with-invalid-apiinvokerid) + - [Test Case 16: Revoke the authorization of the API invoker for APIs.](#test-case-16-revoke-the-authorization-of-the-api-invoker-for-apis) + - [Test Case 17: Revoke the authorization of the API invoker for APIs without valid apfID.](#test-case-17-revoke-the-authorization-of-the-api-invoker-for-apis-without-valid-apfid) + - [Test Case 18: Revoke the authorization of the API invoker for APIs with invalid apiInvokerId.](#test-case-18-revoke-the-authorization-of-the-api-invoker-for-apis-with-invalid-apiinvokerid) + - [Test Case 19: Retrieve access token](#test-case-19-retrieve-access-token) + - [Test Case 20: Retrieve access token by Provider](#test-case-20-retrieve-access-token-by-provider) + - [Test Case 21: Retrieve access token by Provider with invalid apiInvokerId](#test-case-21-retrieve-access-token-by-provider-with-invalid-apiinvokerid) + - [Test Case 22: Retrieve access token with invalid apiInvokerId](#test-case-22-retrieve-access-token-with-invalid-apiinvokerid) + - [Test Case 23: Retrieve access token with invalid client\_id](#test-case-23-retrieve-access-token-with-invalid-client_id) + - [Test Case 24: Retrieve access token with unsupported grant\_type](#test-case-24-retrieve-access-token-with-unsupported-grant_type) + - [Test Case 25: Retrieve access token with invalid scope](#test-case-25-retrieve-access-token-with-invalid-scope) + - [Test Case 26: Retrieve access token with invalid aefid at scope](#test-case-26-retrieve-access-token-with-invalid-aefid-at-scope) + - [Test Case 27: Retrieve access token with invalid apiName at scope](#test-case-27-retrieve-access-token-with-invalid-apiname-at-scope) + + + +# Test Plan for CAPIF Api Security Service +At this documentation you will have all information and related files and examples of test plan for this API. + +# Tests + +## Test Case 1: Create a security context for an API invoker +* **Test ID**: ***capif_security_api-1*** +* **Description**: + + This test case will check that an API Invoker can create a Security context +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID from CAPIF Authority) + +* **Information of Test**: + 1. Perform [Invoker Onboarding] + 2. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Use Invoker Certificate + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Store signed Certificate + 3. Create Security Context + +* **Expected Result**: + + 1. Create security context: + 1. **201 Created** response. + 2. body returned must accomplish **ServiceSecurity** data structure. + 3. Location Header must contain the new resource URL *{apiRoot}/capif-security/v1/trustedInvokers/{apiInvokerId}* + + +## Test Case 2: Create a security context for an API invoker with Provider role +* **Test ID**: ***capif_security_api-2*** +* **Description**: + + This test case will check that an Provider cannot create a Security context with valid apiInvokerId. +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID), but user that create Security Context with Provider role + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Create Security Context for this Invoker but using Provider certificate. 
+ * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using AEF certificate + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Register Provider at CCF + 3. Create Security Context using Provider certificate + +* **Expected Result**: + + 1. Create security context using Provider certificate: + 1. **401 Unauthorized** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **401** + * title with message "Unauthorized" + * detail with message "Role not authorized for this API route". + * cause with message "User role must be invoker". + + 2. No context stored at DB + +## Test Case 3: Create a security context for an API invoker with Provider entity role and invalid apiInvokerId +* **Test ID**: ***capif_security_api-3*** +* **Description**: + + This test case will check that a Provider cannot create a Security context with an invalid apiInvokerID. +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID), but the user that creates the Security Context has the Provider role + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Create Security Context for this not valid apiInvokerId and using Provider certificate. + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{API_INVOKER_NOT_VALID}* + * body [service security body] + * Using AEF certificate + +* **Execution Steps**: + + 1. Register Provider at CCF + 2. Create Security Context using Provider certificate + +* **Expected Result**: + + 1. Create security context using Provider certificate: + 1. **401 Unauthorized** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **401** + * title with message "Unauthorized" + * detail with message "Role not authorized for this API route". + * cause with message "User role must be invoker". + 2. No context stored at DB + +## Test Case 4: Create a security context for an API invoker with Invoker entity role and invalid apiInvokerId +* **Test ID**: ***capif_security_api-4*** +* **Description**: + + This test case will check that an Invoker cannot create a Security context with an invalid apiInvokerId. +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID), but the user that creates the Security Context uses an invalid apiInvokerId + +* **Information of Test**: + 1. Perform [Invoker Onboarding] + + 2. Create Security Context for this Invoker: + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{API_INVOKER_NOT_VALID}* + * body [service security body] + * Use Invoker Certificate + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Create Security Context using Invoker certificate + +* **Expected Result**: + + 1. Create security context using Invoker certificate: + 1. **404 Not Found** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **404** + * title with message "Not Found" + * detail with message "Invoker not found". + * cause with message "API Invoker not exists or invalid ID". + + 2. No context stored at DB + + +## Test Case 5: Retrieve the Security Context of an API Invoker +* **Test ID**: ***capif_security_api-5*** +* **Description**: + + This test case will check that a Provider can retrieve the Security context of an API Invoker +* **Pre-Conditions**: + + * Provider is pre-authorised (has valid apfId from CAPIF Authority) and API Invoker has created a valid Security Context + +* **Information of Test**: + + 1. 
Perform [Provider Registration] and [Invoker Onboarding] + + 2. Create Security Context for this Invoker. + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker certificate + + 3. Retrieve Security Context of Invoker by Provider: + * Send GET *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * Using AEF Certificate + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Register Provider at CCF + 3. Create Security Context using Invoker certificate + 4. Retrieve Security Context by Provider + +* **Expected Result**: + 1. Retrieve security context: + 1. **200 OK** response. + 2. body returned must accomplish **ServiceSecurity** data structure. + + +## Test Case 6: Retrieve the Security Context of an API Invoker with invalid apiInvokerID +* **Test ID**: ***capif_security_api-6*** +* **Description**: + + This test case will check that a Provider cannot retrieve the Security context of an API Invoker with an invalid apiInvokerID +* **Pre-Conditions**: + + * Provider is pre-authorised (has valid apfId from CAPIF Authority) and API Invoker has created a valid Security Context + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Retrieve Security Context of invalid Invoker by Provider: + * Send GET *https://{CAPIF_HOSTNAME}/trustedInvokers/{API_INVOKER_NOT_VALID}* + * Using AEF Certificate. + +* **Execution Steps**: + + 1. Register Provider at CCF + 2. Retrieve Security Context by Provider of invalid Invoker + +* **Expected Result**: + 1. Retrieve security context: + 1. **404 Not Found** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **404** + * title with message "Not Found" + * detail with message "Invoker not found". + * cause with message "API Invoker not exists or invalid ID". + + +## Test Case 7: Retrieve the Security Context of an API Invoker with invalid apfId +* **Test ID**: ***capif_security_api-7*** +* **Description**: + + This test case will check that a Provider cannot retrieve the Security context of an API Invoker without a valid apfId +* **Pre-Conditions**: + + * API Exposure Function is not pre-authorised (has invalid apfId) + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate + + 3. Retrieve Security Context as Invoker role: + * Send GET *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * Using Invoker Certificate + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Store signed Certificate + 3. Create Security Context + 4. Retrieve Security Context as Invoker. + +* **Expected Result**: + + 1. Retrieve security context: + 1. **401 Unauthorized** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **401** + * title with message "Unauthorized" + * detail with message "Role not authorized for this API route". + * cause with message "User role must be aef". 
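+
+The create/retrieve pattern exercised by the test cases above boils down to two mutually authenticated HTTP calls on the *trustedInvokers* resource. A minimal sketch with the Python `requests` library follows; the certificate and key file names, the example apiInvokerId and the simplified service security body are assumptions for illustration only.
+
+```python
+import requests
+
+CAPIF_HOSTNAME = "capif.example.org"   # placeholder, stands in for {CAPIF_HOSTNAME}
+api_invoker_id = "INV12345"            # placeholder apiInvokerId
+
+# Simplified stand-in for [service security body]; the real body carries one
+# securityInfo entry per AEF returned by the Discover service.
+service_security_body = {
+    "notificationDestination": "http://robot.testing",
+    "securityInfo": [],
+}
+
+# Create the Security Context as the Invoker (mutual TLS with the Invoker certificate).
+put_resp = requests.put(
+    f"https://{CAPIF_HOSTNAME}/trustedInvokers/{api_invoker_id}",
+    json=service_security_body,
+    cert=("invoker.crt", "invoker.key"),  # assumed file names
+    verify="ca.crt",                      # CAPIF CA bundle, placeholder path
+)
+print(put_resp.status_code, put_resp.headers.get("Location"))  # expect 201 Created
+
+# Retrieve the Security Context as the provider, using the AEF certificate.
+get_resp = requests.get(
+    f"https://{CAPIF_HOSTNAME}/trustedInvokers/{api_invoker_id}",
+    cert=("aef.crt", "aef.key"),
+    verify="ca.crt",
+)
+print(get_resp.status_code)  # 200 OK with a ServiceSecurity body when authorised
+```
+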
+ + +## Test Case 8: Delete the Security Context of an API Invoker +* **Test ID**: ***capif_security_api-8*** +* **Description**: + + This test case will check that a Provider can delete a Security context +* **Pre-Conditions**: + + * Provider is pre-authorised (has valid apfId from CAPIF Authority) and API Invoker has created a valid Security Context + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Create Security Context for this Invoker but using Provider certificate. + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using AEF certificate + + 3. Delete Security Context of Invoker by Provider: + * Send DELETE *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * Use AEF certificate + + 4. Retrieve Security Context of Invoker by Provider: + * Send GET *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * Using AEF Certificate + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Register Provider at CCF + 3. Create Security Context using Provider certificate + 4. Delete Security Context by Provider + +* **Expected Result**: + + 1. Delete security context: + 1. **204 No Content** response. + + 2. Retrieve security context: + 1. **404 Not Found** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **404** + * title with message "Not Found" + * detail with message "Security context not found". + * cause with message "API Invoker not exists or invalid ID". + + +## Test Case 9: Delete the Security Context of an API Invoker with Invoker entity role +* **Test ID**: ***capif_security_api-9*** +* **Description**: + + This test case will check that an Invoker cannot delete a Security context +* **Pre-Conditions**: + + * Provider is pre-authorised (has valid apfId from CAPIF Authority) and API Invoker has created a valid Security Context + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Create Security Context for this Invoker: + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker certificate + + 3. Delete Security Context of Invoker: + * Send DELETE *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * Use Invoker certificate + +* **Execution Steps**: + + 1. Register Provider at CCF + 2. Create Security Context using Invoker certificate + 3. Delete Security Context by Invoker + +* **Expected Result**: + + 1. Delete security context: + 1. **401 Unauthorized** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **401** + * title with message "Unauthorized" + * detail with message "Role not authorized for this API route". + * cause with message "User role must be aef". + + +## Test Case 10: Delete the Security Context of an API Invoker with Invoker entity role and invalid apiInvokerID +* **Test ID**: ***capif_security_api-10*** +* **Description**: + + This test case will check that an Invoker cannot delete a Security context with an invalid apiInvokerID +* **Pre-Conditions**: + + * Invoker is pre-authorised. + +* **Information of Test**: + + 1. Perform [Invoker Onboarding] + + 2. Delete Security Context of Invoker: + * Send DELETE *https://{CAPIF_HOSTNAME}/trustedInvokers/{API_INVOKER_NOT_VALID}* + * Use Invoker certificate + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Delete Security Context by Invoker + +* **Expected Result**: + + 1. 
Delete security context: + 1. **401 Unauthorized** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **401** + * title with message "Unauthorized" + * detail with message "Role not authorized for this API route". + * cause with message "User role must be aef". + + +## Test Case 11: Delete the Security Context of an API Invoker with invalid apiInvokerID +* **Test ID**: ***capif_security_api-11*** +* **Description**: + + This test case will check that a Provider cannot delete a Security context with an invalid apiInvokerId +* **Pre-Conditions**: + + * Provider is pre-authorised (has valid apfId from CAPIF Authority). + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Delete Security Context of Invoker by Provider: + * Send DELETE *https://{CAPIF_HOSTNAME}/trustedInvokers/{API_INVOKER_NOT_VALID}* + * Use AEF certificate + +* **Execution Steps**: + + 1. Register Provider at CCF + 2. Delete Security Context by provider + +* **Expected Result**: + + 1. Delete security context: + 1. **404 Not Found** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **404** + * title with message "Not Found" + * detail with message "Invoker not found". + * cause with message "API Invoker not exists or invalid ID". + + +## Test Case 12: Update the Security Context of an API Invoker +* **Test ID**: ***capif_security_api-12*** +* **Description**: + + This test case will check that an API Invoker can update a Security context +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID from CAPIF Authority) and Provider is also authorized + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Create Security Context for this Invoker: + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. + + 3. Update Security Context of Invoker: + * Send POST *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}/update* + * body [service security body] but with notification destination modified to http://robot.testing2 + * Using Invoker Certificate. + + 4. Retrieve Security Context of Invoker by Provider: + * Send GET *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * Using AEF Certificate. + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Register Provider at CCF + 3. Create Security Context By Invoker + 4. Update Security Context By Invoker + 5. Retrieve Security Context By Provider + +* **Expected Result**: + + 1. Update security context: + 1. **200 OK** response. + 2. body returned must accomplish **ServiceSecurity** data structure. + + 2. Retrieve security context: + 1. **200 OK** response. + 2. body returned must accomplish **ServiceSecurity** data structure. + 1. Check that the returned object matches the modified one. + + +## Test Case 13: Update the Security Context of an API Invoker with Provider entity role +* **Test ID**: ***capif_security_api-13*** +* **Description**: + + This test case will check that a Provider cannot update a Security context + +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID from CAPIF Authority) and Provider is also authorized. + * Invoker has created the Security Context previously. + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. 
Create Security Context for this Invoker: + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. + + 3. Update Security Context of Invoker by Provider: + * Send POST *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}/update* + * body [service security body] but with notification destination modified to http://robot.testing2 + * Using AEF Certificate + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Register Provider at CCF + 3. Create Security Context + 4. Update Security Context as Provider + +* **Expected Result**: + + 1. Update security context: + 1. **401 Unauthorized** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **401** + * title with message "Unauthorized" + * detail with message "Role not authorized for this API route". + * cause with message "User role must be invoker". + + +## Test Case 14: Update the Security Context of an API Invoker with AEF entity role and invalid apiInvokerId +* **Test ID**: ***capif_security_api-14*** +* **Description**: + + This test case will check that a Provider cannot update a Security context with an invalid apiInvokerId + +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID from CAPIF Authority) and Provider is also authorized. + * Invoker has created the Security Context previously. + +* **Information of Test**: + + 1. Perform [Provider Registration] + + 2. Update Security Context of Invoker by Provider: + * Send POST *https://{CAPIF_HOSTNAME}/trustedInvokers/{API_INVOKER_NOT_VALID}/update* + * body [service security body] + * Using AEF Certificate + +* **Execution Steps**: + + 1. Register Provider at CCF + 2. Update Security Context as Provider + +* **Expected Result**: + + 1. Update security context: + 1. **401 Unauthorized** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **401** + * title with message "Unauthorized" + * detail with message "Role not authorized for this API route". + * cause with message "User role must be invoker". + + +## Test Case 15: Update the Security Context of an API Invoker with invalid apiInvokerID +* **Test ID**: ***capif_security_api-15*** +* **Description**: + + This test case will check that an API Invoker cannot update a Security context with an invalid apiInvokerId +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID from CAPIF Authority) + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Update Security Context of Invoker: + * Send POST *https://{CAPIF_HOSTNAME}/trustedInvokers/{API_INVOKER_NOT_VALID}/update* + * body [service security body] + * Using Invoker Certificate. + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Update Security Context + +* **Expected Result**: + +1. Update security context: + 1. **404 Not Found** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **404** + * title with message "Not Found" + * detail with message "Invoker not found". + * cause with message "API Invoker not exists or invalid ID". + + +## Test Case 16: Revoke the authorization of the API invoker for APIs. 
+* **Test ID**: ***capif_security_api-16*** +* **Description**: + + This test case will check that a Provider can revoke the authorization for APIs + +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID from CAPIF Authority) and Provider is also authorized + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Create Security Context By Invoker: + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate + + 3. Revoke Authorization by Provider: + * Send POST *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}/delete* + * body [security notification body] + * Using AEF Certificate. + + 4. Retrieve Security Context by Provider: + * Send GET *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * Using AEF Certificate. + + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Register Provider at CCF + 3. Create Security Context by Invoker + 4. Revoke Security Context by Provider + 5. Retrieve Security Context by Provider + +* **Expected Result**: + + 1. Revoke Authorization: + 1. **204 No Content** response. + + 2. Retrieve security context: + 1. **404 Not Found** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **404** + * title with message "Not Found" + * detail with message "Security context not found". + * cause with message "API Invoker has no security context". + + +## Test Case 17: Revoke the authorization of the API invoker for APIs without valid apfID. +* **Test ID**: ***capif_security_api-17*** +* **Description**: + + This test case will check that an Invoker cannot revoke the authorization for APIs + +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID from CAPIF Authority) and Provider is also authorized + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Create Security Context for this Invoker: + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. + + 3. Revoke Authorization by invoker: + * Send POST *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}/delete* + * body [security notification body] + * Using Invoker Certificate + + 4. Retrieve Security Context of Invoker by Provider: + * Send GET *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * Using Provider Certificate + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Register Provider at CCF + 3. Create Security Context + 4. Revoke Security Context by invoker + 5. Retrieve Security Context + +* **Expected Result**: + + 1. Revoke Security Context by invoker: + 1. **401 Unauthorized** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **401** + * title with message "Unauthorized" + * detail with message "Role not authorized for this API route". + * cause with message "User role must be provider". + + 2. Retrieve security context: + 1. **200 OK** response. + 2. body returned must accomplish **ServiceSecurity** data structure. + 1. Check that the returned object matches the created one. + + +## Test Case 18: Revoke the authorization of the API invoker for APIs with invalid apiInvokerId. 
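+A hedged curl sketch of the negative call checked here (illustrative only: aef.crt/aef.key/ca.crt are assumed file names, security_notification.json is assumed to hold the [security notification body], and {API_INVOKER_NOT_VALID} is any identifier that was never onboarded):
+
+```shell
+# Revoke with a non-existing apiInvokerId; expected: 404 Not Found with a ProblemDetails body,
+# while the Security Context created for the real Invoker remains retrievable.
+curl --cert aef.crt --key aef.key --cacert ca.crt \
+  --request POST "https://{CAPIF_HOSTNAME}/trustedInvokers/{API_INVOKER_NOT_VALID}/delete" \
+  --header 'Content-Type: application/json' \
+  --data @security_notification.json
+```
+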
+* **Test ID**: ***capif_security_api-18*** +* **Description**: + + This test case will check that an API Exposure Function cannot revoke the authorization for APIs using an invalid apiInvokerId + +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID from CAPIF Authority) and Provider is also authorized + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Create Security Context for this Invoker: + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. + + 3. Revoke Authorization by Provider: + * Send POST *https://{CAPIF_HOSTNAME}/trustedInvokers/{API_INVOKER_NOT_VALID}/delete* + * body [security notification body] + * Using AEF Certificate. + + 4. Retrieve Security Context of Invoker by Provider: + * Send GET *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}?authenticationInfo=true&authorizationInfo=true* + * This request uses query parameters to also retrieve authenticationInfo and authorizationInfo + * Using AEF Certificate. + +* **Execution Steps**: + + 1. Register and onboard Invoker at CCF + 2. Register Provider at CCF + 3. Create Security Context + 4. Revoke Security Context by Provider + 5. Retrieve Security Context + +* **Expected Result**: + + 1. Revoke Security Context by Provider: + 1. **404 Not Found** response. + 2. body returned must accomplish **ProblemDetails** data structure, with: + * status **404** + * title with message "Not Found" + * detail with message "Invoker not found". + * cause with message "API Invoker not exists or invalid ID". + + 2. Retrieve security context: + 1. **200 OK** response. + 2. body returned must accomplish **ServiceSecurity** data structure. + 1. Check that the returned object matches the created one. + + +## Test Case 19: Retrieve access token +* **Test ID**: ***capif_security_api-19*** +* **Description**: + + This test case will check that an API Invoker can retrieve an OAuth 2.0 security access token. +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerId) + * Service API of Provider is published + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + * body [service api description] with apiName service_1 + * Use APF Certificate + + 3. Request Discover Published APIs not filtered: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Using invoker certificate + + 4. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. + * Create Security Information Body with one **securityInfo** for each aef present at each serviceAPIDescription present at Discover. + + 5. Request Access Token by invoker: + * Send POST *https://{CAPIF_HOSTNAME}/securities/{securityId}/token*: + * body [access token req body] and example [example] + * ***securityId*** is apiInvokerId. + * ***grant_type=client_credentials***. + * Create Scope properly for request: ***3gpp#{aef_id}:{api_name}*** + * Using Invoker Certificate. + +* **Execution Steps**: + + 1. Register Provider at CCF, store certificates and Publish Service API service_1 at CCF + 2. Register and onboard Invoker at CCF + 3. 
Discover Service APIs by Invoker. + 4. Create Security Context According to Service APIs discovered. + 5. Request Access Token + +* **Expected Result**: + + 1. Response to Request of Access Token: + 1. **200 OK** + 2. body must follow **AccessTokenRsp** with: + 1. access_token present + 2. token_type=Bearer + +## Test Case 20: Retrieve access token by Provider +* **Test ID**: ***capif_security_api-20*** +* **Description**: + + This test case will check that an API Exposure Function cannot retrieve a security access token + +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerID from CAPIF Authority) and Provider is also authorized + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + * body [service api description] with apiName service_1 + * Use APF Certificate + + 3. Request Discover Published APIs not filtered: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Using invoker certificate + + 4. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. + * Create Security Information Body with one **securityInfo** for each aef present at each serviceAPIDescription present at Discover. + + 5. Request Access Token by provider: + * Send POST *https://{CAPIF_HOSTNAME}/securities/{securityId}/token*: + * body [access token req body] + * ***securityId*** is apiInvokerId + * ***grant_type=client_credentials*** + * Using AEF certificate + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API service_1 at CCF + 2. Register and onboard Invoker at CCF + 3. Discover Service APIs by Invoker. + 4. Create Security Context According to Service APIs discovered. + 5. Request Access Token by Provider + +* **Expected Result**: + + 1. Response to Request of Access Token: + 1. **401 Unauthorized** response. + 2. body returned must accomplish **AccessTokenErr** data structure, with: + * error unauthorized_client + * error_description=Role not authorized for this API route + +## Test Case 21: Retrieve access token by Provider with invalid apiInvokerId +* **Test ID**: ***capif_security_api-21*** +* **Description**: + + This test case will check that an API Exposure Function cannot retrieve a security access token without a valid apiInvokerId + +* **Pre-Conditions**: + + * API Invoker is pre-authorised and Provider is also authorized + + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + * body [service api description] with apiName service_1 + * Use APF Certificate + + 3. Request Discover Published APIs not filtered: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Using invoker certificate + + 4. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. 
+ * Create Security Information Body with one **securityInfo** for each aef present at each serviceAPIDescription present at Discover. + + 5. Request Access Token by provider: + * Sent POST *https://{CAPIF_HOSTNAME}/securities/{API_INVOKER_NOT_VALID}/token*. + * body [access token req body] + * ***securityId*** is apiInvokerId + * ***grant_type=client_credentials*** + * Using AEF certificate + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API service_1 at CCF + 2. Register and onboard Invoker at CCF + 3. Discover Service APIs by Invoker. + 4. Create Security Context According to Service APIs discovered. + 5. Request Access Token by Provider + +* **Expected Result**: + + 1. Response to Request of Access Token: + 1. **401 Unauthorized** response. + 2. body returned must accomplish **AccessTokenErr** data structure, with: + * error unauthorized_client + * error_description=Role not authorized for this API route + + +## Test Case 22: Retrieve access token with invalid apiInvokerId +* **Test ID**: ***capif_security_api-22*** +* **Description**: + + This test case will check that an API Invoker can't retrieve a security access token without valid apiInvokerId + +* **Pre-Conditions**: + + * API Invoker is pre-authorised (has valid apiInvokerId) + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + * body [service api description] with apiName service_1 + * Use APF Certificate + 3. Request Discover Published APIs not filtered: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Using invoker certificate + 4. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. + * Create Security Information Body with one **securityInfo** for each aef present at each serviceAPIDescription present at Discover. + 5. Request Access Token by invoker: + * Sent POST *https://{CAPIF_HOSTNAME}/securities/{API_INVOKER_NOT_VALID}/token*. + * body [access token req body] + * ***securityId*** is apiInvokerId + * ***grant_type=client_credentials*** + * Using Invoker certificate + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API service_1 at CCF + 2. Register and onboard Invoker at CCF + 3. Discover Service APIs by Invoker. + 4. Create Security Context According to Service APIs discovered. + 5. Request Access Token by Invoker + +* **Expected Result**: + + 1. Response to Request of Access Token: + 1. **404 Not Found** response. + 2. 
body returned must accomplish **ProblemDetails29571** data structure, with: + * status 404 + * title Not Found + * detail Security context not found + * cause API Invoker has no security context + + +**NOTE: ProblemDetails29571 is the definition present for this request at swagger of ProblemDetails, and this is different from definition of ProblemDetails across other CAPIF Services** + +## Test Case 23: Retrieve access token with invalid client_id +* **Test ID**: ***capif_security_api-23*** +* **Description**: + + This test case will check that an API Exposure Function cannot retrieve a security access token without valid client_id at body + +* **Pre-Conditions**: + + * API Invoker is pre-authorised and Provider is also authorized + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + * body [service api description] with apiName service_1 + * Use APF Certificate + + 3. Request Discover Published APIs not filtered: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Using invoker certificate + + 4. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. + * Create Security Information Body with one **securityInfo** for each aef present at each serviceAPIDescription present at Discover. + + 5. Request Access Token by invoker: + * Sent POST *https://{CAPIF_HOSTNAME}/securities/{securityId}/token*. + * body [access token req body] + * ***securityId*** is apiInvokerId + * ***grant_type=client_credentials*** + * **client_id is not-valid** + * Using Invoker certificate + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API service_1 at CCF + 2. Register and onboard Invoker at CCF + 3. Discover Service APIs by Invoker. + 4. Create Security Context According to Service APIs discovered. + 5. Request Access Token by Invoker + +* **Expected Result**: + + 1. Response to Request of Access Token: + 1. **400 Bad Request** response. + 2. body returned must accomplish **AccessTokenErr** data structure, with: + * error invalid_client + * error_description=Client Id not found + + +## Test Case 24: Retrieve access token with unsupported grant_type +* **Test ID**: ***capif_security_api-24*** +* **Description**: + + This test case will check that an API Exposure Function cannot retrieve a security access token with unsupported grant_type + +* **Pre-Conditions**: + + * API Invoker is pre-authorised and Provider is also authorized + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + * body [service api description] with apiName service_1 + * Use APF Certificate + + 3. Request Discover Published APIs not filtered: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Using invoker certificate + + 4. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. 
+ * Create Security Information Body with one **securityInfo** for each aef present at each serviceAPIDescription present at Discover. + + 5. Request Access Token by invoker: + * Sent POST *https://{CAPIF_HOSTNAME}/securities/{securityId}/token*. + * body [access token req body] + * ***securityId*** is apiInvokerId + * ***grant_type=not_valid*** + * Using Invoker certificate + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API service_1 at CCF + 2. Register and onboard Invoker at CCF + 3. Discover Service APIs by Invoker. + 4. Create Security Context According to Service APIs discovered. + 5. Request Access Token by Invoker + +* **Expected Result**: + + 1. Response to Request of Access Token: + 1. **400 Bad Request** response. + 2. body returned must accomplish **AccessTokenErr** data structure, with: + * error unsupported_grant_type + * error_description=Invalid value for `grant_type` \\(${grant_type}\\), must be one of \\['client_credentials'\\] - 'grant_type' + +## Test Case 25: Retrieve access token with invalid scope +* **Test ID**: ***capif_security_api-25*** +* **Description**: + + This test case will check that an API Exposure Function cannot retrieve a security access token with complete invalid scope + +* **Pre-Conditions**: + + * API Invoker is pre-authorised and Provider is also authorized + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + * body [service api description] with apiName service_1 + * Use APF Certificate + + 3. Request Discover Published APIs not filtered: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Using invoker certificate + + 4. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. + * Create Security Information Body with one **securityInfo** for each aef present at each serviceAPIDescription present at Discover. + + 5. Request Access Token by invoker: + * Sent POST *https://{CAPIF_HOSTNAME}/securities/{securityId}/token*. + * body [access token req body] + * ***securityId*** is apiInvokerId + * ***grant_type=client_credentials*** + * ***scope=not-valid-scope*** + * Using Invoker certificate + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API service_1 at CCF + 2. Register and onboard Invoker at CCF + 3. Discover Service APIs by Invoker. + 4. Create Security Context According to Service APIs discovered. + 5. Request Access Token by Invoker + +* **Expected Result**: + + 1. Response to Request of Access Token: + 1. **400 Bad Request** response. + 2. body returned must accomplish **AccessTokenErr** data structure, with: + * error invalid_scope + * error_description=The first characters must be '3gpp' + + +## Test Case 26: Retrieve access token with invalid aefid at scope +* **Test ID**: ***capif_security_api-26*** +* **Description**: + + This test case will check that an API Exposure Function cannot retrieve a security access token with invalid aefId at scope + +* **Pre-Conditions**: + + * API Invoker is pre-authorised and Provider is also authorized + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. 
Publish Service API at CCF: + * Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + * body [service api description] with apiName service_1 + * Use APF Certificate + + 3. Request Discover Published APIs not filtered: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Using invoker certificate + + 4. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. + * Create Security Information Body with one **securityInfo** for each aef present at each serviceAPIDescription present at Discover. + + 5. Request Access Token by invoker: + * Sent POST *https://{CAPIF_HOSTNAME}/securities/{securityId}/token*. + * body [access token req body] + * ***securityId*** is apiInvokerId + * ***grant_type=client_credentials*** + * ***scope=3gpp#1234:service_1*** + * Using Invoker certificate + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API service_1 at CCF + 2. Register and onboard Invoker at CCF + 3. Discover Service APIs by Invoker. + 4. Create Security Context According to Service APIs discovered. + 5. Request Access Token by Invoker + +* **Expected Result**: + + 1. Response to Request of Access Token: + 1. **400 Bad Request** response. + 2. body returned must accomplish **AccessTokenErr** data structure, with: + * error invalid_scope + * error_description=One of aef_id not belongs of your security context + + +## Test Case 27: Retrieve access token with invalid apiName at scope +* **Test ID**: ***capif_security_api-27*** +* **Description**: + + This test case will check that an API Exposure Function cannot retrieve a security access token with invalid apiName at scope + +* **Pre-Conditions**: + + * API Invoker is pre-authorised and Provider is also authorized + +* **Information of Test**: + + 1. Perform [Provider Registration] and [Invoker Onboarding] + + 2. Publish Service API at CCF: + * Send Post to ccf_publish_url https://{CAPIF_HOSTNAME}/published-apis/v1/{apfId}/service-apis + * body [service api description] with apiName service_1 + * Use APF Certificate + + 3. Request Discover Published APIs not filtered: + * Send GET to ccf_discover_url *https://{CAPIF_HOSTNAME}/service-apis/v1/allServiceAPIs?api-invoker-id={apiInvokerId}* + * Param api-invoker-id is mandatory + * Using invoker certificate + + 4. Create Security Context for this Invoker + * Send PUT *https://{CAPIF_HOSTNAME}/trustedInvokers/{apiInvokerId}* + * body [service security body] + * Using Invoker Certificate. + * Create Security Information Body with one **securityInfo** for each aef present at each serviceAPIDescription present at Discover. + + 5. Request Access Token by invoker: + * Sent POST *https://{CAPIF_HOSTNAME}/securities/{securityId}/token*. + * body [access token req body] + * ***securityId*** is apiInvokerId + * ***grant_type=client_credentials*** + * ***scope=3gpp#{aef_id}:not-valid*** + * Using Invoker certificate + +* **Execution Steps**: + 1. Register Provider at CCF, store certificates and Publish Service API service_1 at CCF + 2. Register and onboard Invoker at CCF + 3. Discover Service APIs by Invoker. + 4. Create Security Context According to Service APIs discovered. + 5. Request Access Token by Invoker + +* **Expected Result**: + + 1. Response to Request of Access Token: + 1. 
**400 Bad Request** response. + 2. body returned must accomplish **AccessTokenErr** data structure, with: + * error invalid_scope + * error_description=One of the api names does not exist or is not associated with the aef id provided + + + [Return To All Test Plans]: ../README.md + + + + [service security body]: ./service_security.json "Service Security Request" + [security notification body]: ./security_notification.json "Security Notification Request" + [access token req body]: ./access_token_req.json "Access Token Request" + [example]: ./access_token_req.json "Access Token Request Example" + + [invoker onboarding]: ../common_operations/README.md#register-an-invoker "Invoker Onboarding" + [provider registration]: ../common_operations/README.md#register-a-provider "Provider Registration" + + diff --git a/docs/test_plan/api_security_service/access_token_req.json b/docs/test_plan/api_security_service/access_token_req.json new file mode 100644 index 0000000000000000000000000000000000000000..8504736e1fb40d49c3bcb4c6a8bca4dbb6d9f855 --- /dev/null +++ b/docs/test_plan/api_security_service/access_token_req.json @@ -0,0 +1,6 @@ +{ + "client_id": "client_id", + "client_secret": "client_secret", + "grant_type": "client_credentials", + "scope": "scope" +} \ No newline at end of file diff --git a/docs/test_plan/api_security_service/access_token_req_example.json b/docs/test_plan/api_security_service/access_token_req_example.json new file mode 100644 index 0000000000000000000000000000000000000000..070a717db975b76a80792117b50957ce4267cf6b --- /dev/null +++ b/docs/test_plan/api_security_service/access_token_req_example.json @@ -0,0 +1,5 @@ +{ + "client_id": "bb260b4d0b3a0f954fa23f42d979ca", + "grant_type": "client_credentials", + "scope": "3gpp#af7e4cf70063814770e7b00b87273e:service_1" +} diff --git a/docs/test_plan/api_security_service/security_notification.json b/docs/test_plan/api_security_service/security_notification.json new file mode 100644 index 0000000000000000000000000000000000000000..6b94eb5497ed8b6bde547dda6a4abc71783bc998 --- /dev/null +++ b/docs/test_plan/api_security_service/security_notification.json @@ -0,0 +1,9 @@ +{ + "aefId": "aefId", + "apiIds": [ + "apiIds", + "apiIds" + ], + "apiInvokerId": "api_invoker_id", + "cause": "OVERLIMIT_USAGE" +} \ No newline at end of file diff --git a/docs/test_plan/api_security_service/service_security.json b/docs/test_plan/api_security_service/service_security.json new file mode 100644 index 0000000000000000000000000000000000000000..ad7bc1ad5c64f6dc979a294044b5b44f5f43c68a --- /dev/null +++ b/docs/test_plan/api_security_service/service_security.json @@ -0,0 +1,25 @@ +{ + "notificationDestination": "http://robot.testing", + "supportedFeatures": "fffffff", + "securityInfo": [{ + "authenticationInfo": "authenticationInfo", + "authorizationInfo": "authorizationInfo", + "interfaceDetails": { + "ipv4Addr": "127.0.0.1", + "securityMethods": ["PSK"], + "port": 5248 + }, + "prefSecurityMethods": ["PSK", "PKI", "OAUTH"], + }, + { + "authenticationInfo": "authenticationInfo", + "authorizationInfo": "authorizationInfo", + "prefSecurityMethods": ["PSK", "PKI", "OAUTH"], + "aefId": "aefId" + }], + "websockNotifConfig": { + "requestWebsocketUri": true, + "websocketUri": "websocketUri" + }, + "requestTestNotification": true +} diff --git a/docs/test_plan/common_operations/README.md b/docs/test_plan/common_operations/README.md new file mode 100644 index 0000000000000000000000000000000000000000..ff39d943278ac7760f370dac2810fde776a28d5d --- /dev/null +++ 
b/docs/test_plan/common_operations/README.md @@ -0,0 +1,86 @@ + +# Register an Invoker + +## Steps to perform operation + 1. Create public and private key at invoker + 2. Register of Invoker at CCF: + * Send POST to http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register + * Body [invoker register body] + + 3. Obtain Access Token: + * Send POST to *http://{CAPIF_HOSTNAME}/getauth* + * Body [invoker getauth body] + + 4. Onboard Invoker: + * Send POST to https://{CAPIF_HOSTNAME}/api-invoker-management/v1/onboardedInvokers + * Reference Request Body: [invoker onboarding body] + * "onboardingInformation"->"apiInvokerPublicKey": must contain public key generated by Invoker. + * Send at Authorization Header the Bearer access_token obtained previously (Authorization:Bearer ${access_token}) + +## Checks to ensure onboarding + 1. Response to Register: + 1. **201 Created** + + 2. Response to Get Auth: + 1. **200 OK** + 2. ***access_token*** returned. + + 3. Response to Onboard request must accomplish: + 1. **201 Created** + 2. Response Body must follow **APIInvokerEnrolmentDetails** data structure with: + * apiInvokerId + * onboardingInformation->apiInvokerCertificate must contain the public key signed. + 3. Response Header **Location** must be received with URI to new resource created, following this structure: *{apiRoot}/api-invoker-management/{apiVersion}/onboardedInvokers/{onboardingId}* + + +# Register a Provider + +## Steps to Perform operation + 1. Create public and private key at provider for provider itself and each function (apf, aef and amf) + 2. Register of Provider at CCF: + * Send POST to *http://{CAPIF_HOSTNAME}:{CAPIF_HTTP_PORT}/register* + * body [provider register body] + + 3. Obtain Access Token: + * Send POST to *http://{CAPIF_HOSTNAME}/getauth* + * Body [provider getauth body] + + 4. Register Provider: + * Send POST *https://{CAPIF_HOSTNAME}/api-provider-management/v1/registrations* + * body [provider request body] + * Send at Authorization Header the Bearer access_token obtained previously (Authorization:Bearer ${access_token}) + * Store each cert in a file with according name. + +## Checks to ensure provider registration + 1. Response to Register: + 1. **201 Created** + + 2. Response to Get Auth: + 1. **200 OK** + 2. ***access_token*** returned. + + 3. Register Provider at Provider Management: + 1. **201 Created** response. + 2. body returned must accomplish **APIProviderEnrolmentDetails** data structure. + 3. For each **apiProvFuncs**, we must check: + 1. **apiProvFuncId** is set + 2. **apiProvCert** under **regInfo** is set properly + 4. 
Location Header must contain the new resource URL *{apiRoot}/api-provider-management/v1/registrations/{registrationId}* + + + + + +[invoker register body]: ../api_invoker_management/invoker_register_body.json "Invoker Register Body" +[invoker onboarding body]: ../api_invoker_management/invoker_details_post_example.json "API Invoker Request" +[invoker getauth body]: ../api_invoker_management/invoker_getauth_example.json "Get Auth Example" + +[provider register body]: ../api_provider_management/provider_register_body.json "Provider Register Body" +[provider request body]: ../api_provider_management/provider_details_post_example.json "API Provider Enrolment Request" +[provider getauth body]: ../api_provider_management/provider_getauth_example.json "Get Auth Example" + + + + + +[Return To All Test Plans]: ../README.md diff --git a/docs/testing_with_curl/README.md b/docs/testing_with_curl/README.md new file mode 100644 index 0000000000000000000000000000000000000000..d8c3982ab8166055ca8d3c6b0c32cab8bff1bf0b --- /dev/null +++ b/docs/testing_with_curl/README.md @@ -0,0 +1,369 @@ +[**[Return To Main]**] +# Testing Using Curl + +- [Testing Using Curl](#testing-using-curl) + - [cURL scripts (TLS supported)](#curl-scripts-tls-supported) + - [cURL manual execution](#curl-manual-execution) + - [Authentication](#authentication) + - [Invoker](#invoker) + - [Provider](#provider) + - [JWT Authentication APIs](#jwt-authentication-apis) + - [Register an entity](#register-an-entity) + - [Get access token for an existing entity](#get-access-token-for-an-existing-entity) + - [Retrieve and store CA certificate](#retrieve-and-store-ca-certificate) + - [Sign provider certificate](#sign-provider-certificate) + - [Invoker Management APIs](#invoker-management-apis) + - [Onboard an Invoker](#onboard-an-invoker) + - [Update Invoker Details](#update-invoker-details) + - [Offboard an Invoker](#offboard-an-invoker) + - [Publish APIs](#publish-apis) + - [Publish a new API.](#publish-a-new-api) + - [Update a published service API.](#update-a-published-service-api) + - [Unpublish a published service API.](#unpublish-a-published-service-api) + - [Retrieve all published APIs](#retrieve-all-published-apis) + - [Retrieve a published service API.](#retrieve-a-published-service-api) + - [Discover API](#discover-api) + - [Discover published service APIs and retrieve a collection of APIs according to certain filter criteria.](#discover-published-service-apis-and-retrieve-a-collection-of-apis-according-to-certain-filter-criteria) + +## cURL scripts (TLS supported) +Also you can follow the instructions and run the commands of the bash scripts: +* [provider](./capif_tls_curls_exposer.sh) to test CAPIF as provider with TLS support. +* [invoker](./capif_tls_curls_invoker.sh) to test CAPIF as invoker with TLS support. + +## cURL manual execution + +### Authentication +This version will use TLS communication, for that purpose we have 2 different scenarios, according to role: +* Invoker +* Provider + +#### Invoker +To authenticate an invoker user, we must perform next steps: +- Retrieve CA certificate from platform. [Retrieve and store CA certificate](#retrieve-and-store-ca-certificate) +- Register on the CAPIF with invoker role. [Register an entity](#register-an-entity) +- Get a Json Web Token (JWT) in order to request onboarding [Get access token for an existing entity](#get-access-token-for-an-existing-entity) +- Request onboarding adding public key to request. 
[Onboard an Invoker](#onboard-an-invoker) +- Store certificate signed by CAPIF platform to allow TLS onwards. + +**Flow:** + +![Flow](../images/flows/04%20-%20Invoker%20Register.png) +![Flow](../images/flows/05%20-%20Invoker%20Onboarding.png) + +#### Provider +To authenticate an provider user, we must perform next steps: +- Retrieve CA certificate from platform. [Retrieve and store CA certificate](#retrieve-and-store-ca-certificate) +- Register on the CAPIF with provider role. [Register an entity](#register-an-entity) +- Request sign the public key to CAPIF including beared with JWT. [Sign provider certificate](#sign-provider-certificate) +- Store certificate signed by CAPIF platform to allow TLS onwards. + +**Flow:** + +![Flow](../images/flows/01%20-%20Register%20del%20AEF.png) +![Flow](../images/flows/02%20-%20AEF%20API%20Provider%20registration.png) +![Flow](../images/flows/03%20-%20AEF%20Publish.png) + +### JWT Authentication APIs +These APIs are triggered by an entity (Invoker or Provider for release 1.0) to: +- register on the CAPIF Framework +- get a Json Web Token (JWT) in order to be authorized to call CAPIF APIs + +#### Register an entity +Request +```shell +curl --request POST 'http://:/register' --header 'Content-Type: application/json' --data '{ + "username":"...", + "password":"...", + "role":"...", + "description":"...", + "cn":"..." +}' +``` + +* Role: invoker or publisher +* cn: common name + +Response body +```json +{ + "id": "Entity ID", + "message": "Informative message" +} +``` + +#### Get access token for an existing entity +Request +```shell +curl --request POST 'http://:/gettoken' --header 'Content-Type: application/json' --data '{ + "username":"...", + "password":"...", + "role":"..." +}' +``` + +Response body +```json +{ + "access_token": "JSON Web Token for CAPIF APIs", + "message": "Informative message" +} +``` + +#### Retrieve and store CA certificate +```shell +curl --request GET 'http://:/ca-root' 2>/dev/null | jq -r '.certificate' -j > +``` + +#### Sign provider certificate +```shell +curl --request POST 'http:///sign-csr' --header 'Authorization: Bearer ' --header 'Content-Type: application/json' --data-raw '{ + "csr": "RAW PUBLIC KEY CREATED BY PUBLISHER", + "mode": "client", + "filename": provider +}' +``` +Response +``` json +{ + "certificate": "PUBLISHER CERTIFICATE" +} +``` +PUBLISHER CERTIFICATE value must be stored by Provider entity to next request to CAPIF (provider.crt for example) + +### Invoker Management APIs + +These APIs are triggered by a NetApp (i.e. Invoker) + +#### Onboard an Invoker + +```shell +curl --cacert --request POST 'https:///api-invoker-management/v1/onboardedInvokers' --header 'Authorization: Bearer ' --header 'Content-Type: application/json' --data-raw '{ + "notificationDestination" : "http://X:Y/netapp_callback", + "supportedFeatures" : "fffffff", + "apiInvokerInformation" : , + "websockNotifConfig" : { + "requestWebsocketUri" : true, + "websocketUri" : "websocketUri" + }, + "onboardingInformation" : { + "apiInvokerPublicKey" : + }, + "requestTestNotification" : true +}' +``` + +Response Body + +``` json +{ + "apiInvokerId": "7da0a8d4172d7d86c536c0fbc9c372", + "onboardingInformation": { + "apiInvokerPublicKey": "RAW PUBLIC KEY CREATED BY INVOKER", + "apiInvokerCertificate": "INVOKER CERTIFICATE", + "onboardingSecret": "onboardingSecret" + }, + "notificationDestination": "http://host.docker.internal:8086/netapp_callback", + "requestTestNotification": true, + ... 
+} +``` + +INVOKER CERTIFICATE value must be stored by Invoker entity to next request to CAPIF (invoker.crt for example) + +#### Update Invoker Details + +```shell +curl --location --request PUT 'https:///api-invoker-management/v1/onboardedInvokers/' --cert --key --cacert --header 'Content-Type: application/json' --data '{ + "notificationDestination" : "http://X:Y/netapp_callback2", + "supportedFeatures" : "fffffff", + "apiInvokerInformation" : , + "websockNotifConfig" : { + "requestWebsocketUri" : true, + "websocketUri" : "websocketUri2" + }, + "onboardingInformation" : { + "apiInvokerPublicKey" : + }, + "requestTestNotification" : true +}' +``` + +#### Offboard an Invoker + +```shell +curl --cert --key --cacert --request DELETE 'https:///api-invoker-management/v1/onboardedInvokers/' +``` + +### Publish APIs + +These APIs are triggered by the API Publishing Function (APF) of an Provider + +#### Publish a new API. +```shell +curl --cert --key --cacert --request POST 'https:///published-apis/v1//service-apis' --header 'Content-Type: application/json' --data '{ + "apiName": "3gpp-monitoring-event", + "aefProfiles": [ + { + "aefId": "string", + "versions": [ + { + "apiVersion": "v1", + "expiry": "2021-11-30T10:32:02.004Z", + "resources": [ + { + "resourceName": "string", + "commType": "REQUEST_RESPONSE", + "uri": "string", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ], + "custOperations": [ + { + "commType": "REQUEST_RESPONSE", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ] + } + ], + "protocol": "HTTP_1_1", + "dataFormat": "JSON", + "securityMethods": ["PSK"], + "interfaceDescriptions": [ + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + }, + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + } + ] + } + ], + "description": "string", + "supportedFeatures": "fffff", + "shareableInfo": { + "isShareable": true, + "capifProvDoms": [ + "string" + ] + }, + "serviceAPICategory": "string", + "apiSuppFeats": "fffff", + "pubApiPath": { + "ccfIds": [ + "string" + ] + }, + "ccfId": "string" +}' +``` + +#### Update a published service API. +```shell +curl --cert --key --cacert --request PUT 'https:///published-apis/v1//service-apis/' --header 'Content-Type: application/json' --data '{ + "apiName": "3gpp-monitoring-event", + "aefProfiles": [ + { + "aefId": "string1", + "versions": [ + { + "apiVersion": "v1", + "expiry": "2021-11-30T10:32:02.004Z", + "resources": [ + { + "resourceName": "string", + "commType": "REQUEST_RESPONSE", + "uri": "string", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ], + "custOperations": [ + { + "commType": "REQUEST_RESPONSE", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ] + } + ], + "protocol": "HTTP_1_1", + "dataFormat": "JSON", + "securityMethods": ["PSK"], + "interfaceDescriptions": [ + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + }, + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + } + ] + } + ], + "description": "string", + "supportedFeatures": "fffff", + "shareableInfo": { + "isShareable": true, + "capifProvDoms": [ + "string" + ] + }, + "serviceAPICategory": "string", + "apiSuppFeats": "fffff", + "pubApiPath": { + "ccfIds": [ + "string" + ] + }, + "ccfId": "string" +}' +``` + +#### Unpublish a published service API. 
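+For example, reusing the variables set by the provider cURL script linked at the top of this guide ($capifhost, $exposerid and the $apiserviceid captured from the publish response), the call would look like the sketch below; the generic form of the command follows.
+
+```shell
+# Unpublish (delete) a previously published service API; a 204 No Content response is expected.
+curl --cert exposer.crt --key exposer.key --cacert ca.crt \
+  --request DELETE "https://$capifhost/published-apis/v1/$exposerid/service-apis/$apiserviceid"
+```
+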
+```shell +curl --cert --key --cacert --request DELETE 'https:///published-apis/v1//service-apis/' +``` + +#### Retrieve all published APIs +```shell +curl --cert --key --cacert --request GET 'https:///published-apis/v1//service-apis' +``` + +#### Retrieve a published service API. +```shell +curl --cert --key --cacert --request GET 'https:///published-apis/v1//service-apis/' +``` + +### Discover API + +This API is triggered by a NetApp (or Invoker) + +#### Discover published service APIs and retrieve a collection of APIs according to certain filter criteria. +```shell +curl --cert --key --cacert --request GET 'https:///service-apis/v1/allServiceAPIs?api-invoker-id=&api-name=&api-version=&aef-id=&api-cat=&supported-features=&api-supported-features=' +``` + + + +[Return To Main]: ../../README.md#using-curl \ No newline at end of file diff --git a/docs/testing_with_curl/capif_tls_curls_exposer.sh b/docs/testing_with_curl/capif_tls_curls_exposer.sh new file mode 100755 index 0000000000000000000000000000000000000000..5b81712eeebf75965dc0fd6e4b38f1f6a89a1ef7 --- /dev/null +++ b/docs/testing_with_curl/capif_tls_curls_exposer.sh @@ -0,0 +1,205 @@ +##### Execute Exposer curls locally + +##### Configure machine + +##### Add in /etc/hosts: 127.0.0.1 capifcore + + +##### Set environment variables +capifhost="capifcore" +capifhttpport="8080" + +exposerpk="-----BEGIN CERTIFICATE REQUEST-----\nMIIC0TCCAbkCAQAwgYsxEDAOBgNVBAMMB2V4cG9zZXIxFzAVBgNVBAoMDlRlbGVm\nb25pY2EgSStEMRMwEQYDVQQLDApJbm5vdmF0aW9uMQ8wDQYDVQQHDAZNYWRyaWQx\nDzANBgNVBAgMBk1hZHJpZDELMAkGA1UEBhMCRVMxGjAYBgkqhkiG9w0BCQEWC2lu\nbm9AdGlkLmVzMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAkpJ7FzAI\nkzFYxLKbW54lIsQBNIQz5zQIvRZDFcrO4QLR2jQUps9giBWEDih++47JiBJyM+z1\nWkEh7b+moZhQThj7L9PKgJHRhU1oeHpSE1x/r7479J5F+CFRqFo5v9dC+2zGfP4E\nsSrNfp3MK/KQHsHhMzSt881xAHs+p2/bcM+sd/BlXC4J6E1y6Hk3ogI7kq443fcY\noUHZx9ClUSboOvXa1ZSPVxdCV6xKRraUdAKfhMGn+pYtJDsNp8Gg/BN8NXmYUzl9\ntDhjeuIxr4N38LgW3gRHLNIa8acO9eBctWw9AD20JWzFAXvvmsboBPc2wsOVcsml\ncCbisMRKX4JyKQIDAQABoAAwDQYJKoZIhvcNAQELBQADggEBAIxZ1Sec9ATbqjhi\nRz4rvhX8+myXhyfEw2MQ62jz5tpH4qIVZFtn+cZvU/ULySY10WHaBijGgx8fTaMh\nvjQbc+p3PXmgtnmt1QmoOGjDTFa6vghqpxPLSUjjCUe8yj5y24gkOImY6Cv5rzzQ\nlnTMkNvnGgpDgUeiqWcQNbwwge3zkzp9bVRgogTT+EDxiFnjTTF6iUG80sRtXMGr\nD6sygLsF2zijGGfWoKRo/7aZTQxuCiCixceVFXegMfr+eACkOjV25Kso7hYBoEdP\nkgUf5PNpl5uK3/rmPIrl/TeE0SnGGfCYP7QajE9ELRsBVmVDZJb7ZxUl1A4YydFY\ni0QOM3Y=\n-----END CERTIFICATE REQUEST-----\n" + + +##### Retrieve and store CA certificate + +curl --request GET "http://$capifhost:$capifhttpport/ca-root" 2>/dev/null | jq -r '.certificate' -j > ca.crt + + +##### Register an entity + +exposerid=$(curl --request POST "http://$capifhost:$capifhttpport/register" --header 'Content-Type: application/json' --data '{ + "username":"exposer", + "password":"exposer", + "role":"exposer", + "description":"Exposer", + "cn":"exposer" +}' 2>/dev/null | jq -r '.id' -j) + + +##### Get access token + +exposertoken=$(curl --request POST "http://$capifhost:$capifhttpport/gettoken" --header 'Content-Type: application/json' --data '{ + "username":"exposer", + "password":"exposer", + "role":"exposer" +}' 2>/dev/null | jq -r '.access_token' -j) + + +##### Sign exposer certificate + +curl --request POST "http://$capifhost:$capifhttpport/sign-csr" --header "Authorization: Bearer $exposertoken" --header 'Content-Type: application/json' --data-raw "{ + \"csr\": \"$exposerpk\", + \"mode\": \"client\", + \"filename\": \"exposer\" +}" 2>/dev/null | jq -r '.certificate' -j > exposer.crt + + +##### Publish service +curl 
--cert exposer.crt --key exposer.key --cacert ca.crt --request POST "https://$capifhost/published-apis/v1/$exposerid/service-apis" --header 'Content-Type: application/json' --data '{ + "apiName": "3gpp-monitoring-event", + "aefProfiles": [ + { + "aefId": "string", + "versions": [ + { + "apiVersion": "v1", + "expiry": "2021-11-30T10:32:02.004Z", + "resources": [ + { + "resourceName": "string", + "commType": "REQUEST_RESPONSE", + "uri": "string", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ], + "custOperations": [ + { + "commType": "REQUEST_RESPONSE", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ] + } + ], + "protocol": "HTTP_1_1", + "dataFormat": "JSON", + "securityMethods": ["PSK"], + "interfaceDescriptions": [ + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + }, + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + } + ] + } + ], + "description": "string", + "supportedFeatures": "fffff", + "shareableInfo": { + "isShareable": true, + "capifProvDoms": [ + "string" + ] + }, + "serviceAPICategory": "string", + "apiSuppFeats": "fffff", + "pubApiPath": { + "ccfIds": [ + "string" + ] + }, + "ccfId": "string" +}' > response.json + +apiserviceid=$(cat response.json | jq -r '.apiId' -j) + + +##### Update a published service API +curl --cert exposer.crt --key exposer.key --cacert ca.crt --request PUT "https://$capifhost/published-apis/v1/$exposerid/service-apis/$apiserviceid" --header 'Content-Type: application/json' --data '{ + "apiName": "3gpp-monitoring-event", + "aefProfiles": [ + { + "aefId": "string1", + "versions": [ + { + "apiVersion": "v1", + "expiry": "2021-11-30T10:32:02.004Z", + "resources": [ + { + "resourceName": "string", + "commType": "REQUEST_RESPONSE", + "uri": "string", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ], + "custOperations": [ + { + "commType": "REQUEST_RESPONSE", + "custOpName": "string", + "operations": [ + "GET" + ], + "description": "string" + } + ] + } + ], + "protocol": "HTTP_1_1", + "dataFormat": "JSON", + "securityMethods": ["PSK"], + "interfaceDescriptions": [ + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + }, + { + "ipv4Addr": "string", + "port": 65535, + "securityMethods": ["PSK"] + } + ] + } + ], + "description": "string", + "supportedFeatures": "fffff", + "shareableInfo": { + "isShareable": true, + "capifProvDoms": [ + "string" + ] + }, + "serviceAPICategory": "string", + "apiSuppFeats": "fffff", + "pubApiPath": { + "ccfIds": [ + "string" + ] + }, + "ccfId": "string" +}' + + +##### Retrieve all published APIs + +curl --cert exposer.crt --key exposer.key --cacert ca.crt --request GET "https://$capifhost/published-apis/v1/$exposerid/service-apis" + + +##### Retrieve a published service API + +curl --cert exposer.crt --key exposer.key --cacert ca.crt --request GET "https://$capifhost/published-apis/v1/$exposerid/service-apis/$apiserviceid" + + +##### Unpublish a published service API + +curl --cert exposer.crt --key exposer.key --cacert ca.crt --request DELETE "https://$capifhost/published-apis/v1/$exposerid/service-apis/$apiserviceid" + + diff --git a/docs/testing_with_curl/capif_tls_curls_invoker.sh b/docs/testing_with_curl/capif_tls_curls_invoker.sh new file mode 100755 index 0000000000000000000000000000000000000000..d6c287a91ee27891cf652c65e6d5b25f8ca84a63 --- /dev/null +++ b/docs/testing_with_curl/capif_tls_curls_invoker.sh @@ -0,0 +1,86 @@ +##### 
Execute Invoker curls locally + +##### Configure machine + +##### Add in /etc/hosts: 127.0.0.1 capifcore + + +##### Set environment variables + +capifhost="capifcore" +capifhttpport="8080" + +invokerpk="-----BEGIN CERTIFICATE REQUEST-----\nMIIC0TCCAbkCAQAwgYsxEDAOBgNVBAMMB2ludm9rZXIxFzAVBgNVBAoMDlRlbGVm\nb25pY2EgSStEMRMwEQYDVQQLDApJbm5vdmF0aW9uMQ8wDQYDVQQHDAZNYWRyaWQx\nDzANBgNVBAgMBk1hZHJpZDELMAkGA1UEBhMCRVMxGjAYBgkqhkiG9w0BCQEWC2lu\nbm9AdGlkLmVzMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArfITEb3/\nJ5KDt7ia2WsQrd8iSrlH8kh6D9YNPEF+KaIGQ9w8QhmOW416uvIAASzOaCKMNqgb\nCI0NqsbVF9lfaiBgB71vcwX0yKatjACn3Nl3Lnubi+tH4Jb5zGQQXOuxpMHMmgyn\nNTsSc/MeMzX3iUWqLmmhnTC31Mu1ESUPTBa+CitQAj2wYMvBS970WICKrDlxWkR8\nZZBkRBZaxMfqY21VWmREtR+Kl6GCMBtUCUBH6uWjFiOpxYbCxdygxxrA4a3IzmiO\ntXOyLs7iuOP/CLSYfk71MHX2qKlpAyjdRK2W0w0GioV90Hk4uT/YUYy9zjWWN+mm\nrQ9GBy8iRZm7YwIDAQABoAAwDQYJKoZIhvcNAQELBQADggEBAI0btA7KDMvkY4Ib\n0eMteeeT40bm11Yw8/6V48IaIPi9EpZMI+jWyCebw8PBFUs3l3ImWeO8Gma96gyf\np0WB/64MRkUSdOxUWOWGMPIMEF+BH3eiHthx+EbAETtJ0D4KzmH6raxl14qvwLS5\nwxtxPGxu/R5ue5RVJpAzzJ6OX36p05GYSzL+pTotVPpowSdoeNsV+xPgPA0diV8a\nB7Zn/ujwMpsh7IjQPKpOEkhQdxc478Si8dmRbzXkVar1Oa8/QSJ8ZAaFI4VGowjR\nmtxps7AvS5OG9iMPtFQHpqxHVO50CJU5cbsXsYdu9EipGhgIKJDKewBX7tCKk0Ot\nBLU03CY=\n-----END CERTIFICATE REQUEST-----\n" + + +##### Retrieve and store CA certificate + +curl --request GET "http://$capifhost:$capifhttpport/ca-root" 2>/dev/null | jq -r '.certificate' -j > ca.crt + + +##### Register an entity + +invokerid=$(curl --request POST "http://$capifhost:$capifhttpport/register" --header 'Content-Type: application/json' --data '{ + "username":"invoker", + "password":"invoker", + "role":"invoker", + "description":"Invoker", + "cn":"invoker" +}' 2>/dev/null | jq -r '.id' -j) + + +##### Get access token + +invokertoken=$(curl --request POST "http://$capifhost:$capifhttpport/gettoken" --header 'Content-Type: application/json' --data '{ + "username":"invoker", + "password":"invoker", + "role":"invoker" +}' 2>/dev/null | jq -r '.access_token' -j) + + +##### Onboard an Invoker + +curl --cacert ca.crt --request POST "https://$capifhost/api-invoker-management/v1/onboardedInvokers" --header "Authorization: Bearer $invokertoken" --header 'Content-Type: application/json' --data-raw "{ + \"notificationDestination\" : \"http://X:Y/netapp_callback\", + \"supportedFeatures\" : \"fffffff\", + \"apiInvokerInformation\" : \"invoker\", + \"websockNotifConfig\" : { + \"requestWebsocketUri\" : true, + \"websocketUri\" : \"websocketUri\" + }, + \"onboardingInformation\" : { + \"apiInvokerPublicKey\" : \"$invokerpk\" + }, + \"requestTestNotification\" : true +}" > response.json + +cat response.json | jq -r '.onboardingInformation.apiInvokerCertificate' -j > invoker.crt +apiinvokerid=$(cat response.json | jq -r '.apiInvokerId' -j) + + +##### Update Invoker Details + +curl --location --request PUT "https://$capifhost/api-invoker-management/v1/onboardedInvokers/$apiinvokerid" --cert invoker.crt --key invoker.key --cacert ca.crt --header 'Content-Type: application/json' --data "{ + \"notificationDestination\" : \"http://X:Y/netapp_callback2\", + \"supportedFeatures\" : \"fffffff\", + \"apiInvokerInformation\" : \"test\", + \"websockNotifConfig\" : { + \"requestWebsocketUri\" : true, + \"websocketUri\" : \"websocketUri2\" + }, + \"onboardingInformation\" : { + \"apiInvokerPublicKey\" : \"$invokerpk\" + }, + \"requestTestNotification\" : true +}" + + +##### Discover API + +curl --cert invoker.crt --key invoker.key --cacert ca.crt --request GET 
"https://$capifhost/service-apis/v1/allServiceAPIs?api-invoker-id=$apiinvokerid" + + +##### Offboard an Invoker + +curl --cert invoker.crt --key invoker.key --cacert ca.crt --request DELETE "https://$capifhost/api-invoker-management/v1/onboardedInvokers/$apiinvokerid" + diff --git a/docs/testing_with_curl/exposer.key b/docs/testing_with_curl/exposer.key new file mode 100644 index 0000000000000000000000000000000000000000..e84c8c4c2796860f0a3a1b0937d5b444a68692e2 --- /dev/null +++ b/docs/testing_with_curl/exposer.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCSknsXMAiTMVjE +sptbniUixAE0hDPnNAi9FkMVys7hAtHaNBSmz2CIFYQOKH77jsmIEnIz7PVaQSHt +v6ahmFBOGPsv08qAkdGFTWh4elITXH+vvjv0nkX4IVGoWjm/10L7bMZ8/gSxKs1+ +ncwr8pAeweEzNK3zzXEAez6nb9twz6x38GVcLgnoTXLoeTeiAjuSrjjd9xihQdnH +0KVRJug69drVlI9XF0JXrEpGtpR0Ap+Ewaf6li0kOw2nwaD8E3w1eZhTOX20OGN6 +4jGvg3fwuBbeBEcs0hrxpw714Fy1bD0APbQlbMUBe++axugE9zbCw5VyyaVwJuKw +xEpfgnIpAgMBAAECggEACs11TqlcIG5qd/N1Ts8ni9noACpe4ZiXV578lRkW8++E +xEZtX+P4iIm+wK+3DYGhvyp430naGsD30rF62FMaVr8xmCijC/nIoutTGqS38t8G +Ns+C/2Lrjj+fvemJyGasSaKOjdIc9L/OWG7MiE/+05LU2bTKvfrIwXvT4NGg2ei1 +NDO8vS5fRHYZ1LyCyrCDetP2aYrTlPao20hmU4IDyh4N17wLuPgijC+AuqR2Xic0 +Mk4ofZ/6Y3oN0rrov2yG7IXjMJQI469IQ6TJLlyFc8tQIF5Y3CMMCMuVMq5m33bq +/6bow4/VYFG8mPzy7lQLQ8YeEPsgDKL0pB4zqDr7ZwKBgQDJRJoG2PSaEOt6DIKV +84to73oD9x9lOSrmaH2/NzL3mwLXP2Is4nmLzEDQvA0UhTZe9c0n6OoE3uRZ1gAu +JIe3zXTJSK4/ysmePUZL1js5bKtuHBrcSCOupWRuJXbaXK5uqISDHUgHiRw3bq8y +g8SZY/JOBPyJhVlKhmhNCYMi9wKBgQC6bjJ//tLpH6EG4ux0O2StzUoHrvV2cyUj +RRxGvAt92sdsZaVKmIW/SlLy8tv5HJqblfn6m7aY/vUYbN3AfMJ4teLZz5Y//CH3 +jPchHyk/uhh7gxufiD65i5bfVyRt54tDbyVDc2/1prUyD5W4q4UNOmvhXym5saIc +U5WNCnSr3wKBgQCs8MaM5bVgAPPlfoRixs9ejo/AgoK2nqWvL9AFEzA3NDn/rJX2 +TW/1YL+83Ck9Ha33cKwlA+y53LBIRSsIexknJWKZZltbsysFTk9t8JoZILg5N+sY +puAKPFGMl6KFxSeZLDIY23s+BmF5fCEMfc7botbclUpN/IgaEl3i/C5zRwKBgHsx +lKdmEaNBZlwxmgTYtpfvH2tiXwwN3M2ovp2zZ3icGMn1hTt8/GzCxXuLpnbAQx5r +BcxoF0qUuAuS7RpklvHDZ4t9FJFloGCAQ1Ic0FovNDxyD8/k7WYY6vLdF9KUfj9q +c9pVrvdKWVQiXlKw7PQn1eAQzXbK/g/v39Raw2xLAoGBAILTLY3sGBNkFCVhJlyZ +DaIwkbtnpCBT2T7DUupw51aLhh4rnuJ5wA3uGdRqoKVYSc9DuOwB/yNFGuQDElxQ +jfKlX0X5xItaxZ5FR4EvGCnqBJl6JM3QekzhXtq5VdY5zIf/HHqFYebcMFrkEicZ +uuAZd4wa+jn9SR9mUYtS+Lq+ +-----END PRIVATE KEY----- diff --git a/docs/testing_with_curl/invoker.key b/docs/testing_with_curl/invoker.key new file mode 100644 index 0000000000000000000000000000000000000000..15b96bbd0d00ef4ee73a5557e79b42c4b6d6598f --- /dev/null +++ b/docs/testing_with_curl/invoker.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCt8hMRvf8nkoO3 +uJrZaxCt3yJKuUfySHoP1g08QX4pogZD3DxCGY5bjXq68gABLM5oIow2qBsIjQ2q +xtUX2V9qIGAHvW9zBfTIpq2MAKfc2Xcue5uL60fglvnMZBBc67GkwcyaDKc1OxJz +8x4zNfeJRaouaaGdMLfUy7URJQ9MFr4KK1ACPbBgy8FL3vRYgIqsOXFaRHxlkGRE +FlrEx+pjbVVaZES1H4qXoYIwG1QJQEfq5aMWI6nFhsLF3KDHGsDhrcjOaI61c7Iu +zuK44/8ItJh+TvUwdfaoqWkDKN1ErZbTDQaKhX3QeTi5P9hRjL3ONZY36aatD0YH +LyJFmbtjAgMBAAECggEAAyR5OxdJ1W5jnSD9kBCvO6jDMIUuIcU+SAZUfGaxYybn +EeNCtBiPGV8tWWLHJJ0bL6iKpAv+gOKeSpKOmwU7XkHZEWVlRAfpiNfen2bcTCiw +fg3D4bgRMmDwwyMH368QFlJ56UFMCuqb0x+oCeMRIdNjwfbcPVCpZDYNGwTDBzoy +72Aj5TssEu+Ft5VVGwhsvq0v6bd6OWmW34PI9SHzXzRlRw4b4ZtZekW8o/QpO1gO +F+ARbCGE2qjqHWRU/vzINMmAucqhDM6/f7Un5XXr+Zm+8u4PGa5eLWkebJHhfwKX +Ag0WToD/FmDPRqlnjZdzraJlhuXLGdhRAlzdnIQNRQKBgQDvhiVewu7CTzgB66dA +cdrJkXVJPZUGvUYmXkwPaSju7hjDc87pNz+szH2QP+Qm+pD1mV9OswIim4Oi7C1l +lEe423QGjtsn5txzcRk+ZzyX/Z2ltcnXi8N/MNeOZ2qFAgP/IIOTcgowKftuUT6w 
+2A1DQFj6xxu6vrzxOqIL6tXy7wKBgQC56SM80udTqyb9+wk/KuDSgym3bSaZ8i5q +dNVV5wOxCotLGG9Any61TVOIP/SUjar4f4+FznLZjJYXIZvpbS32PUOtlnKtOmp6 +OBKIpEXq2zq0u/o/i8EyOb6laNqehfffRYqqYU9mJXVjiTUNcOVqfLljeeui1r1P +txSRBlTuzQKBgQDUgB/hbXHjw+J9mbM9soUXtUvn2ZHAc+Wrnpc+SN6+80/W/4R/ +VbvRM27mrjhc+InoytRKfvgS+gOUZJJ1/1KOR2wtcUovoVrNtHZf7blNYv0dCiXz +bBTaX9uthER1km83RoJVKqStTGG74qqKvHMvygPnIQSR7iy0m38usX500wKBgGeM +koLzWcOBhhNa+tiDMnwucFLpaeG/QdkrwBO7u5OlstYeAwF0aFi1fDxcmwcPLVaB +/lfiGJhRtNunbacDl+EaWJLcRH12Fw6CItiW3xakCzvVo9o3JmGqRiTtlS9MoTZs +DoM99jKH1K2fI7yb0DySwdPFedjWUNWQvNTWOQJVAoGAYr9Kuo7s83Qe9CaHQW/Y +PPL0dYBA63guuw2mNQjBL5LuqMZPz6vVB0hIVlYb5Xgw48OWUThHksJ0qltJK7kR +OPRyOxiWpJVo5rZPVzS0Ofbmau9z1VYr358RqR2N2EqG5KDr5QZT9nQq7k8EJvrF +NM/zMhxmgtNYez417Q/3U+M= +-----END PRIVATE KEY----- diff --git a/docs/testing_with_postman/CAPIF.postman_collection.json b/docs/testing_with_postman/CAPIF.postman_collection.json new file mode 100644 index 0000000000000000000000000000000000000000..e65c826994c986b46367eeb223d65444a38cebf9 --- /dev/null +++ b/docs/testing_with_postman/CAPIF.postman_collection.json @@ -0,0 +1,982 @@ +{ + "info": { + "_postman_id": "5cfdf0d7-3b3c-4961-9cb9-84c2bf85056c", + "name": "CAPIF", + "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json", + "_exporter_id": "31608242", + "_collection_link": "https://red-comet-993867.postman.co/workspace/Team-Workspace~bfc7c442-a60c-4bb1-8730-fdabc2df89b9/collection/31608242-5cfdf0d7-3b3c-4961-9cb9-84c2bf85056c?action=share&source=collection_link&creator=31608242" + }, + "item": [ + { + "name": "01-register_user_provider", + "event": [ + { + "listen": "test", + "script": { + "exec": [ + "var res = JSON.parse(responseBody);", + "pm.environment.set('ONBOARDING_URL', res.ccf_api_onboarding_url);", + "pm.environment.set('PUBLISH_URL', res.ccf_publish_url);", + "pm.environment.set('USER_ID', res.id);", + "" + ], + "type": "text/javascript" + } + } + ], + "request": { + "method": "POST", + "header": [], + "body": { + "mode": "raw", + "raw": "{\n\"password\": \"{{PASSWORD}}\",\n\"username\": \"{{USERNAME}}\",\n\"description\": \"provider\",\n\"role\": \"provider\",\n\"cn\": \"provider\"\n}", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "https://{{REGISTER_HOSTNAME}}:{{REGISTER_PORT}}/register", + "protocol": "https", + "host": [ + "{{REGISTER_HOSTNAME}}" + ], + "port": "{{REGISTER_PORT}}", + "path": [ + "register" + ] + } + }, + "response": [] + }, + { + "name": "02-getauth_provider", + "event": [ + { + "listen": "test", + "script": { + "exec": [ + "var res = JSON.parse(responseBody);", + "", + "pm.environment.set('CA_ROOT', res.ca_root);", + "pm.environment.set('ACCESS_TOKEN', res.access_token);", + "", + "setTimeout(() => {", + " pm.sendRequest({", + " url: 'http://localhost:3000/write_ca',", + " method: 'POST',", + " header: 'Content-Type:application/json',", + " encoding: 'binary',", + " body: {", + " mode: 'raw',", + " raw: res", + " }", + " }, function (err, res) {", + " console.log(res);", + " });", + " }, 5000);" + ], + "type": "text/javascript" + } + } + ], + "request": { + "method": "POST", + "header": [], + "body": { + "mode": "raw", + "raw": "{\n\"password\": \"{{PASSWORD}}\",\n\"username\": \"{{USERNAME}}\"\n}", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "https://{{REGISTER_HOSTNAME}}:{{REGISTER_PORT}}/getauth", + "protocol": "https", + "host": [ + "{{REGISTER_HOSTNAME}}" + ], + "port": "{{REGISTER_PORT}}", + "path": [ + "getauth" + ] + } + }, + 
"response": [] + }, + { + "name": "03-onboard_provider", + "event": [ + { + "listen": "test", + "script": { + "exec": [ + "var res = JSON.parse(responseBody);", + "if (pm.response.code == 201){", + " ", + " pm.environment.set('PROVIDER_ID', res.apiProvDomId);", + "", + " const roleVariableMapping = {", + " \"AEF\": { id: 'AEF_ID', cert: 'AEF_CERT' },", + " \"APF\": { id: 'APF_ID', cert: 'APF_CERT' },", + " \"AMF\": { id: 'AMF_ID', cert: 'AMF_CERT' }", + " };", + "", + " res.apiProvFuncs.forEach(function(elemento) {", + " const role = elemento.apiProvFuncRole;", + " if (roleVariableMapping.hasOwnProperty(role)) {", + " const variables = roleVariableMapping[role];", + " pm.environment.set(variables.id, elemento.apiProvFuncId);", + " pm.environment.set(variables.cert, elemento.regInfo.apiProvCert);", + "", + " }", + " });", + "", + "}", + "", + "", + "" + ], + "type": "text/javascript" + } + }, + { + "listen": "prerequest", + "script": { + "exec": [ + "", + "var res = JSON.parse(pm.request.body.raw);", + "", + "res.apiProvFuncs.forEach(function(elemento) {", + "", + " setTimeout(() => {", + " pm.sendRequest({", + " url: 'http://localhost:3000/generate_csr',", + " method: 'POST',", + " header: 'Content-Type:application/json',", + " encoding: 'binary',", + " body: {", + " mode: 'raw',", + " raw: elemento", + " }", + " }, function (err, response) {", + " j_file = JSON.parse(response.text());", + " elemento.regInfo.apiProvPubKey = j_file.csr;", + " pm.environment.set(elemento.apiProvFuncRole+'_KEY', j_file.key);", + " });", + " }, 5000);", + "", + "});", + "", + "pm.request.body.raw = res;" + ], + "type": "text/javascript" + } + } + ], + "request": { + "auth": { + "type": "bearer", + "bearer": [ + { + "key": "token", + "value": "{{ACCESS_TOKEN}}", + "type": "string" + } + ] + }, + "method": "POST", + "header": [ + { + "key": "", + "value": "", + "type": "text", + "disabled": true + } + ], + "body": { + "mode": "raw", + "raw": "{\n\"apiProvFuncs\": [\n {\n \"regInfo\": {\n \"apiProvPubKey\": \"\"\n },\n \"apiProvFuncRole\": \"AEF\",\n \"apiProvFuncInfo\": \"dummy_aef\"\n },\n {\n \"regInfo\": {\n \"apiProvPubKey\": \"\"\n },\n \"apiProvFuncRole\": \"APF\",\n \"apiProvFuncInfo\": \"dummy_apf\"\n },\n {\n \"regInfo\": {\n \"apiProvPubKey\": \"\"\n },\n \"apiProvFuncRole\": \"AMF\",\n \"apiProvFuncInfo\": \"dummy_amf\"\n }\n],\n\"apiProvDomInfo\": \"This is provider\",\n\"suppFeat\": \"fff\",\n\"failReason\": \"string\",\n\"regSec\": \"{{ACCESS_TOKEN}}\"\n}", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "https://{{CAPIF_HOSTNAME}}/{{ONBOARDING_URL}}", + "protocol": "https", + "host": [ + "{{CAPIF_HOSTNAME}}" + ], + "path": [ + "{{ONBOARDING_URL}}" + ] + } + }, + "response": [] + }, + { + "name": "04-publish_api", + "event": [ + { + "listen": "prerequest", + "script": { + "exec": [ + "setTimeout(() => {", + " pm.sendRequest({", + " url: 'http://localhost:3000/write_cert',", + " method: 'POST',", + " header: 'Content-Type:application/json',", + " encoding: 'binary',", + " body: {", + " mode: 'raw',", + " raw: {cert: pm.environment.get('APF_CERT'), key:pm.environment.get('APF_KEY')}", + " }", + " }, function (err, response) {", + " console.log(response)", + " });", + " }, 5000);", + "", + "", + "" + ], + "type": "text/javascript" + } + }, + { + "listen": "test", + "script": { + "exec": [ + "" + ], + "type": "text/javascript" + } + } + ], + "protocolProfileBehavior": { + "strictSSL": true + }, + "request": { + "auth": { + "type": "noauth" + }, + "method": "POST", + 
"header": [ + { + "key": "", + "value": "", + "type": "text", + "disabled": true + } + ], + "body": { + "mode": "raw", + "raw": "\n{\n \"apiName\": \"hello_api_demo_v2\",\n \"aefProfiles\": [\n {\n \"aefId\": \"{{AEF_ID}}\",\n \"versions\": [\n {\n \"apiVersion\": \"v1\",\n \"expiry\": \"2021-11-30T10:32:02.004Z\",\n \"resources\": [\n {\n \"resourceName\": \"hello-endpoint\",\n \"commType\": \"REQUEST_RESPONSE\",\n \"uri\": \"/hello\",\n \"custOpName\": \"string\",\n \"operations\": [\n \"POST\"\n ],\n \"description\": \"Endpoint to receive a welcome message\"\n }\n ],\n \"custOperations\": [\n {\n \"commType\": \"REQUEST_RESPONSE\",\n \"custOpName\": \"string\",\n \"operations\": [\n \"POST\"\n ],\n \"description\": \"string\"\n }\n ]\n }\n ],\n \"protocol\": \"HTTP_1_1\",\n \"dataFormat\": \"JSON\",\n \"securityMethods\": [\"Oauth\"],\n \"interfaceDescriptions\": [\n {\n \"ipv4Addr\": \"localhost\",\n \"port\": 8088,\n \"securityMethods\": [\"Oauth\"]\n }\n ]\n }\n ],\n \"description\": \"Hello api services\",\n \"supportedFeatures\": \"fffff\",\n \"shareableInfo\": {\n \"isShareable\": true,\n \"capifProvDoms\": [\n \"string\"\n ]\n },\n \"serviceAPICategory\": \"string\",\n \"apiSuppFeats\": \"fffff\",\n \"pubApiPath\": {\n \"ccfIds\": [\n \"string\"\n ]\n },\n \"ccfId\": \"string\"\n }", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "https://{{CAPIF_HOSTNAME}}/published-apis/v1/{{APF_ID}}/service-apis", + "protocol": "https", + "host": [ + "{{CAPIF_HOSTNAME}}" + ], + "path": [ + "published-apis", + "v1", + "{{APF_ID}}", + "service-apis" + ] + } + }, + "response": [] + }, + { + "name": "05-register_user_invoker", + "event": [ + { + "listen": "test", + "script": { + "exec": [ + "var res = JSON.parse(responseBody);", + "pm.environment.set('ONBOARDING_URL_INVOKER', res.ccf_onboarding_url);", + "pm.environment.set('DISCOVER_URL', res.ccf_discover_url);", + "pm.environment.set('USER_INVOKER_ID', res.id);", + "" + ], + "type": "text/javascript" + } + } + ], + "request": { + "method": "POST", + "header": [], + "body": { + "mode": "raw", + "raw": "{\n\"password\": \"{{PASSWORD}}\",\n\"username\": \"{{USERNAME_INVOKER}}\",\n\"description\": \"invoker\",\n\"role\": \"invoker\",\n\"cn\": \"invoker\"\n}", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "https://{{REGISTER_HOSTNAME}}:{{REGISTER_PORT}}/register", + "protocol": "https", + "host": [ + "{{REGISTER_HOSTNAME}}" + ], + "port": "{{REGISTER_PORT}}", + "path": [ + "register" + ] + } + }, + "response": [] + }, + { + "name": "06-getauth_invoker", + "event": [ + { + "listen": "test", + "script": { + "exec": [ + "var res = JSON.parse(responseBody);", + "", + "pm.environment.set('CA_ROOT', res.ca_root);", + "pm.environment.set('ACCESS_TOKEN_INVOKER', res.access_token);", + "", + "setTimeout(() => {", + " pm.sendRequest({", + " url: 'http://localhost:3000/write_ca',", + " method: 'POST',", + " header: 'Content-Type:application/json',", + " encoding: 'binary',", + " body: {", + " mode: 'raw',", + " raw: res", + " }", + " }, function (err, res) {", + " console.log(res);", + " });", + " }, 5000);" + ], + "type": "text/javascript" + } + } + ], + "request": { + "method": "POST", + "header": [], + "body": { + "mode": "raw", + "raw": "{\n\"password\": \"{{PASSWORD}}\",\n\"username\": \"{{USERNAME_INVOKER}}\"\n}", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "https://{{REGISTER_HOSTNAME}}:{{REGISTER_PORT}}/getauth", + "protocol": "https", + "host": [ + 
"{{REGISTER_HOSTNAME}}" + ], + "port": "{{REGISTER_PORT}}", + "path": [ + "getauth" + ] + } + }, + "response": [] + }, + { + "name": "07-onboard_invoker", + "event": [ + { + "listen": "test", + "script": { + "exec": [ + "var res = JSON.parse(responseBody);", + "if (pm.response.code == 201){", + " ", + " pm.environment.set('INVOKER_ID', res.apiInvokerId);", + " pm.environment.set('INVOKER_CERT', res.onboardingInformation.apiInvokerCertificate);", + "}", + "", + "", + "" + ], + "type": "text/javascript" + } + }, + { + "listen": "prerequest", + "script": { + "exec": [ + "", + "var res = JSON.parse(pm.request.body.raw);", + "", + "", + "setTimeout(() => {", + " pm.sendRequest({", + " url: 'http://localhost:3000/generate_csr_invoker',", + " method: 'POST',", + " header: 'Content-Type:application/json',", + " encoding: 'binary',", + " body: {", + " mode: 'raw',", + " raw: {}", + " }", + " }, function (err, response) {", + " j_file = JSON.parse(response.text());", + " res.onboardingInformation.apiInvokerPublicKey = j_file.csr;", + " pm.environment.set('INVOKER_KEY', j_file.key);", + " });", + " }, 5000);", + "", + "", + "pm.request.body.raw = res;" + ], + "type": "text/javascript" + } + } + ], + "request": { + "auth": { + "type": "bearer", + "bearer": [ + { + "key": "token", + "value": "{{ACCESS_TOKEN_INVOKER}}", + "type": "string" + } + ] + }, + "method": "POST", + "header": [ + { + "key": "", + "value": "", + "type": "text", + "disabled": true + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"notificationDestination\" : \"http://host.docker.internal:8086/netapp_callback\",\n \"supportedFeatures\" : \"fffffff\",\n \"apiInvokerInformation\" : \"dummy\",\n \"websockNotifConfig\" : {\n \"requestWebsocketUri\" : true,\n \"websocketUri\" : \"websocketUri\"\n },\n \"onboardingInformation\" : {\n \"apiInvokerPublicKey\" : \"\"\n },\n \"requestTestNotification\" : true\n}", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "https://{{CAPIF_HOSTNAME}}/{{ONBOARDING_URL_INVOKER}}", + "protocol": "https", + "host": [ + "{{CAPIF_HOSTNAME}}" + ], + "path": [ + "{{ONBOARDING_URL_INVOKER}}" + ] + } + }, + "response": [] + }, + { + "name": "08-discover", + "event": [ + { + "listen": "prerequest", + "script": { + "exec": [ + "setTimeout(() => {", + " pm.sendRequest({", + " url: 'http://localhost:3000/write_cert',", + " method: 'POST',", + " header: 'Content-Type:application/json',", + " encoding: 'binary',", + " body: {", + " mode: 'raw',", + " raw: {cert: pm.environment.get('INVOKER_CERT'), key:pm.environment.get('INVOKER_KEY')}", + " }", + " }, function (err, response) {", + " console.log(response)", + " });", + " }, 5000);", + "", + "", + "" + ], + "type": "text/javascript" + } + }, + { + "listen": "test", + "script": { + "exec": [ + "var res = JSON.parse(responseBody);", + "if (pm.response.code == 200){", + "", + " res.serviceAPIDescriptions.forEach(function(api) {", + " pm.environment.set('API_SERVICE_ID', api.apiId);", + " pm.environment.set('API_NAME', api.apiName);", + " pm.environment.set('API_AEF_ID', api.aefProfiles[0].aefId);", + " pm.environment.set('IPV4ADDR', api.aefProfiles[0].interfaceDescriptions[0].ipv4Addr);", + " pm.environment.set('PORT', api.aefProfiles[0].interfaceDescriptions[0].port);", + " pm.environment.set('URI', api.aefProfiles[0].versions[0].resources[0].uri);", + " });", + "}" + ], + "type": "text/javascript" + } + } + ], + "protocolProfileBehavior": { + "disableBodyPruning": true, + "strictSSL": true + }, + "request": { + "auth": { + "type": 
"noauth" + }, + "method": "GET", + "header": [ + { + "key": "", + "value": "", + "type": "text", + "disabled": true + } + ], + "body": { + "mode": "raw", + "raw": "", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "https://{{CAPIF_HOSTNAME}}/{{DISCOVER_URL}}{{INVOKER_ID}}", + "protocol": "https", + "host": [ + "{{CAPIF_HOSTNAME}}" + ], + "path": [ + "{{DISCOVER_URL}}{{INVOKER_ID}}" + ] + } + }, + "response": [] + }, + { + "name": "09-security_context", + "event": [ + { + "listen": "prerequest", + "script": { + "exec": [ + "setTimeout(() => {", + " pm.sendRequest({", + " url: 'http://localhost:3000/write_cert',", + " method: 'POST',", + " header: 'Content-Type:application/json',", + " encoding: 'binary',", + " body: {", + " mode: 'raw',", + " raw: {cert: pm.environment.get('INVOKER_CERT'), key:pm.environment.get('INVOKER_KEY')}", + " }", + " }, function (err, response) {", + " console.log(response)", + " });", + " }, 5000);", + "", + "", + "" + ], + "type": "text/javascript" + } + }, + { + "listen": "test", + "script": { + "exec": [ + "" + ], + "type": "text/javascript" + } + } + ], + "protocolProfileBehavior": { + "strictSSL": true + }, + "request": { + "auth": { + "type": "noauth" + }, + "method": "PUT", + "header": [ + { + "key": "", + "value": "", + "type": "text", + "disabled": true + } + ], + "body": { + "mode": "raw", + "raw": "{\n \"securityInfo\": [\n {\n \"prefSecurityMethods\": [\n \"Oauth\"\n ],\n \"authenticationInfo\": \"string\",\n \"authorizationInfo\": \"string\",\n \"aefId\": \"{{API_AEF_ID}}\",\n \"apiId\": \"{{API_SERVICE_ID}}\"\n }\n ],\n \"notificationDestination\": \"https://mynotificationdest.com\",\n \"requestTestNotification\": true,\n \"websockNotifConfig\": {\n \"websocketUri\": \"string\",\n \"requestWebsocketUri\": true\n },\n \"supportedFeatures\": \"fff\"\n}", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "https://{{CAPIF_HOSTNAME}}/capif-security/v1/trustedInvokers/{{INVOKER_ID}}", + "protocol": "https", + "host": [ + "{{CAPIF_HOSTNAME}}" + ], + "path": [ + "capif-security", + "v1", + "trustedInvokers", + "{{INVOKER_ID}}" + ] + } + }, + "response": [] + }, + { + "name": "10-get_token", + "event": [ + { + "listen": "prerequest", + "script": { + "exec": [ + "setTimeout(() => {", + " pm.sendRequest({", + " url: 'http://localhost:3000/write_cert',", + " method: 'POST',", + " header: 'Content-Type:application/json',", + " encoding: 'binary',", + " body: {", + " mode: 'raw',", + " raw: {cert: pm.environment.get('INVOKER_CERT'), key:pm.environment.get('INVOKER_KEY')}", + " }", + " }, function (err, response) {", + " console.log(response)", + " });", + " }, 5000);", + "", + "", + "" + ], + "type": "text/javascript" + } + }, + { + "listen": "test", + "script": { + "exec": [ + "var res = JSON.parse(responseBody);", + "if (pm.response.code == 200){", + " pm.environment.set('NETAPP_SERVICE_TOKEN', res.access_token);", + "}" + ], + "type": "text/javascript" + } + } + ], + "protocolProfileBehavior": { + "strictSSL": true, + "disabledSystemHeaders": {} + }, + "request": { + "auth": { + "type": "noauth" + }, + "method": "POST", + "header": [ + { + "key": "", + "value": "", + "type": "text", + "disabled": true + } + ], + "body": { + "mode": "urlencoded", + "urlencoded": [ + { + "key": "client_id", + "value": "{{INVOKER_ID}}", + "type": "text" + }, + { + "key": "grant_type", + "value": "client_credentials", + "type": "text" + }, + { + "key": "client_secret", + "value": "string", + "type": "text" + }, + { + 
"key": "scope", + "value": "3gpp#{{API_AEF_ID}}:{{API_NAME}}", + "type": "text" + } + ] + }, + "url": { + "raw": "https://{{CAPIF_HOSTNAME}}/capif-security/v1/securities/{{INVOKER_ID}}/token", + "protocol": "https", + "host": [ + "{{CAPIF_HOSTNAME}}" + ], + "path": [ + "capif-security", + "v1", + "securities", + "{{INVOKER_ID}}", + "token" + ] + } + }, + "response": [] + }, + { + "name": "11-call_service", + "event": [ + { + "listen": "prerequest", + "script": { + "exec": [ + "" + ], + "type": "text/javascript" + } + }, + { + "listen": "test", + "script": { + "exec": [ + "" + ], + "type": "text/javascript" + } + } + ], + "protocolProfileBehavior": { + "strictSSL": false + }, + "request": { + "auth": { + "type": "bearer", + "bearer": [ + { + "key": "token", + "value": "{{NETAPP_SERVICE_TOKEN}}", + "type": "string" + } + ] + }, + "method": "POST", + "header": [ + { + "key": "", + "value": "", + "type": "text", + "disabled": true + } + ], + "body": { + "mode": "raw", + "raw": "{\n\"name\": {{USERNAME_INVOKER}}\n}", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "http://{{IPV4ADDR}}:{{PORT}}{{URI}}", + "protocol": "http", + "host": [ + "{{IPV4ADDR}}" + ], + "port": "{{PORT}}{{URI}}" + } + }, + "response": [] + }, + { + "name": "offboard_provider", + "event": [ + { + "listen": "prerequest", + "script": { + "exec": [ + "setTimeout(() => {", + " pm.sendRequest({", + " url: 'http://localhost:3000/write_cert',", + " method: 'POST',", + " header: 'Content-Type:application/json',", + " encoding: 'binary',", + " body: {", + " mode: 'raw',", + " raw: {cert: pm.environment.get('AMF_CERT'), key:pm.environment.get('AMF_KEY')}", + " }", + " }, function (err, response) {", + " console.log(response)", + " });", + " }, 5000);" + ], + "type": "text/javascript" + } + } + ], + "protocolProfileBehavior": { + "strictSSL": true + }, + "request": { + "auth": { + "type": "noauth" + }, + "method": "DELETE", + "header": [], + "url": { + "raw": "https://{{CAPIF_HOSTNAME}}/{{ONBOARDING_URL}}/{{PROVIDER_ID}}", + "protocol": "https", + "host": [ + "{{CAPIF_HOSTNAME}}" + ], + "path": [ + "{{ONBOARDING_URL}}", + "{{PROVIDER_ID}}" + ] + } + }, + "response": [] + }, + { + "name": "offboard_invoker", + "event": [ + { + "listen": "prerequest", + "script": { + "exec": [ + "setTimeout(() => {", + " pm.sendRequest({", + " url: 'http://localhost:3000/write_cert',", + " method: 'POST',", + " header: 'Content-Type:application/json',", + " encoding: 'binary',", + " body: {", + " mode: 'raw',", + " raw: {cert: pm.environment.get('INVOKER_CERT'), key:pm.environment.get('INVOKER_KEY')}", + " }", + " }, function (err, response) {", + " console.log(response)", + " });", + " }, 5000);" + ], + "type": "text/javascript" + } + } + ], + "protocolProfileBehavior": { + "strictSSL": true + }, + "request": { + "auth": { + "type": "noauth" + }, + "method": "DELETE", + "header": [], + "url": { + "raw": "https://{{CAPIF_HOSTNAME}}/{{ONBOARDING_URL_INVOKER}}/{{INVOKER_ID}}", + "protocol": "https", + "host": [ + "{{CAPIF_HOSTNAME}}" + ], + "path": [ + "{{ONBOARDING_URL_INVOKER}}", + "{{INVOKER_ID}}" + ] + } + }, + "response": [] + }, + { + "name": "remove_user_invoker", + "request": { + "method": "DELETE", + "header": [], + "body": { + "mode": "raw", + "raw": "{\n\"password\": \"{{PASSWORD}}\",\n\"username\": \"{{USERNAME_INVOKER}}\"\n}", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "https://{{REGISTER_HOSTNAME}}:{{REGISTER_PORT}}/remove", + "protocol": "https", + "host": [ + 
"{{REGISTER_HOSTNAME}}" + ], + "port": "{{REGISTER_PORT}}", + "path": [ + "remove" + ] + } + }, + "response": [] + }, + { + "name": "remove_user_provider", + "request": { + "method": "DELETE", + "header": [], + "body": { + "mode": "raw", + "raw": "{\n\"password\": \"{{PASSWORD}}\",\n\"username\": \"{{USERNAME}}\"\n}", + "options": { + "raw": { + "language": "json" + } + } + }, + "url": { + "raw": "https://{{REGISTER_HOSTNAME}}:{{REGISTER_PORT}}/remove", + "protocol": "https", + "host": [ + "{{REGISTER_HOSTNAME}}" + ], + "port": "{{REGISTER_PORT}}", + "path": [ + "remove" + ] + } + }, + "response": [] + } + ] +} \ No newline at end of file diff --git a/docs/testing_with_postman/CAPIF.postman_environment.json b/docs/testing_with_postman/CAPIF.postman_environment.json new file mode 100644 index 0000000000000000000000000000000000000000..ab3839e9e78b498312a14db675316a76455747b0 --- /dev/null +++ b/docs/testing_with_postman/CAPIF.postman_environment.json @@ -0,0 +1,237 @@ +{ + "id": "f2daf431-63c4-4275-8755-4cc5de2e566d", + "name": "CAPIF", + "values": [ + { + "key": "CAPIF_HOSTNAME", + "value": "capifcore", + "type": "default", + "enabled": true + }, + { + "key": "CAPIF_PORT", + "value": "8080", + "type": "default", + "enabled": true + }, + { + "key": "REGISTER_HOSTNAME", + "value": "localhost", + "type": "default", + "enabled": true + }, + { + "key": "REGISTER_PORT", + "value": "8084", + "type": "default", + "enabled": true + }, + { + "key": "USERNAME", + "value": "ProviderONE", + "type": "default", + "enabled": true + }, + { + "key": "PASSWORD", + "value": "pass", + "type": "default", + "enabled": true + }, + { + "key": "CALLBACK_IP", + "value": "host.docker.internal", + "type": "default", + "enabled": true + }, + { + "key": "CALLBACK_PORT", + "value": "8087", + "type": "default", + "enabled": true + }, + { + "key": "ONBOARDING_URL", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "PUBLISH_URL", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "USER_ID", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "CA_ROOT", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "ACCESS_TOKEN", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "APF_KEY", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "AMF_KEY", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "AEF_KEY", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "PROVIDER_ID", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "AEF_ID", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "AEF_CERT", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "APF_ID", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "APF_CERT", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "AMF_ID", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "AMF_CERT", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "ONBOARDING_URL_INVOKER", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "DISCOVER_URL", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "USER_INVOKER_ID", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "ACCESS_TOKEN_INVOKER", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "INVOKER_KEY", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "INVOKER_ID", + "value": "", + "type": "any", + "enabled": true 
+ }, + { + "key": "INVOKER_CERT", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "API_ID", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "API_NAME", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "IPV4ADDR", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "PORT", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "URI", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "API_SERVICE_ID", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "API_AEF_ID", + "value": "", + "type": "any", + "enabled": true + }, + { + "key": "NETAPP_SERVICE_TOKEN", + "value": "", + "type": "any", + "enabled": true + } + ], + "_postman_variable_scope": "environment", + "_postman_exported_at": "2023-12-20T10:47:32.128Z", + "_postman_exported_using": "Postman/10.21.4" +} \ No newline at end of file diff --git a/docs/testing_with_postman/README.md b/docs/testing_with_postman/README.md new file mode 100644 index 0000000000000000000000000000000000000000..6d36b3216bd894637462c7034f21179f3dcae7d4 --- /dev/null +++ b/docs/testing_with_postman/README.md @@ -0,0 +1,108 @@ +[**[Return To Main]**] + +# CAPIF in Postman +In this section we can use Postman to publish an API as a provider and use it as an invoker. + +## Requisites + +- We will need to have Node.js installed since we will use a small script to create the CSRs of the certificates. +- An instance of CAPIF (If it is not local, certain variables would have to be modified both in the Node.js script and in the Postman environment variables). + +## First steps + +1. Install the Node dependencies to run the script with: + +``` +npm i +``` + +2. Run the script.js with the following command: + +``` +node script.js +``` + +3. Import Postman collection and environment variables (CAPIF.postman_collection.json and CAPIF.postman_environment.json) + +## Not Local CAPIF + +If the CAPIF is not local, the host and port of both the CAPIF and the register would have to be specified in the variables, and the CAPIF_HOSTNAME in the script, necessary to obtain the server certificate. + +**Enviroments in Postman** +``` +CAPIF_HOSTNAME capifcore +CAPIF_PORT 8080 +REGISTER_HOSTNAME register +REGISTER_PORT 8084 +``` + +**Const in script.js** +``` +CAPIF_HOSTNAME capifcore +``` + +## CAPIF Flow +Once the first steps have been taken, we can now use Postman requests. These requests are numbered in the order that must be followed to obtain everything necessary from CAPIF. + +### Publication of an API + +- **01-register_user_provider** +- **02-getauth_provider** +- **03-onboard_provider** + +At this point we move on to using certificate authentication in CAPIF. In Postman it is necessary to add the certificates manually and using more than one certificate for the same host as we do in CAPIF complicates things. For this reason, we use the script to overwrite a certificate and a key when it is necessary to have a specific one. + +To configure go to **settings** in Postman and open the **certificates** section. + +- Here, activate the **CA certificates** option and add the **ca_cert.pem** file found in the **Responses** folder. +- Adds a client certificate specifying the CAPIF host being used and the files **client_cert.crt** and **client_key.key** in the **Responses** folder. + + +Once this is done, the node script will be in charge of changing the certificate that is necessary in each request. + +- **04-publish_api** + +Once the api is published, we can start it. 
In this case we provide a test API written in Python that can be started with the following command:
+
+```
+python3 hello_api.py
+```
+
+The published API's interface description points to localhost on port 8088, so the service must run locally. If you want to host it somewhere else, change the interface description in the body of the **04-publish_api** request accordingly.
+
+With this, the provider part is finished.
+
+### Calling the API
+
+- **05-register_user_invoker**
+- **06-getauth_invoker**
+- **07-onboard_invoker**
+
+At this point we switch to certificate authentication in CAPIF. **If you did not configure the certificates while acting as the provider, do it now**.
+
+- **08-discover**
+- **09-security_context**
+- **10-get_token**
+- **11-call_service**
+
+With this, the API call has been made and the flow is complete (see the example request after the notes below).
+
+### Other requests
+
+Other requests included in the collection are:
+
+- **offboard_provider** Offboards the provider, which also removes its published APIs.
+- **offboard_invoker** Offboards the invoker, which also removes that invoker's access to the APIs.
+- **remove_user_invoker** Deletes the user created for the invoker.
+- **remove_user_provider** Deletes the user created for the provider.
+
+## Notes
+
+- This process is designed to teach how the requests are made in Postman and the flow that should be followed to publish and use an API.
+- If an external CAPIF is used (the public CAPIF), the test data may already be in use or the API may already be registered.
+- The Node service must be running so that the certificates can be swapped for each request; otherwise the flow will not work.
+- We are working on adding more requests to the Postman collection.
+- This collection is a testing guide and is recommended for testing purposes only.
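+
+### Example request for 11-call_service
+
+For reference, the request that **11-call_service** sends can also be reproduced from a terminal. This is a minimal sketch that assumes the default interface description published in **04-publish_api** (localhost, port 8088, resource `/hello`) and a token already obtained with **10-get_token**; `InvokerUser` is only a placeholder name, adjust all values to your own environment:
+
+```
+curl -X POST "http://localhost:8088/hello" \
+  -H "Authorization: Bearer $NETAPP_SERVICE_TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{"name": "InvokerUser"}'
+```
+
+The `hello_api.py` service validates the token signature against the CAPIF server certificate's public key and replies with a welcome message for the given name.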
+ +[Return To Main]: ../../README.md#using-postman + diff --git a/docs/testing_with_postman/hello_api.py b/docs/testing_with_postman/hello_api.py new file mode 100644 index 0000000000000000000000000000000000000000..0b2a35989ecfaa144489d1b6012f61453c91bd68 --- /dev/null +++ b/docs/testing_with_postman/hello_api.py @@ -0,0 +1,38 @@ +from flask import Flask, jsonify, request +from flask_jwt_extended import jwt_required, JWTManager, get_jwt_identity, get_jwt +import ssl +from werkzeug import serving +import socket, ssl +import OpenSSL +from OpenSSL import crypto +import jwt +import pyone + +app = Flask(__name__) + +jwt_flask = JWTManager(app) + + +with open("Responses/cert_server.pem", "rb") as cert_file: + cert= cert_file.read() + +crtObj = crypto.load_certificate(crypto.FILETYPE_PEM, cert) +pubKeyObject = crtObj.get_pubkey() +pubKeyString = crypto.dump_publickey(crypto.FILETYPE_PEM,pubKeyObject) + +app.config['JWT_ALGORITHM'] = 'RS256' +app.config['JWT_PUBLIC_KEY'] = pubKeyString + + +@app.route("/hello", methods=["POST"]) +@jwt_required() +def hello(): + + request_data = request.get_json() + + user_name = request_data['name'] + + return jsonify(f"Hello: {user_name}, welcome to CAPIF.") + +if __name__ == '__main__': + serving.run_simple("0.0.0.0", 8088, app) diff --git a/docs/testing_with_postman/package.json b/docs/testing_with_postman/package.json new file mode 100644 index 0000000000000000000000000000000000000000..6d612a702d5a9fa9a112f1a2f47c44f75725fa00 --- /dev/null +++ b/docs/testing_with_postman/package.json @@ -0,0 +1,16 @@ +{ + "name": "node-server", + "version": "1.0.0", + "description": "", + "main": "index.js", + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1" + }, + "author": "", + "license": "ISC", + "dependencies": { + "body-parser": "^1.18.3", + "express": "^4.16.3", + "shelljs": "^0.8.2" + } + } \ No newline at end of file diff --git a/docs/testing_with_postman/script.js b/docs/testing_with_postman/script.js new file mode 100644 index 0000000000000000000000000000000000000000..980f81f33b4bda4f48f55dcd13d8f436dface9a5 --- /dev/null +++ b/docs/testing_with_postman/script.js @@ -0,0 +1,199 @@ +// Change this variable if another host is used for CAPIF +const CAPIF_HOSTNAME = 'capifcore'; + +const express = require('express'), + app = express(), + fs = require('fs'), + shell = require('shelljs'), + + + folderPath = './Responses/', + bodyParser = require('body-parser'), + path = require('path'); + +const { exec } = require('child_process'); + +// Create the folder path in case it doesn't exist +shell.mkdir('-p', folderPath); + + // Change the limits according to your response size +app.use(bodyParser.json({limit: '50mb', extended: true})); +app.use(bodyParser.urlencoded({ limit: '50mb', extended: true })); +var opensslCommand = '' + +if (CAPIF_HOSTNAME.includes(':')){ + opensslCommand = `openssl s_client -connect ${CAPIF_HOSTNAME} | openssl x509 -text > ./Responses/cert_server.pem`; +} +else{ + opensslCommand = `openssl s_client -connect ${CAPIF_HOSTNAME}:443 | openssl x509 -text > ./Responses/cert_server.pem`; +} + +exec(opensslCommand, (error, stdout, stderr) => { + if (error) { + console.error(`Error generating CSR: ${stderr}`); + } +}); + +fs.writeFileSync('./Responses/client_cert.crt', ''); +fs.writeFileSync('./Responses/client_key.key', ''); + +app.get('/', (req, res) => res.send('Hello, I write data to file. 
Send them requests!')); + +app.post('/generate_csr', (req, res) => { + + console.log(req.body); + const csrFilePath = 'Responses/'+req.body.apiProvFuncRole+'_csr.pem'; + const privateKeyFilePath = 'Responses/'+req.body.apiProvFuncRole+'_key.key'; + + const subjectInfo = { + country: 'ES', + state: 'Madrid', + locality: 'Madrid', + organization: 'Telefonica I+D', + organizationalUnit: 'IT Department', + emailAddress: 'admin@example.com', + }; + + const opensslCommand = `openssl req -newkey rsa:2048 -nodes -keyout ${privateKeyFilePath} -out ${csrFilePath} -subj "/C=${subjectInfo.country}/ST=${subjectInfo.state}/L=${subjectInfo.locality}/O=${subjectInfo.organization}/OU=${subjectInfo.organizationalUnit}/emailAddress=${subjectInfo.emailAddress}"`; + + exec(opensslCommand, (error, stdout, stderr) => { + if (error) { + console.error(`Error generating CSR: ${stderr}`); + } else { + console.log('CSR generated successfully:'); + fs.readFile(csrFilePath, 'utf8', (readError, csrContent) => { + if (readError) { + console.error(`Error reading CSR: ${readError}`); + res.status(500).send('Error reading CSR'); + } else { + console.log('CSR read successfully:'); + // Send the CSR content in the response + fs.readFile(privateKeyFilePath, 'utf8', (readError, keyContent) => { + if (readError) { + console.error(`Error reading KEY: ${readError}`); + res.status(500).send('Error reading KEY'); + } else { + console.log('KEY read successfully:'); + // Send the CSR content in the response + fs.unlink(csrFilePath, (err) => { + if (err) { + console.error(`Error deleting file: ${err.message}`); + } + }); + fs.unlink(privateKeyFilePath, (err) => { + if (err) { + console.error(`Error deleting file: ${err.message}`); + } + }); + res.send({csr: csrContent, key: keyContent}); + } + }); + } + }); + } + }); +}); + +app.post('/generate_csr_invoker', (req, res) => { + + console.log(req.body); + const csrFilePath = 'Responses/invoker_csr.pem'; + const privateKeyFilePath = 'Responses/invoker_key.key'; + + const subjectInfo = { + country: 'ES', + state: 'Madrid', + locality: 'Madrid', + organization: 'Telefonica I+D', + organizationalUnit: 'IT Department', + emailAddress: 'admin@example.com', + }; + + const opensslCommand = `openssl req -newkey rsa:2048 -nodes -keyout ${privateKeyFilePath} -out ${csrFilePath} -subj "/C=${subjectInfo.country}/ST=${subjectInfo.state}/L=${subjectInfo.locality}/O=${subjectInfo.organization}/OU=${subjectInfo.organizationalUnit}/emailAddress=${subjectInfo.emailAddress}"`; + + exec(opensslCommand, (error, stdout, stderr) => { + if (error) { + console.error(`Error generating CSR: ${stderr}`); + } else { + console.log('CSR generated successfully:'); + fs.readFile(csrFilePath, 'utf8', (readError, csrContent) => { + if (readError) { + console.error(`Error reading CSR: ${readError}`); + res.status(500).send('Error reading CSR'); + } else { + console.log('CSR read successfuly:'); + // Send the CSR content in the response + fs.readFile(privateKeyFilePath, 'utf8', (readError, keyContent) => { + if (readError) { + console.error(`Error reading KEY: ${readError}`); + res.status(500).send('Error reading KEY'); + } else { + console.log('KEY read successfully:'); + // Send the CSR content in the response + fs.unlink(csrFilePath, (err) => { + if (err) { + console.error(`Error deleting file: ${err.message}`); + } + }); + fs.unlink(privateKeyFilePath, (err) => { + if (err) { + console.error(`Error deleting file: ${err.message}`); + } + }); + res.send({csr: csrContent, key: keyContent}); + } + }); + } + }); + } + }); +}); 
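+
+// The two endpoints below are called from the scripts in the Postman collection:
+// /write_cert overwrites Responses/client_cert.crt and Responses/client_key.key with the
+// certificate and key received in the request body, so the client certificate configured in
+// Postman's settings always belongs to whichever entity (provider function or invoker) issues
+// the next CAPIF request.
+// /write_ca stores the CA root certificate received in the request body as Responses/ca_cert.pem,
+// the file configured as the CA certificate in Postman's settings.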
+ + +app.post('/write_cert', (req, res) => { + let extension = 'crt', + fsMode = 'writeFile', + filename = "client_cert", + filePath = `${path.join(folderPath, filename)}.${extension}`, + options = {encoding: 'binary'}; + fs[fsMode](filePath, req.body.cert, options, (err) => { + if (err) { + console.log(err); + res.send('Error'); + } + }); + extension = 'key'; + filename = "client_key"; + filePath = `${path.join(folderPath, filename)}.${extension}`; + fs[fsMode](filePath, req.body.key, options, (err) => { + if (err) { + console.log(err); + res.send('Error'); + } + else { + res.send('Success'); + } + }); +}); + +app.post('/write_ca', (req, res) => { + let extension = 'pem', + fsMode = 'writeFile', + filename = "ca_cert", + filePath = `${path.join(folderPath, filename)}.${extension}`, + options = {encoding: 'binary'}; + fs[fsMode](filePath, req.body.ca_root, options, (err) => { + if (err) { + console.log(err); + res.send('Error'); + } + else { + res.send('Success'); + } + }); +}); + +app.listen(3000, () => { + console.log('ResponsesToFile App is listening now! Send them requests my way!'); + console.log(`Data is being stored at location: ${path.join(process.cwd(), folderPath)}`); +}); \ No newline at end of file diff --git a/docs/testing_with_robot/README.md b/docs/testing_with_robot/README.md new file mode 100644 index 0000000000000000000000000000000000000000..71504c1fc5ab76377c54bf868516f33a5617f731 --- /dev/null +++ b/docs/testing_with_robot/README.md @@ -0,0 +1,74 @@ +[**[Return To Main]**] +# Testing With Robot Framework + +- [Testing With Robot Framework](#testing-with-robot-framework) + - [Steps to Test](#steps-to-test) + - [Script Test Execution](#script-test-execution) + - [Manual Build And Test Execution](#manual-build-and-test-execution) + - [Test result review](#test-result-review) + +## Steps to Test + +To run any test locally you will need *docker* and *docker-compose* installed in order run services and execute test plan. Steps will be: +* **Run All Services**: See section [Run All CAPIF Services](../../README.md#run-all-capif-services-locally-with-docker-images) +* **Run desired tests**: At this point we have 2 options: + * **Using helper script**: [Script Test Execution](#script-test-execution) + * **Build robot docker image and execute manually robot docker**: [Manual Build And Test Execution](#manual-build-and-test-execution) + + +## Script Test Execution +This script will build robot docker image if it's need and execute tests selected by "include" option. Just go to service folder, execute and follow steps. +``` +./runCapifTests.sh --include +``` +Results will be stored at /results + +Please check parameters (include) under *Test Execution* at [Manual Build And Test Execution](#manual-build-and-test-execution). + +## Manual Build And Test Execution + +* **Build Robot docker image**: +``` +cd tools/robot +docker build . -t 5gnow-robot-test:latest +``` + +* **Tests Execution**: + +Execute all tests locally: +``` +=path in local machine to repository cloned. +=path to a folder on local machine to store results of Robot Framework execution. +=Is the hostname set when run.sh is executed, by default it will be capifcore. +=This is the port to reach when robot framework want to reach CAPIF deployment using http, this should be set to port without TLS set on Nginx, 8080 by default. 
+ +To execute all tests run : +docker run -ti --rm --network="host" -v /tests:/opt/robot-tests/tests -v :/opt/robot-tests/results 5gnow-robot-test:latest --variable CAPIF_HOSTNAME:capifcore --variable CAPIF_HTTP_PORT:8080 --include all +``` + +Execute specific tests locally: +``` +To run more specific tests, for example, only one functionality: +=Select one from list: + "capif_api_discover_service", + "capif_api_invoker_management", + "capif_api_publish_service", + "capif_api_events", + "capif_security_api + +And Run: +docker run -ti --rm --network="host" -v /tests:/opt/robot-tests/tests -v :/opt/robot-tests/results 5gnow-robot-test:latest --variable CAPIF_HOSTNAME:capifcore --variable CAPIF_HTTP_PORT:8080 --include +``` + +## Test result review + +In order to Review results after tests, you can check general report at /report.html or if you need more detailed information /log.html, example: +* Report: +![Report](../images/robot_report_example.png) +* Detailed information: +![Log](../images/robot_log_example.png) + +**NOTE: If you need more detail at Robot Framework Logs you can set log level option just adding to command --loglevel DEBUG** + + +[Return To Main]: ../../README.md#robot-framework \ No newline at end of file diff --git a/helm/README-vault.md b/helm/README-vault.md new file mode 100644 index 0000000000000000000000000000000000000000..0fab95888fe99817695f93e6126f211d8412d365 --- /dev/null +++ b/helm/README-vault.md @@ -0,0 +1,52 @@ +# Install vault +``` +$ helm repo add hashicorp https://helm.releases.hashicorp.com +$ helm upgrade --install vault hashicorp/vault -n mon --set server.standalone.enabled=true --create-namespace + +# if you are using ingress controller, please use: +$ helm upgrade --install vault hashicorp/vault -n mon --set server.ingress.enabled=true --set server.ingress.hosts[0].host="vault.mon.int" --set server.ingress.ingressClassName=nginx --set server.standalone.enabled=true --create-namespace + +# verify pods are running +$ kubectl -n mon get pods + +``` +NOTA: If using ingressRoute. Please, create a file with: + +``` +--- + +apiVersion: traefik.containo.us/v1alpha1 +kind: IngressRoute +metadata: + name: vault-ingress-route + namespace: mon +spec: + entryPoints: [web] + routes: + - kind: Rule + match: Host(`vault.mon.int`) + services: + - kind: Service + name: vault-internal + port: 8200 + scheme: http + +``` +``` +# deploy ingress route + +$ kubectl apply -f ingress-route.yaml +``` +# Creating vault PKI and certificates + +## Considerations: + - If you change values by default in the `capi/values.yaml`. Please, consider have a look of some topics: + - You will need to create PKI and certificates, therefore. The `VAULT_TOKEN` provided must have sufficient permissions in Vault to create it + - Modify: + - `namespace` in `vault-job/vault-job.yaml`. The namespace should be changed in the entire file. By default is `mon` (same namespace when capif is deployed) + - `export VAULT_ADDR` using the service deployed to vault. By default is `http://vault-internal:8200` + - `export VAULT_TOKEN` using the token created to vault. By default is `dev-only-token` + - `DOMAIN1` - variable used for generate certificate (CSR) to capif `(ex: DOMAIN1=capif.mobile.cloud)`. 
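+
+For reference, with the default values mentioned above the job would end up using something like the following (illustrative only; the real values live in `vault-job/vault-job.yaml` and must match your deployment):
+
+```
+export VAULT_ADDR=http://vault-internal:8200
+export VAULT_TOKEN=dev-only-token
+DOMAIN1=capif.mobile.cloud
+```
+
+Once the manifest reflects your environment, apply it: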
+```
+$ kubectl apply -f vault-job/
+```
\ No newline at end of file
diff --git a/helm/README.md b/helm/README.md new file mode 100644 index 0000000000000000000000000000000000000000..3fd0ff3192ae80d745c7e475ed47ca38a6ddd402 --- /dev/null +++ b/helm/README.md @@ -0,0 +1,47 @@
+# Install CAPIF in Kubernetes using HELM
+
+## Dependencies
+- Helm
+- `Ingress` already present in the cluster (if configured in capif - `values.yaml`)
+  - ```
+    # OPTIONAL - if there is no Ingress in the cluster, use this command to install it
+    $ helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --set rbac.create=true --set controller.service.type=NodePort
+
+    # OPTIONAL - if you need to specify the nodePort in the cluster, use
+    $ helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --set rbac.create=true --set controller.service.type=NodePort --set controller.service.nodePorts.http=32080 --set controller.service.nodePorts.https=32443 --namespace ingress-nginx --create-namespace --set controller.extraArgs."enable-ssl-passthrough=true" --kubeconfig ../oneke-new.kubeconfig
+
+    # Check that ssl-passthrough is enabled in the nginx controller.
+    $ kubectl -n ingress-nginx get deploy -o yaml | grep passthrough
+    ```
+- `PersistentVolumeClaim` already present in the cluster (if configured in capif)
+
+## Considerations before installing/deploying
+- **Prometheus**:
+  - You can install Prometheus, but you will need permissions to deploy it in the cluster: the chart creates a ClusterRole with access to all resources in the cluster.
+  - If you do not have that permission, or a Prometheus instance is already provided in the cluster, set the field `monitoring.prometheus.enable: ""` in `capif/values.yaml`.
+  - Grafana needs the Prometheus endpoint; remember to set the corresponding Grafana field in `capif/values.yaml`.
+- **Vault**:
+  - You will need a Vault instance already deployed. If the cluster does not provide one, you can install it by following these [steps](https://github.com/Telefonica/CAPIF_Future_Network_Lab/blob/main/helm/README-vault.md)
+  - Once Vault is available in the cluster, you need to create the PKI and certificates. Follow the vault-job [step](https://github.com/Telefonica/CAPIF_Future_Network_Lab/blob/main/helm/README-vault.md#creating-vault-pki-and-certificates) to create them.
+  - Set `parametersVault.env.VaultHostname`: the endpoint to Vault. This endpoint can be a Kubernetes service or ingress.
+  - Set `parametersVault.env.VaultPort`: the port the Vault instance is listening on.
+  - Set `parametersVault.env.vaultAccessToken`: the token CAPIF uses to create certificates in Vault. If the Vault instance is your own, use the token created in the [Vault readme](https://github.com/Telefonica/CAPIF_Future_Network_Lab/blob/main/helm/README-vault.md#creating-vault-pki-and-certificates); otherwise, the cluster admin will provide you with a token. This token needs sufficient permissions to create the PKI and certificates.
+- **CAPIF** + - Please, have a look of [`values.yaml`](https://github.com/Telefonica/CAPIF_Future_Network_Lab/blob/main/helm/capif/values.yaml) file and setup according to the conditions + ``` + # download dependencies + $ helm dependency build capif/ + +# check ingress_ip.oneke +kubectl get svc -A | grep nginx + +# install capif +$ helm upgrade --install -n mon monitoring-capif capif/ --set nginx.nginx.env.capifHostname=mon-capif.monitoring.int --set ingress_ip.oneke="10.17.173.127" --atomic --create-namespace + ``` + +NOTA: The deployment can take until 8 minutes to be ready. Please, if it fails, re-install CAPIF + +## Troubleshooting +- [`Mongo stuck`](https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/20.0.x?topic=troubleshooting-mongodb-pod-fails-start-container-exit-code-14-100) \ No newline at end of file diff --git a/helm/capif/.helmignore b/helm/capif/.helmignore new file mode 100644 index 0000000000000000000000000000000000000000..0e8a0eb36f4ca2c939201c0d54b5d82a1ea34778 --- /dev/null +++ b/helm/capif/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/helm/capif/Chart.yaml b/helm/capif/Chart.yaml new file mode 100644 index 0000000000000000000000000000000000000000..625f95844b7f45365ff19a5df4e2da0b8492cd3f --- /dev/null +++ b/helm/capif/Chart.yaml @@ -0,0 +1,26 @@ +apiVersion: v2 +name: capif +description: A Helm chart to CAPIF in Kubernetes +# A chart can be either an 'application' or a 'library' chart. +# +# Application charts are a collection of templates that can be packaged into versioned archives +# to be deployed. +# +# Library charts provide useful utilities or functions for the chart developer. They're included as +# a dependency of application charts to inject those utilities and functions into the rendering +# pipeline. Library charts do not define any templates and therefore cannot be deployed. +type: application +# This is the chart version. This version number should be incremented each time you make changes +# to the chart and its templates, including the app version. +# Versions are expected to follow Semantic Versioning (https://semver.org/) +version: v3.1.4 +# This is the version number of the application being deployed. This version number should be +# incremented each time you make changes to the application. Versions are not expected to +# follow Semantic Versioning. They should reflect the version the application is using. +# It is recommended to use it with quotes. 
+appVersion: "v3.1.4" +dependencies: + - name: "tempo" + condition: tempo.enabled + repository: "https://grafana.github.io/helm-charts" + version: "^1.3.1" diff --git a/helm/capif/README.md b/helm/capif/README.md new file mode 100644 index 0000000000000000000000000000000000000000..a78b060c3aff3b0fbc789d06e70b886c0f230208 --- /dev/null +++ b/helm/capif/README.md @@ -0,0 +1,304 @@ +# Helm of CAPIF + +![Version: v1.0.0](https://img.shields.io/badge/Version-v1.0.0-informational?style=for-the-badge) +![Type: application](https://img.shields.io/badge/Type-application-informational?style=for-the-badge) +![AppVersion: v1.0.0](https://img.shields.io/badge/AppVersion-v1.0.0-informational?style=for-the-badge) + +## Description + +A Helm chart to CAPIF in Kubernetes + +## Usage + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| CapifClient.enable | string | `"true"` | If enable capif client. | +| CapifClient.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| CapifClient.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/client"` | The docker image repository to use | +| CapifClient.image.tag | string | `""` | The docker image tag to use @default Chart version | +| CapifClient.ports[0].name | string | `"8080"` | | +| CapifClient.ports[0].port | int | `8080` | | +| CapifClient.ports[0].targetPort | int | `8080` | | +| CapifClient.replicas | int | `1` | | +| CapifClient.resources.limits.cpu | string | `"100m"` | | +| CapifClient.resources.limits.memory | string | `"128Mi"` | | +| CapifClient.resources.requests.cpu | string | `"100m"` | | +| CapifClient.resources.requests.memory | string | `"128Mi"` | | +| CapifClient.type | string | `"ClusterIP"` | | +| accessControlPolicy.env | object | `{"monitoring":"true"}` | If env.monitoring: true. Setup monitoring.enable: true | +| accessControlPolicy.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| accessControlPolicy.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/access-control-policy"` | The docker image repository to use | +| accessControlPolicy.image.tag | string | `""` | The docker image tag to use @default Chart version | +| accessControlPolicy.ports[0].name | string | `"8080"` | | +| accessControlPolicy.ports[0].port | int | `8080` | | +| accessControlPolicy.ports[0].targetPort | int | `8080` | | +| accessControlPolicy.replicas | int | `1` | | +| accessControlPolicy.resources.limits.cpu | string | `"100m"` | | +| accessControlPolicy.resources.limits.memory | string | `"128Mi"` | | +| accessControlPolicy.resources.requests.cpu | string | `"100m"` | | +| accessControlPolicy.resources.requests.memory | string | `"128Mi"` | | +| accessControlPolicy.type | string | `"ClusterIP"` | | +| apiInvocationLogs.apiInvocationLogs.env | object | `{"monitoring":"true"}` | If env.monitoring: true. 
Setup monitoring.enable: true | +| apiInvocationLogs.apiInvocationLogs.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| apiInvocationLogs.apiInvocationLogs.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/api-invocation-logs-api"` | The docker image repository to use | +| apiInvocationLogs.apiInvocationLogs.image.tag | string | `""` | The docker image tag to use @default Chart version | +| apiInvocationLogs.apiInvocationLogs.resources.limits.cpu | string | `"100m"` | | +| apiInvocationLogs.apiInvocationLogs.resources.limits.memory | string | `"128Mi"` | | +| apiInvocationLogs.apiInvocationLogs.resources.requests.cpu | string | `"100m"` | | +| apiInvocationLogs.apiInvocationLogs.resources.requests.memory | string | `"128Mi"` | | +| apiInvocationLogs.ports[0].name | string | `"8080"` | | +| apiInvocationLogs.ports[0].port | int | `8080` | | +| apiInvocationLogs.ports[0].targetPort | int | `8080` | | +| apiInvocationLogs.replicas | int | `1` | | +| apiInvocationLogs.type | string | `"ClusterIP"` | | +| apiInvokerManagement.apiInvokerManagement.env | object | `{"monitoring":"true"}` | If env.monitoring: true. Setup monitoring.enable: true | +| apiInvokerManagement.apiInvokerManagement.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| apiInvokerManagement.apiInvokerManagement.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/api-invoker-management-api"` | The docker image repository to use | +| apiInvokerManagement.apiInvokerManagement.image.tag | string | `""` | The docker image tag to use @default Chart version | +| apiInvokerManagement.apiInvokerManagement.resources.limits.cpu | string | `"100m"` | | +| apiInvokerManagement.apiInvokerManagement.resources.limits.memory | string | `"128Mi"` | | +| apiInvokerManagement.apiInvokerManagement.resources.requests.cpu | string | `"100m"` | | +| apiInvokerManagement.apiInvokerManagement.resources.requests.memory | string | `"128Mi"` | | +| apiInvokerManagement.ports[0].name | string | `"8080"` | | +| apiInvokerManagement.ports[0].port | int | `8080` | | +| apiInvokerManagement.ports[0].targetPort | int | `8080` | | +| apiInvokerManagement.replicas | int | `1` | | +| apiInvokerManagement.type | string | `"ClusterIP"` | | +| apiProviderManagement.apiProviderManagement.env | object | `{"monitoring":"true"}` | If env.monitoring: true. 
Setup monitoring.enable: true | +| apiProviderManagement.apiProviderManagement.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| apiProviderManagement.apiProviderManagement.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/api-provider-management-api"` | The docker image repository to use | +| apiProviderManagement.apiProviderManagement.image.tag | string | `""` | The docker image tag to use @default Chart version | +| apiProviderManagement.apiProviderManagement.resources.limits.cpu | string | `"100m"` | | +| apiProviderManagement.apiProviderManagement.resources.limits.memory | string | `"128Mi"` | | +| apiProviderManagement.apiProviderManagement.resources.requests.cpu | string | `"100m"` | | +| apiProviderManagement.apiProviderManagement.resources.requests.memory | string | `"128Mi"` | | +| apiProviderManagement.ports[0].name | string | `"8080"` | | +| apiProviderManagement.ports[0].port | int | `8080` | | +| apiProviderManagement.ports[0].targetPort | int | `8080` | | +| apiProviderManagement.replicas | int | `1` | | +| apiProviderManagement.type | string | `"ClusterIP"` | | +| capifEvents.capifEvents.env | object | `{"monitoring":"true"}` | If env.monitoring: true. Setup monitoring.enable: true | +| capifEvents.capifEvents.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| capifEvents.capifEvents.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/events-api"` | The docker image repository to use | +| capifEvents.capifEvents.image.tag | string | `""` | The docker image tag to use @default Chart version | +| capifEvents.capifEvents.resources.limits.cpu | string | `"100m"` | | +| capifEvents.capifEvents.resources.limits.memory | string | `"128Mi"` | | +| capifEvents.capifEvents.resources.requests.cpu | string | `"100m"` | | +| capifEvents.capifEvents.resources.requests.memory | string | `"128Mi"` | | +| capifEvents.ports[0].name | string | `"8080"` | | +| capifEvents.ports[0].port | int | `8080` | | +| capifEvents.ports[0].targetPort | int | `8080` | | +| capifEvents.replicas | int | `1` | | +| capifEvents.type | string | `"ClusterIP"` | | +| capifRoutingInfo.capifRoutingInfo.env | object | `{"monitoring":"true"}` | If env.monitoring: true. Setup monitoring.enable: true | +| capifRoutingInfo.capifRoutingInfo.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| capifRoutingInfo.capifRoutingInfo.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/routing-info-api"` | The docker image repository to use | +| capifRoutingInfo.capifRoutingInfo.image.tag | string | `""` | The docker image tag to use @default Chart version | +| capifRoutingInfo.capifRoutingInfo.resources.limits.cpu | string | `"100m"` | | +| capifRoutingInfo.capifRoutingInfo.resources.limits.memory | string | `"128Mi"` | | +| capifRoutingInfo.capifRoutingInfo.resources.requests.cpu | string | `"100m"` | | +| capifRoutingInfo.capifRoutingInfo.resources.requests.memory | string | `"128Mi"` | | +| capifRoutingInfo.ports[0].name | string | `"8080"` | | +| capifRoutingInfo.ports[0].port | int | `8080` | | +| capifRoutingInfo.ports[0].targetPort | int | `8080` | | +| capifRoutingInfo.replicas | int | `1` | | +| capifRoutingInfo.type | string | `"ClusterIP"` | | +| capifSecurity.capifSecurity.env | object | `{"monitoring":"true"}` | If env.monitoring: true. 
Setup monitoring.enable: true | +| capifSecurity.capifSecurity.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| capifSecurity.capifSecurity.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/security-api"` | The docker image repository to use | +| capifSecurity.capifSecurity.image.tag | string | `""` | The docker image tag to use @default Chart version | +| capifSecurity.capifSecurity.resources.limits.cpu | string | `"100m"` | | +| capifSecurity.capifSecurity.resources.limits.memory | string | `"128Mi"` | | +| capifSecurity.capifSecurity.resources.requests.cpu | string | `"100m"` | | +| capifSecurity.capifSecurity.resources.requests.memory | string | `"128Mi"` | | +| capifSecurity.ports[0].name | string | `"8080"` | | +| capifSecurity.ports[0].port | int | `8080` | | +| capifSecurity.ports[0].targetPort | int | `8080` | | +| capifSecurity.replicas | int | `1` | | +| capifSecurity.type | string | `"ClusterIP"` | | +| env | string | `""` | The Environment variable. Use openshift if you are deploying in Openshift cluster. anotherwise use the field empty | +| ingress.ip | string | `"10.17.173.127"` | | +| kubernetesClusterDomain | string | `"cluster.local"` | | +| logs.enable | string | `"true"` | If register enabled. enable: true, enable: "" = not enabled | +| logs.logs.env | object | `{"monitoring":"true"}` | If env.monitoring: true. Setup monitoring.enable: true | +| logs.logs.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| logs.logs.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/auditing-api"` | The docker image repository to use | +| logs.logs.image.tag | string | `""` | The docker image tag to use @default Chart version | +| logs.logs.resources.limits.cpu | string | `"100m"` | | +| logs.logs.resources.limits.memory | string | `"128Mi"` | | +| logs.logs.resources.requests.cpu | string | `"100m"` | | +| logs.logs.resources.requests.memory | string | `"128Mi"` | | +| logs.ports[0].name | string | `"8080"` | | +| logs.ports[0].port | int | `8080` | | +| logs.ports[0].targetPort | int | `8080` | | +| logs.replicas | int | `1` | | +| logs.type | string | `"ClusterIP"` | | +| mongo.mongo.env.mongoInitdbRootPassword | string | `"example"` | | +| mongo.mongo.env.mongoInitdbRootUsername | string | `"root"` | | +| mongo.mongo.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| mongo.mongo.image.repository | string | `"mongo"` | The docker image repository to use | +| mongo.mongo.image.tag | string | `"6.0.2"` | The docker image tag to use @default Chart version | +| mongo.mongo.resources | object | `{}` | | +| mongo.persistence | object | `{"enable":"true","storage":"8Gi"}` | If mongo.persistence enabled. 
enable: true, enable: "" = not enabled | +| mongo.ports[0].name | string | `"27017"` | | +| mongo.ports[0].port | int | `27017` | | +| mongo.ports[0].targetPort | int | `27017` | | +| mongo.replicas | int | `1` | | +| mongo.type | string | `"ClusterIP"` | | +| mongoExpress.mongoExpress.env.meConfigMongodbAdminpassword | string | `"example"` | | +| mongoExpress.mongoExpress.env.meConfigMongodbAdminusername | string | `"root"` | | +| mongoExpress.mongoExpress.env.meConfigMongodbUrl | string | `"mongodb://root:example@mongo:27017/"` | | +| mongoExpress.mongoExpress.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| mongoExpress.mongoExpress.image.repository | string | `"mongo-express"` | The docker image repository to use | +| mongoExpress.mongoExpress.image.tag | string | `"1.0.0-alpha.4"` | The docker image tag to use @default Chart version | +| mongoExpress.mongoExpress.resources.limits.cpu | string | `"100m"` | | +| mongoExpress.mongoExpress.resources.limits.memory | string | `"128Mi"` | | +| mongoExpress.mongoExpress.resources.requests.cpu | string | `"100m"` | | +| mongoExpress.mongoExpress.resources.requests.memory | string | `"128Mi"` | | +| mongoExpress.ports[0].name | string | `"8082"` | | +| mongoExpress.ports[0].port | int | `8082` | | +| mongoExpress.ports[0].targetPort | int | `8081` | | +| mongoExpress.replicas | int | `1` | | +| mongoExpress.type | string | `"ClusterIP"` | | +| mongoRegister.mongo.env.mongoInitdbRootPassword | string | `"example"` | | +| mongoRegister.mongo.env.mongoInitdbRootUsername | string | `"root"` | | +| mongoRegister.mongo.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| mongoRegister.mongo.image.repository | string | `"mongo"` | The docker image repository to use | +| mongoRegister.mongo.image.tag | string | `"6.0.2"` | The docker image tag to use @default Chart version | +| mongoRegister.mongo.resources | object | `{}` | | +| mongoRegister.ports[0].name | string | `"27017"` | | +| mongoRegister.ports[0].port | int | `27017` | | +| mongoRegister.ports[0].targetPort | int | `27017` | | +| mongoRegister.replicas | int | `1` | | +| mongoRegister.type | string | `"ClusterIP"` | | +| monitoring.enable | string | `"true"` | | +| monitoring.enable | string | `"true"` | If monitoring enabled. 
enable: true, enable: "" = not enabled | +| monitoring.fluentBit.env.lokiUrl | string | `"http://loki:3100/loki/api/v1/push"` | | +| monitoring.fluentBit.image.repository | string | `"grafana/fluent-bit-plugin-loki"` | The docker image repository to use | +| monitoring.fluentBit.image.tag | string | `"latest"` | The docker image tag to use @default Chart version | +| monitoring.fluentBit.resources | object | `{}` | | +| monitoring.grafana.env.gfAuthAnonymousEnable | bool | `true` | | +| monitoring.grafana.env.gfAuthAnonymousOrgRole | string | `"Admin"` | | +| monitoring.grafana.env.gfSecurityAdminPassword | string | `"secure_pass"` | | +| monitoring.grafana.env.gfSecurityAllowEmbedding | bool | `true` | | +| monitoring.grafana.env.lokiUrl | string | `"http://loki:3100"` | | +| monitoring.grafana.env.prometheusUrl | string | `"http://prometheus.mon.svc.cluster.local:9090"` | | +| monitoring.grafana.env.tempoUrl | string | `"http://monitoring-capif-tempo:3100"` | | +| monitoring.grafana.image.repository | string | `"grafana/grafana"` | The docker image repository to use | +| monitoring.grafana.image.tag | string | `"latest"` | The docker image tag to use @default Chart version | +| monitoring.grafana.ingress | object | `{"annotations":null,"enabled":true,"hosts":[{"host":"grafana.5gnacar.int","paths":[{"path":"/","pathType":"Prefix"}]}],"ingressClassName":"nginx","tls":[]}` | If ingress enabled=true, use monitoring.grafana.ingressRoute.enable="" | +| monitoring.grafana.ingressRoute | object | `{"enable":"","host":"grafana.5gnacar.int"}` | If ingressRoute enable=true, use monitoring.grafana.ingress.enabled="" | +| monitoring.grafana.persistence | object | `{"enable":"true","storage":"100Mi"}` | If grafana.persistence enabled. enable: true, enable: "" = not enabled | +| monitoring.grafana.resources | object | `{}` | | +| monitoring.grafana.service.port | int | `3000` | | +| monitoring.grafana.service.type | string | `"ClusterIP"` | | +| monitoring.loki.image.repository | string | `"grafana/loki"` | The docker image repository to use | +| monitoring.loki.image.tag | string | `"2.8.0"` | The docker image tag to use @default Chart version | +| monitoring.loki.persistence | object | `{"enable":"true","storage":"100Mi"}` | If grafana.persistence enabled. 
enable: true, enable: "" = not enabled | +| monitoring.loki.resources | object | `{}` | | +| monitoring.otel.configMap.tempoEndpoint | string | `"monitoring-capif-tempo:4317"` | | +| monitoring.otel.image.repository | string | `"otel/opentelemetry-collector"` | The docker image repository to use | +| monitoring.otel.image.tag | string | `"0.81.0"` | The docker image tag to use @default Chart version | +| monitoring.otel.resources | object | `{}` | | +| monitoring.prometheus.enable | string | `"true"` | It will deploy prometheus | +| monitoring.prometheus.image.repository | string | `"prom/prometheus"` | The docker image repository to use | +| monitoring.prometheus.image.tag | string | `"latest"` | The docker image tag to use @default Chart version | +| monitoring.prometheus.ingress.annotations | string | `nil` | | +| monitoring.prometheus.ingress.enabled | bool | `true` | | +| monitoring.prometheus.ingress.hosts[0].host | string | `"prometheus.5gnacar.int"` | | +| monitoring.prometheus.ingress.hosts[0].paths[0].path | string | `"/"` | | +| monitoring.prometheus.ingress.hosts[0].paths[0].pathType | string | `"Prefix"` | | +| monitoring.prometheus.ingress.ingressClassName | string | `"nginx"` | | +| monitoring.prometheus.ingress.tls | list | `[]` | | +| monitoring.prometheus.ingressRoute | object | `{"enable":"","host":"prometheus.5gnacar.int"}` | If ingressRoute enable=true, use monitoring.prometheus.ingress.enabled="" | +| monitoring.prometheus.persistence.enable | string | `"true"` | | +| monitoring.prometheus.persistence.storage | string | `"8Gi"` | | +| monitoring.prometheus.resources | object | `{}` | | +| monitoring.prometheus.service.port | int | `9090` | | +| monitoring.prometheus.service.type | string | `"ClusterIP"` | | +| monitoring.renderer.env.enableMetrics | string | `"true"` | | +| monitoring.renderer.image.repository | string | `"grafana/grafana-image-renderer"` | The docker image repository to use | +| monitoring.renderer.image.tag | string | `"latest"` | The docker image tag to use @default Chart version | +| monitoring.renderer.resources | object | `{}` | | +| nginx.annotations."nginx.ingress.kubernetes.io/backend-protocol" | string | `"HTTPS"` | | +| nginx.annotations."nginx.ingress.kubernetes.io/ssl-passthrough" | string | `"true"` | | +| nginx.annotations."nginx.ingress.kubernetes.io/ssl-redirect" | string | `"true"` | | +| nginx.ingressClassName | string | `"nginx"` | | +| nginx.ingressType | string | `"Ingress"` | if nginx.ingressType: "Ingress". set up monitoring.prometheus.ingress: true and monitoring.grafana.ingress: true Use IngressRoute if you want to use Gateway API. 
e.g. Traefik | +| nginx.nginx.env.capifHostname | string | `"my-capif.apps.ocp-epg.hi.inet"` | Ingress host for CAPIF | +| nginx.nginx.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| nginx.nginx.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/nginx"` | The docker image repository to use | +| nginx.nginx.image.tag | string | `""` | The docker image tag to use @default Chart version | +| nginx.nginx.resources.limits.cpu | string | `"100m"` | | +| nginx.nginx.resources.limits.memory | string | `"128Mi"` | | +| nginx.nginx.resources.requests.cpu | string | `"100m"` | | +| nginx.nginx.resources.requests.memory | string | `"128Mi"` | | +| nginx.ports[0].name | string | `"8080"` | | +| nginx.ports[0].port | int | `8080` | | +| nginx.ports[0].targetPort | int | `8080` | | +| nginx.ports[1].name | string | `"443"` | | +| nginx.ports[1].port | int | `443` | | +| nginx.ports[1].targetPort | int | `443` | | +| nginx.replicas | int | `1` | | +| nginx.type | string | `"ClusterIP"` | | +| parametersVault.env.vaultAccessToken | string | `"dev-only-token"` | | +| parametersVault.env.vaultHostname | string | `"vault-internal.mon.svc.cluster.local"` | | +| parametersVault.env.vaultPort | int | `8200` | | +| publishedApis.ports[0].name | string | `"8080"` | | +| publishedApis.ports[0].port | int | `8080` | | +| publishedApis.ports[0].targetPort | int | `8080` | | +| publishedApis.publishedApis.env | object | `{"monitoring":"true"}` | If env.monitoring: true. Setup monitoring.enable: true | +| publishedApis.publishedApis.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| publishedApis.publishedApis.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/publish-service-api"` | The docker image repository to use | +| publishedApis.publishedApis.image.tag | string | `""` | The docker image tag to use @default Chart version | +| publishedApis.publishedApis.resources.limits.cpu | string | `"100m"` | | +| publishedApis.publishedApis.resources.limits.memory | string | `"128Mi"` | | +| publishedApis.publishedApis.resources.requests.cpu | string | `"100m"` | | +| publishedApis.publishedApis.resources.requests.memory | string | `"128Mi"` | | +| publishedApis.replicas | int | `1` | | +| publishedApis.type | string | `"ClusterIP"` | | +| redis.ports[0].name | string | `"6379"` | | +| redis.ports[0].port | int | `6379` | | +| redis.ports[0].targetPort | int | `6379` | | +| redis.redis.env.redisReplicationMode | string | `"master"` | | +| redis.redis.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| redis.redis.image.repository | string | `"redis"` | The docker image repository to use | +| redis.redis.image.tag | string | `"alpine"` | The docker image tag to use @default Chart version | +| redis.redis.resources.limits.cpu | string | `"100m"` | | +| redis.redis.resources.limits.memory | string | `"128Mi"` | | +| redis.redis.resources.requests.cpu | string | `"100m"` | | +| redis.redis.resources.requests.memory | string | `"128Mi"` | | +| redis.replicas | int | `1` | | +| redis.type | string | `"ClusterIP"` | | +| register.enable | string | `"true"` | If register enabled. 
enable: true, enable: "" = not enabled | +| register.ports[0].name | string | `"8080"` | | +| register.ports[0].port | int | `8084` | | +| register.ports[0].targetPort | int | `8080` | | +| register.register.env.mongoHost | string | `"mongo-register"` | | +| register.register.env.mongoPort | int | `27017` | | +| register.register.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| register.register.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/jwtauth"` | The docker image repository to use | +| register.register.image.tag | string | `""` | The docker image tag to use @default Chart version | +| register.register.resources.limits.cpu | string | `"100m"` | | +| register.register.resources.limits.memory | string | `"128Mi"` | | +| register.register.resources.requests.cpu | string | `"100m"` | | +| register.register.resources.requests.memory | string | `"128Mi"` | | +| register.replicas | int | `1` | | +| register.type | string | `"ClusterIP"` | | +| serviceApis.ports[0].name | string | `"8080"` | | +| serviceApis.ports[0].port | int | `8080` | | +| serviceApis.ports[0].targetPort | int | `8080` | | +| serviceApis.replicas | int | `1` | | +| serviceApis.serviceApis.env | object | `{"monitoring":"true"}` | If env.monitoring: true. Setup monitoring.enable: true | +| serviceApis.serviceApis.image.imagePullPolicy | string | `"Always"` | Image pull policy: Always, IfNotPresent | +| serviceApis.serviceApis.image.repository | string | `"public.ecr.aws/o2v4a8t6/opencapif/discover-service-api"` | The docker image repository to use | +| serviceApis.serviceApis.image.tag | string | `""` | The docker image tag to use @default Chart version | +| serviceApis.serviceApis.resources.limits.cpu | string | `"100m"` | | +| serviceApis.serviceApis.resources.limits.memory | string | `"128Mi"` | | +| serviceApis.serviceApis.resources.requests.cpu | string | `"100m"` | | +| serviceApis.serviceApis.resources.requests.memory | string | `"128Mi"` | | +| serviceApis.type | string | `"ClusterIP"` | | +| tempo | object | `{"enabled":true,"persistence":{"enabled":true,"size":"3Gi"},"tempo":{"metricsGenerator":{"enabled":true,"remoteWriteUrl":"http://prometheus.mon.svc.cluster.local:9090/api/v1/write"}}}` | If monitoring.enable: "true". 
Also enable tempo.enabled: true | + diff --git a/helm/capif/docker-monitoring.json b/helm/capif/docker-monitoring.json new file mode 100644 index 0000000000000000000000000000000000000000..8a3102a055ed6fad82439f8bc99397f419f277ef --- /dev/null +++ b/helm/capif/docker-monitoring.json @@ -0,0 +1,690 @@ +{ + "annotations": { + "list": [ + { + "builtIn": 1, + "datasource": { + "type": "grafana", + "uid": "-- Grafana --" + }, + "enable": true, + "hide": true, + "iconColor": "rgba(0, 211, 255, 1)", + "name": "Annotations & Alerts", + "type": "dashboard" + } + ] + }, + "description": "Docker monitoring with Prometheus and cAdvisor", + "editable": true, + "fiscalYearStartMonth": 0, + "gnetId": 193, + "graphTooltip": 1, + "id": 1, + "links": [], + "liveNow": false, + "panels": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + ], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 8, + "x": 0, + "y": 0 + }, + "id": 7, + "links": [], + "maxDataPoints": 100, + "options": { + "colorMode": "none", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "mean" + ], + "fields": "", + "values": false + }, + "textMode": "auto" + }, + "pluginVersion": "9.5.2", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "count(container_last_seen{image!=\"\"})", + "intervalFactor": 2, + "legendFormat": "", + "metric": "container_last_seen", + "refId": "A", + "step": 240 + } + ], + "title": "Running containers", + "transparent": true, + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + ], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "mbytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 8, + "x": 8, + "y": 0 + }, + "id": 5, + "links": [], + "maxDataPoints": 100, + "options": { + "colorMode": "none", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "textMode": "auto" + }, + "pluginVersion": "9.5.2", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum(container_memory_usage_bytes{image!=\"\"})/1024/1024", + "intervalFactor": 2, + "legendFormat": "", + "metric": "container_memory_usage_bytes", + "refId": "A", + "step": 240 + } + ], + "title": "Total Memory Usage", + "transparent": true, + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + ], + 
"thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "percent" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 8, + "x": 16, + "y": 0 + }, + "id": 6, + "links": [], + "maxDataPoints": 100, + "options": { + "colorMode": "none", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "textMode": "auto" + }, + "pluginVersion": "9.5.2", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum(rate(container_cpu_user_seconds_total{image!=\"\"}[5m]) * 100)", + "intervalFactor": 2, + "legendFormat": "", + "metric": "container_memory_usage_bytes", + "refId": "A", + "step": 240 + } + ], + "title": "Total CPU Usage", + "transparent": true, + "type": "stat" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 2, + "editable": true, + "error": false, + "fill": 1, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 24, + "x": 0, + "y": 3 + }, + "hiddenSeries": false, + "id": 2, + "isNew": true, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "9.5.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "rate(container_cpu_user_seconds_total{image!=\"\"}[5m]) * 100", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "cpu", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "CPU Usage", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "percent", + "logBase": 1, + "show": true + }, + { + "format": "short", + "logBase": 1, + "show": true + } + ], + "yaxis": { + "align": false + } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 2, + "editable": true, + "error": false, + "fill": 1, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 24, + "x": 0, + "y": 10 + }, + "hiddenSeries": false, + "id": 1, + "isNew": true, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "9.5.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + "type": 
"prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "container_memory_usage_bytes{image!=\"\"}", + "hide": false, + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "container_memory_usage_bytes", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Memory Usage", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "label": "", + "logBase": 1, + "show": true + }, + { + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "editable": true, + "error": false, + "fill": 1, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 12, + "x": 0, + "y": 17 + }, + "hiddenSeries": false, + "id": 3, + "isNew": true, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "9.5.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "irate(container_network_receive_bytes_total{image!=\"\"}[5m])", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "metric": "container_network_receive_bytes_total", + "refId": "A", + "step": 20 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Network Rx", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "logBase": 1, + "show": true + }, + { + "format": "short", + "logBase": 1, + "show": true + } + ], + "yaxis": { + "align": false + } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "editable": true, + "error": false, + "fill": 1, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 12, + "x": 12, + "y": 17 + }, + "hiddenSeries": false, + "id": 4, + "isNew": true, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "9.5.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "irate(container_network_transmit_bytes_total{image!=\"\"}[5m])", + "intervalFactor": 2, + "legendFormat": "{{name}}", + "refId": "A", + "step": 20 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Network Tx", + "tooltip": { + 
"msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "logBase": 1, + "show": true + }, + { + "format": "short", + "logBase": 1, + "show": true + } + ], + "yaxis": { + "align": false + } + } + ], + "refresh": "10s", + "schemaVersion": 38, + "style": "dark", + "tags": [ + "docker" + ], + "templating": { + "list": [] + }, + "time": { + "from": "now-3h", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "Docker monitoring", + "uid": "f66dea48-ca2f-46fb-a6f0-50bf50502d74", + "version": 2, + "weekStart": "" +} \ No newline at end of file diff --git a/helm/capif/kubernetes-dashboard.json b/helm/capif/kubernetes-dashboard.json new file mode 100644 index 0000000000000000000000000000000000000000..ac97f80ebd0139f383c3f82920b9b9aec7b8c1e9 --- /dev/null +++ b/helm/capif/kubernetes-dashboard.json @@ -0,0 +1,2629 @@ +{ + "annotations": { + "list": [ + { + "$$hashKey": "object:103", + "builtIn": 1, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "enable": true, + "hide": true, + "iconColor": "rgba(0, 211, 255, 1)", + "name": "Annotations & Alerts", + "type": "dashboard" + } + ] + }, + "description": "Monitors Kubernetes cluster using Prometheus. Shows overall cluster CPU / Memory / Filesystem usage as well as individual pod, containers, systemd services statistics. Uses cAdvisor metrics only.", + "editable": true, + "fiscalYearStartMonth": 0, + "gnetId": 12740, + "graphTooltip": 0, + "id": 7, + "links": [], + "liveNow": false, + "panels": [ + { + "collapsed": false, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 0 + }, + "id": 33, + "panels": [], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "Network I/O pressure", + "type": "row" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 2, + "editable": true, + "error": false, + "fieldConfig": { + "defaults": { + "links": [] + }, + "overrides": [] + }, + "fill": 1, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 6, + "w": 24, + "x": 0, + "y": 1 + }, + "height": "200px", + "hiddenSeries": false, + "id": 32, + "legend": { + "alignAsTable": false, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": false, + "show": false, + "sideWidth": 200, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "10.0.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (rate (container_network_receive_bytes_total{kubernetes_io_hostname=~\"^$Node$\"}[1m]))", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "Received", + 
"metric": "network", + "refId": "A", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "- sum (rate (container_network_transmit_bytes_total{kubernetes_io_hostname=~\"^$Node$\"}[1m]))", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "Sent", + "metric": "network", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Network I/O pressure", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 0, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "logBase": 1, + "show": true + }, + { + "format": "Bps", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + }, + { + "collapsed": false, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 7 + }, + "id": 34, + "panels": [], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "Total usage", + "type": "row" + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + ], + "max": 100, + "min": 0, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "rgba(50, 172, 45, 0.97)", + "value": null + }, + { + "color": "rgba(237, 129, 40, 0.89)", + "value": 65 + }, + { + "color": "rgba(245, 54, 54, 0.9)", + "value": 90 + } + ] + }, + "unit": "percent" + }, + "overrides": [] + }, + "gridPos": { + "h": 5, + "w": 8, + "x": 0, + "y": 8 + }, + "id": 4, + "links": [], + "maxDataPoints": 100, + "options": { + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showThresholdLabels": false, + "showThresholdMarkers": true + }, + "pluginVersion": "10.0.2", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (container_memory_working_set_bytes{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) / sum (machine_memory_bytes{kubernetes_io_hostname=~\"^$Node$\"}) * 100", + "interval": "10s", + "intervalFactor": 1, + "refId": "A", + "step": 10 + } + ], + "title": "Cluster memory usage", + "type": "gauge" + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "decimals": 2, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + ], + "max": 100, + "min": 0, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "rgba(50, 172, 45, 0.97)", + "value": null + }, + { + "color": "rgba(237, 129, 40, 0.89)", + "value": 65 + }, + { + "color": "rgba(245, 54, 54, 0.9)", + "value": 90 + } + ] + }, + "unit": "percent" + }, + "overrides": [] + }, + "gridPos": { + "h": 5, + "w": 8, + "x": 8, + "y": 8 + }, + "id": 6, + "links": [], + "maxDataPoints": 100, + "options": { + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showThresholdLabels": false, + "showThresholdMarkers": true + }, + "pluginVersion": "10.0.2", + "targets": [ 
+ { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (rate (container_cpu_usage_seconds_total{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) / sum (machine_cpu_cores{kubernetes_io_hostname=~\"^$Node$\"}) * 100", + "interval": "10s", + "intervalFactor": 1, + "refId": "A", + "step": 10 + } + ], + "title": "Cluster CPU usage (1m avg)", + "type": "gauge" + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "decimals": 2, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + ], + "max": 100, + "min": 0, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "rgba(50, 172, 45, 0.97)", + "value": null + }, + { + "color": "rgba(237, 129, 40, 0.89)", + "value": 65 + }, + { + "color": "rgba(245, 54, 54, 0.9)", + "value": 90 + } + ] + }, + "unit": "percent" + }, + "overrides": [] + }, + "gridPos": { + "h": 5, + "w": 8, + "x": 16, + "y": 8 + }, + "id": 7, + "links": [], + "maxDataPoints": 100, + "options": { + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showThresholdLabels": false, + "showThresholdMarkers": true + }, + "pluginVersion": "10.0.2", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (container_fs_usage_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) / sum (container_fs_limit_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) * 100", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "", + "metric": "", + "refId": "A", + "step": 10 + } + ], + "title": "Cluster filesystem usage", + "type": "gauge" + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "decimals": 2, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + ], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 0, + "y": 13 + }, + "id": 9, + "links": [], + "maxDataPoints": 100, + "options": { + "colorMode": "none", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "textMode": "auto" + }, + "pluginVersion": "10.0.2", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (container_memory_working_set_bytes{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"})", + "interval": "10s", + "intervalFactor": 1, + "refId": "A", + "step": 10 + } + ], + "title": "Used", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "decimals": 2, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + ], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": 
"green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 4, + "y": 13 + }, + "id": 10, + "links": [], + "maxDataPoints": 100, + "options": { + "colorMode": "none", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "textMode": "auto" + }, + "pluginVersion": "10.0.2", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (machine_memory_bytes{kubernetes_io_hostname=~\"^$Node$\"})", + "interval": "10s", + "intervalFactor": 1, + "refId": "A", + "step": 10 + } + ], + "title": "Total", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "decimals": 2, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + ], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 8, + "y": 13 + }, + "id": 11, + "links": [], + "maxDataPoints": 100, + "options": { + "colorMode": "none", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "textMode": "auto" + }, + "pluginVersion": "10.0.2", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (rate (container_cpu_usage_seconds_total{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m]))", + "interval": "10s", + "intervalFactor": 1, + "refId": "A", + "step": 10 + } + ], + "title": "Used", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "decimals": 2, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + ], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 12, + "y": 13 + }, + "id": 12, + "links": [], + "maxDataPoints": 100, + "options": { + "colorMode": "none", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "textMode": "auto" + }, + "pluginVersion": "10.0.2", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (machine_cpu_cores{kubernetes_io_hostname=~\"^$Node$\"})", + "interval": "10s", + "intervalFactor": 1, + "refId": "A", + "step": 10 + } + ], + "title": "Total", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "decimals": 2, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + 
], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 16, + "y": 13 + }, + "id": 13, + "links": [], + "maxDataPoints": 100, + "options": { + "colorMode": "none", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "textMode": "auto" + }, + "pluginVersion": "10.0.2", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (container_fs_usage_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"})", + "interval": "10s", + "intervalFactor": 1, + "refId": "A", + "step": 10 + } + ], + "title": "Used", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "decimals": 2, + "mappings": [ + { + "options": { + "match": "null", + "result": { + "text": "N/A" + } + }, + "type": "special" + } + ], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 20, + "y": 13 + }, + "id": 14, + "links": [], + "maxDataPoints": 100, + "options": { + "colorMode": "none", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "textMode": "auto" + }, + "pluginVersion": "10.0.2", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (container_fs_limit_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"})", + "interval": "10s", + "intervalFactor": 1, + "refId": "A", + "step": 10 + } + ], + "title": "Total", + "type": "stat" + }, + { + "collapsed": false, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 16 + }, + "id": 35, + "panels": [], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "Pods CPU usage", + "type": "row" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 3, + "editable": true, + "error": false, + "fieldConfig": { + "defaults": { + "links": [] + }, + "overrides": [] + }, + "fill": 0, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 24, + "x": 0, + "y": 17 + }, + "height": "", + "hiddenSeries": false, + "id": 17, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "10.0.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + 
"targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "editorMode": "code", + "expr": "sum (rate (container_cpu_usage_seconds_total{image!=\"\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (pod)", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "{{ pod }}", + "metric": "container_cpu", + "range": true, + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Pods CPU usage (1m avg)", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "$$hashKey": "object:112", + "format": "none", + "label": "cores", + "logBase": 1, + "show": true + }, + { + "$$hashKey": "object:113", + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + }, + { + "collapsed": false, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 24 + }, + "id": 39, + "panels": [], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "Pods memory usage", + "type": "row" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 2, + "editable": true, + "error": false, + "fieldConfig": { + "defaults": { + "links": [] + }, + "overrides": [] + }, + "fill": 0, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 24, + "x": 0, + "y": 25 + }, + "hiddenSeries": false, + "id": 25, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "10.0.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "editorMode": "code", + "expr": "sum (container_memory_working_set_bytes{image!=\"\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}) by (pod)", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "{{ pod }}", + "metric": "container_memory_usage:sort_desc", + "range": true, + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Pods memory usage", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "$$hashKey": "object:181", + "format": "bytes", + "logBase": 1, + "show": true + }, + { + "$$hashKey": "object:182", + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + }, + { + "collapsed": false, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 32 + }, + "id": 43, + "panels": [], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" 
+ }, + "refId": "A" + } + ], + "title": "Pods network I/O", + "type": "row" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 2, + "editable": true, + "error": false, + "fieldConfig": { + "defaults": { + "links": [] + }, + "overrides": [] + }, + "fill": 1, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 24, + "x": 0, + "y": 33 + }, + "hiddenSeries": false, + "id": 16, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "10.0.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "editorMode": "code", + "expr": "sum (rate (container_network_receive_bytes_total{image!=\"\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (pod)", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "-> {{ pod }}", + "metric": "network", + "range": true, + "refId": "A", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "- sum (rate (container_network_transmit_bytes_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (pod)", + "hide": true, + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "<- {{ pod }}", + "metric": "network", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Pods network I/O (1m avg)", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "logBase": 1, + "show": true + }, + { + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + }, + { + "collapsed": true, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 40 + }, + "id": 37, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 3, + "editable": true, + "error": false, + "fieldConfig": { + "defaults": { + "links": [] + }, + "overrides": [] + }, + "fill": 0, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 24, + "x": 0, + "y": 41 + }, + "height": "", + "hiddenSeries": false, + "id": 24, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "hideEmpty": false, + "hideZero": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "10.0.2", + "pointradius": 5, + "points": 
false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (rate (container_cpu_usage_seconds_total{image!=\"\",name=~\"^k8s_.*\",container!=\"POD\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (container, pod)", + "hide": false, + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "pod: {{ pod }}| {{ container }}", + "metric": "container_cpu", + "refId": "A", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (rate (container_cpu_usage_seconds_total{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (kubernetes_io_hostname, name, image)", + "hide": false, + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", + "metric": "container_cpu", + "refId": "B", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (rate (container_cpu_usage_seconds_total{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (kubernetes_io_hostname, rkt_container_name)", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", + "metric": "container_cpu", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Containers CPU usage (1m avg)", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "$$hashKey": "object:337", + "format": "none", + "label": "cores", + "logBase": 1, + "show": true + }, + { + "$$hashKey": "object:338", + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + } + ], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "Containers CPU usage", + "type": "row" + }, + { + "collapsed": true, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 41 + }, + "id": 41, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 2, + "editable": true, + "error": false, + "fieldConfig": { + "defaults": { + "links": [] + }, + "overrides": [] + }, + "fill": 0, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 24, + "x": 0, + "y": 42 + }, + "hiddenSeries": false, + "id": 27, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "10.0.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + 
"type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (container_memory_working_set_bytes{image!=\"\",name=~\"^k8s_.*\",container!=\"POD\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}) by (container, pod)", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "pod: {{ pod }} | {{ container }}", + "metric": "container_memory_usage:sort_desc", + "refId": "A", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (container_memory_working_set_bytes{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}) by (kubernetes_io_hostname, name, image)", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", + "metric": "container_memory_usage:sort_desc", + "refId": "B", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (container_memory_working_set_bytes{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}) by (kubernetes_io_hostname, rkt_container_name)", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", + "metric": "container_memory_usage:sort_desc", + "refId": "C", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Containers memory usage", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "$$hashKey": "object:406", + "format": "bytes", + "logBase": 1, + "show": true + }, + { + "$$hashKey": "object:407", + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + } + ], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "Containers memory usage", + "type": "row" + }, + { + "collapsed": true, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 42 + }, + "id": 44, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 2, + "editable": true, + "error": false, + "fieldConfig": { + "defaults": { + "links": [] + }, + "overrides": [] + }, + "fill": 1, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 24, + "x": 0, + "y": 43 + }, + "hiddenSeries": false, + "id": 30, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "10.0.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (rate 
(container_network_receive_bytes_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (container, pod)", + "hide": false, + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "-> pod: {{ pod }} | {{ container }}", + "metric": "network", + "refId": "B", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "- sum (rate (container_network_transmit_bytes_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (container, pod)", + "hide": false, + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "<- pod: {{ pod }} | {{ container }}", + "metric": "network", + "refId": "D", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (rate (container_network_receive_bytes_total{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (kubernetes_io_hostname, name, image)", + "hide": false, + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "-> docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", + "metric": "network", + "refId": "A", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "- sum (rate (container_network_transmit_bytes_total{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (kubernetes_io_hostname, name, image)", + "hide": false, + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "<- docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", + "metric": "network", + "refId": "C", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (rate (container_network_transmit_bytes_total{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (kubernetes_io_hostname, rkt_container_name)", + "hide": false, + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "-> rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", + "metric": "network", + "refId": "E", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "- sum (rate (container_network_transmit_bytes_total{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\",namespace=~\"^$namespace$\"}[1m])) by (kubernetes_io_hostname, rkt_container_name)", + "hide": false, + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "<- rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", + "metric": "network", + "refId": "F", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Containers network I/O (1m avg)", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "logBase": 1, + "show": true + }, + { + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + } + ], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "Containers network I/O", + "type": "row" + }, + { + "collapsed": false, + "datasource": { + "type": "datasource", + "uid": "grafana" 
+ }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 43 + }, + "id": 36, + "panels": [], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "System services CPU usage", + "type": "row" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 3, + "editable": true, + "error": false, + "fieldConfig": { + "defaults": { + "links": [] + }, + "overrides": [] + }, + "fill": 0, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 24, + "x": 0, + "y": 44 + }, + "height": "", + "hiddenSeries": false, + "id": 23, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "10.0.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": true, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "editorMode": "code", + "expr": "sum (rate (container_cpu_usage_seconds_total{systemd_service_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (systemd_service_name)", + "hide": false, + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "{{ systemd_service_name }}", + "metric": "container_cpu", + "range": true, + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "System services CPU usage (1m avg)", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": "cores", + "logBase": 1, + "show": true + }, + { + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + }, + { + "collapsed": true, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 51 + }, + "id": 40, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 2, + "editable": true, + "error": false, + "fill": 0, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 7, + "w": 24, + "x": 0, + "y": 29 + }, + "hiddenSeries": false, + "id": 26, + "isNew": true, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "sideWidth": 200, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": true, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum 
(container_memory_working_set_bytes{systemd_service_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}) by (systemd_service_name)", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "{{ systemd_service_name }}", + "metric": "container_memory_usage:sort_desc", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "System services memory usage", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "logBase": 1, + "show": true + }, + { + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + } + ], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "System services memory usage", + "type": "row" + }, + { + "collapsed": true, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 52 + }, + "id": 38, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 3, + "editable": true, + "error": false, + "fieldConfig": { + "defaults": { + "links": [] + }, + "overrides": [] + }, + "fill": 0, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 14, + "w": 24, + "x": 0, + "y": 46 + }, + "hiddenSeries": false, + "id": 20, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": false, + "show": true, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "10.0.2", + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (rate (container_cpu_usage_seconds_total{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (id)", + "hide": false, + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "{{ id }}", + "metric": "container_cpu", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "All processes CPU usage (1m avg)", + "tooltip": { + "msResolution": true, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "$$hashKey": "object:254", + "format": "none", + "label": "cores", + "logBase": 1, + "show": true + }, + { + "$$hashKey": "object:255", + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + } + ], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "All processes CPU usage", + "type": "row" + }, + { + "collapsed": true, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 53 + }, + "id": 42, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": 
"af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 2, + "editable": true, + "error": false, + "fill": 0, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 14, + "w": 24, + "x": 0, + "y": 47 + }, + "hiddenSeries": false, + "id": 28, + "isNew": true, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": false, + "show": true, + "sideWidth": 200, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": true, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (container_memory_working_set_bytes{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) by (id)", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "{{ id }}", + "metric": "container_memory_usage:sort_desc", + "refId": "A", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "All processes memory usage", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "bytes", + "logBase": 1, + "show": true + }, + { + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + } + ], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "All processes memory usage", + "type": "row" + }, + { + "collapsed": true, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 54 + }, + "id": 45, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "decimals": 2, + "editable": true, + "error": false, + "fill": 1, + "fillGradient": 0, + "grid": {}, + "gridPos": { + "h": 14, + "w": 24, + "x": 0, + "y": 48 + }, + "hiddenSeries": false, + "id": 29, + "isNew": true, + "legend": { + "alignAsTable": true, + "avg": true, + "current": true, + "max": false, + "min": false, + "rightSide": false, + "show": true, + "sideWidth": 200, + "sort": "current", + "sortDesc": true, + "total": false, + "values": true + }, + "lines": true, + "linewidth": 2, + "links": [], + "nullPointMode": "connected", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "sum (rate (container_network_receive_bytes_total{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (id)", + "interval": "10s", + "intervalFactor": 1, + "legendFormat": "-> {{ id }}", + "metric": "network", + "refId": "A", + "step": 10 + }, + { + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "expr": "- sum (rate (container_network_transmit_bytes_total{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (id)", + "interval": "10s", + "intervalFactor": 1, 
+ "legendFormat": "<- {{ id }}", + "metric": "network", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeRegions": [], + "title": "All processes network I/O (1m avg)", + "tooltip": { + "msResolution": false, + "shared": true, + "sort": 2, + "value_type": "cumulative" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "Bps", + "logBase": 1, + "show": true + }, + { + "format": "short", + "logBase": 1, + "show": false + } + ], + "yaxis": { + "align": false + } + } + ], + "targets": [ + { + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "refId": "A" + } + ], + "title": "All processes network I/O", + "type": "row" + } + ], + "refresh": "10s", + "schemaVersion": 38, + "style": "dark", + "tags": [ + "kubernetes" + ], + "templating": { + "list": [ + { + "allValue": "", + "current": { + "selected": true, + "text": "monitoring", + "value": "monitoring" + }, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "definition": "label_values(namespace)", + "hide": 0, + "includeAll": true, + "multi": false, + "name": "namespace", + "options": [], + "query": "label_values(namespace)", + "refresh": 1, + "regex": "", + "skipUrlSync": false, + "sort": 0, + "tagValuesQuery": "", + "tagsQuery": "", + "type": "query", + "useTags": false + }, + { + "allValue": ".*", + "current": { + "selected": false, + "text": "All", + "value": "$__all" + }, + "datasource": { + "type": "prometheus", + "uid": "af6b44aa-0703-4979-825c-c1afba946534" + }, + "definition": "", + "hide": 0, + "includeAll": true, + "multi": false, + "name": "Node", + "options": [], + "query": "label_values(kubernetes_io_hostname)", + "refresh": 1, + "regex": "", + "skipUrlSync": false, + "sort": 0, + "tagValuesQuery": "", + "tagsQuery": "", + "type": "query", + "useTags": false + } + ] + }, + "time": { + "from": "now-5m", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "time_options": [ + "5m", + "15m", + "1h", + "6h", + "12h", + "24h", + "2d", + "7d", + "30d" + ] + }, + "timezone": "browser", + "title": "Kubernetes Monitoring Dashboard", + "uid": "msqzbWjWk", + "version": 2, + "weekStart": "" + } \ No newline at end of file diff --git a/helm/capif/loki-logs.json b/helm/capif/loki-logs.json new file mode 100644 index 0000000000000000000000000000000000000000..e7e4d72162ad88ab094ee11d3c49e257664b3324 --- /dev/null +++ b/helm/capif/loki-logs.json @@ -0,0 +1,281 @@ +{ + "annotations": { + "list": [ + { + "builtIn": 1, + "datasource": { + "type": "datasource", + "uid": "grafana" + }, + "enable": true, + "hide": true, + "iconColor": "rgba(0, 211, 255, 1)", + "name": "Annotations & Alerts", + "type": "dashboard" + } + ] + }, + "description": "Simple Loki dashboard", + "editable": true, + "fiscalYearStartMonth": 0, + "gnetId": 13198, + "graphTooltip": 0, + "id": 9, + "links": [], + "liveNow": false, + "panels": [ + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { + "type": "loki", + "uid": "$datasource" + }, + "fill": 0, + "fillGradient": 0, + "gridPos": { + "h": 8, + "w": 24, + "x": 0, + "y": 0 + }, + "hiddenSeries": false, + "id": 4, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "hideEmpty": true, + "hideZero": true, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": 
true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "9.5.2", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": true, + "steppedLine": false, + "targets": [ + { + "datasource": { + "uid": "$datasource" + }, + "editorMode": "code", + "expr": "count_over_time({job=\"fluent-bit\"}[1m])", + "legendFormat": "{{ container_name }}", + "queryType": "range", + "refId": "A" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Metric Rate", + "tooltip": { + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "mode": "time", + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "logBase": 1, + "show": true + }, + { + "format": "short", + "logBase": 1, + "show": true + } + ], + "yaxis": { + "align": false + } + }, + { + "datasource": { + "type": "loki", + "uid": "e4f43364-7019-45a7-aa7a-14ce2d4ddb0b" + }, + "gridPos": { + "h": 14, + "w": 24, + "x": 0, + "y": 8 + }, + "id": 2, + "options": { + "dedupStrategy": "none", + "enableLogDetails": true, + "prettifyLogMessage": false, + "showCommonLabels": false, + "showLabels": false, + "showTime": true, + "sortOrder": "Descending", + "wrapLogMessage": true + }, + "pluginVersion": "7.1.3", + "targets": [ + { + "datasource": { + "type": "loki", + "uid": "e4f43364-7019-45a7-aa7a-14ce2d4ddb0b" + }, + "editorMode": "code", + "expr": "{job=~\"fluent-bit\"} |~ \"$string\"", + "legendFormat": "", + "queryType": "range", + "refId": "A" + } + ], + "title": "Loki Search", + "type": "logs" + } + ], + "refresh": "1m", + "schemaVersion": 38, + "style": "dark", + "tags": [], + "templating": { + "list": [ + { + "current": { + "selected": false, + "text": "Loki", + "value": "Loki" + }, + "hide": 0, + "includeAll": false, + "multi": false, + "name": "datasource", + "options": [], + "query": "loki", + "queryValue": "", + "refresh": 1, + "regex": "", + "skipUrlSync": false, + "type": "datasource" + }, + { + "allValue": ".*", + "current": { + "selected": false, + "text": "All", + "value": "$__all" + }, + "datasource": { + "type": "loki", + "uid": "$datasource" + }, + "definition": "label_values(container_name)", + "hide": 0, + "includeAll": true, + "label": "app", + "multi": false, + "name": "app", + "options": [], + "query": "label_values(container_name)", + "refresh": 2, + "regex": "(.*)-.*-.*-.*-.*-.*", + "skipUrlSync": false, + "sort": 0, + "tagValuesQuery": "", + "tagsQuery": "", + "type": "query", + "useTags": false + }, + { + "allValue": ".*", + "current": { + "selected": false, + "text": "All", + "value": "$__all" + }, + "datasource": { + "type": "loki", + "uid": "$datasource" + }, + "definition": "label_values(container_name)", + "hide": 0, + "includeAll": true, + "label": "job", + "multi": false, + "name": "job", + "options": [], + "query": "label_values(container_name)", + "refresh": 2, + "regex": "$app-(.*)", + "skipUrlSync": false, + "sort": 0, + "tagValuesQuery": "", + "tagsQuery": "", + "type": "query", + "useTags": false + }, + { + "current": { + "selected": false, + "text": "", + "value": "" + }, + "hide": 0, + "label": "string", + "name": "string", + "options": [ + { + "selected": true, + "text": "", + "value": "" + } + ], + "query": "", + "skipUrlSync": false, + "type": "textbox" + } + ] + }, + "time": { + "from": "now-1h", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + 
"30m", + "1h", + "2h", + "1d" + ] + }, + "timezone": "", + "title": "Loki Logs", + "uid": "ffxEJdvGz", + "version": 6, + "weekStart": "" +} \ No newline at end of file diff --git a/helm/capif/templates/_helpers.tpl b/helm/capif/templates/_helpers.tpl new file mode 100644 index 0000000000000000000000000000000000000000..35ce1ecf5e45fa36fd2bd7c02633c7dc7a3457b4 --- /dev/null +++ b/helm/capif/templates/_helpers.tpl @@ -0,0 +1,62 @@ +{{/* +Expand the name of the chart. +*/}} +{{- define "capif.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "capif.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "capif.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "capif.labels" -}} +helm.sh/chart: {{ include "capif.chart" . }} +{{ include "capif.selectorLabels" . }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "capif.selectorLabels" -}} +app.kubernetes.io/name: {{ include "capif.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Create the name of the service account to use +*/}} +{{- define "capif.serviceAccountName" -}} +{{- if .Values.serviceAccount.create }} +{{- default (include "capif.fullname" .) .Values.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.serviceAccount.name }} +{{- end }} +{{- end }} diff --git a/helm/capif/templates/access-control-policy.yaml b/helm/capif/templates/access-control-policy.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8b2b198beddf3e97793b994c43c86ef1df644e5e --- /dev/null +++ b/helm/capif/templates/access-control-policy.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: access-control-policy + labels: + io.kompose.service: access-control-policy + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.accessControlPolicy.type }} + selector: + io.kompose.service: access-control-policy + {{- include "capif.selectorLabels" . 
| nindent 4 }} + ports: + {{- .Values.accessControlPolicy.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/api-invocation-logs.yaml b/helm/capif/templates/api-invocation-logs.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a9b4d8fd63deb56395c11b225503ce60e7b248f9 --- /dev/null +++ b/helm/capif/templates/api-invocation-logs.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: api-invocation-logs + labels: + io.kompose.service: api-invocation-logs + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.apiInvocationLogs.type }} + selector: + io.kompose.service: api-invocation-logs + {{- include "capif.selectorLabels" . | nindent 4 }} + ports: + {{- .Values.apiInvocationLogs.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/api-invoker-management.yaml b/helm/capif/templates/api-invoker-management.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3eaeda40135ba0f97db68fe3acc00b96a05ead58 --- /dev/null +++ b/helm/capif/templates/api-invoker-management.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: api-invoker-management + labels: + io.kompose.service: api-invoker-management + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.apiInvokerManagement.type }} + selector: + io.kompose.service: api-invoker-management + {{- include "capif.selectorLabels" . | nindent 4 }} + ports: + {{- .Values.apiInvokerManagement.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/api-provider-management.yaml b/helm/capif/templates/api-provider-management.yaml new file mode 100644 index 0000000000000000000000000000000000000000..42379862e425bf8b796cc7234262fcf3310faa4a --- /dev/null +++ b/helm/capif/templates/api-provider-management.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: api-provider-management + labels: + io.kompose.service: api-provider-management + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.apiProviderManagement.type }} + selector: + io.kompose.service: api-provider-management + {{- include "capif.selectorLabels" . 
| nindent 4 }} + ports: + {{- .Values.apiProviderManagement.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/capif-events-configmap.yaml b/helm/capif/templates/capif-events-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ca31c23ec295806d0941623b19848a920ada61b9 --- /dev/null +++ b/helm/capif/templates/capif-events-configmap.yaml @@ -0,0 +1,28 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: capif-events-configmap +data: + config.yaml: | + mongo: { + 'user': '{{ .Values.mongo.mongo.env.mongoInitdbRootUsername }}', + 'password': '{{ .Values.mongo.mongo.env.mongoInitdbRootPassword }}', + 'db': 'capif', + 'col': 'eventsdetails', + 'certs_col': "certs", + 'capif_invokers_col': 'invokerdetails', + 'capif_providers_col': 'providerenrolmentdetails', + 'host': 'mongo', + 'port': "27017" + } + + monitoring: { + "fluent_bit_host": fluent-bit, + "fluent_bit_port": 24224, + "opentelemetry_url": "otel-collector", + "opentelemetry_port": "55680", + "opentelemetry_max_queue_size": 8192, + "opentelemetry_schedule_delay_millis": 20000, + "opentelemetry_max_export_batch_size": 2048, + "opentelemetry_export_timeout_millis": 60000 + } diff --git a/helm/capif/templates/capif-events.yaml b/helm/capif/templates/capif-events.yaml new file mode 100644 index 0000000000000000000000000000000000000000..40b3d7bdcf7a01aca3f6c78a108039ff1ba22ca3 --- /dev/null +++ b/helm/capif/templates/capif-events.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: capif-events + labels: + io.kompose.service: capif-events + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.capifEvents.type }} + selector: + io.kompose.service: capif-events + {{- include "capif.selectorLabels" . 
| nindent 4 }} + ports: + {{- .Values.capifEvents.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/capif-invocation-configmap.yaml b/helm/capif/templates/capif-invocation-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..68fc1f1444f515a6802332096e140615fda3c69f --- /dev/null +++ b/helm/capif/templates/capif-invocation-configmap.yaml @@ -0,0 +1,29 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: capif-invocation-configmap +data: + config.yaml: | + mongo: { + 'user': '{{ .Values.mongo.mongo.env.mongoInitdbRootUsername }}', + 'password': '{{ .Values.mongo.mongo.env.mongoInitdbRootPassword }}', + 'db': 'capif', + 'logs_col': 'invocationlogs', + 'invoker_col': 'invokerdetails', + 'prov_col': 'providerenrolmentdetails', + 'serv_col': 'serviceapidescriptions', + 'capif_users_col': "user", + 'host': 'mongo', + 'port': "27017" + } + + monitoring: { + "fluent_bit_host": fluent-bit, + "fluent_bit_port": 24224, + "opentelemetry_url": "otel-collector", + "opentelemetry_port": "55680", + "opentelemetry_max_queue_size": 8192, + "opentelemetry_schedule_delay_millis": 20000, + "opentelemetry_max_export_batch_size": 2048, + "opentelemetry_export_timeout_millis": 60000 + } diff --git a/helm/capif/templates/capif-invoker-configmap.yaml b/helm/capif/templates/capif-invoker-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..32bab3cbcafbc40d0e028f75c33c1da2b811c240 --- /dev/null +++ b/helm/capif/templates/capif-invoker-configmap.yaml @@ -0,0 +1,41 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: capif-invoker-configmap +data: + config.yaml: | + mongo: { + 'user': '{{ .Values.mongo.mongo.env.mongoInitdbRootUsername }}', + 'password': '{{ .Values.mongo.mongo.env.mongoInitdbRootPassword }}', + 'db': 'capif', + 'col': 'invokerdetails', + 'capif_users_col': "user", + 'certs_col': "certs", + 'service_col': 'serviceapidescriptions', + 'host': 'mongo', + 'port': "27017" + } + mongo_register: { + 'user': '{{ .Values.mongoRegister.mongo.env.mongoInitdbRootUsername }}', + 'password': '{{ .Values.mongoRegister.mongo.env.mongoInitdbRootPassword }}', + 'db': 'capif_users', + 'col': 'user', + 'host': 'mongo-register', + 'port': '27017' + } + ca_factory: { + "url": {{ quote .Values.parametersVault.env.vaultHostname }}, + "port": {{ quote .Values.parametersVault.env.vaultPort }}, + "token": {{ quote .Values.parametersVault.env.vaultAccessToken }} + } + + monitoring: { + "fluent_bit_host": fluent-bit, + "fluent_bit_port": 24224, + "opentelemetry_url": "otel-collector", + "opentelemetry_port": "55680", + "opentelemetry_max_queue_size": 8192, + "opentelemetry_schedule_delay_millis": 20000, + "opentelemetry_max_export_batch_size": 2048, + "opentelemetry_export_timeout_millis": 60000 + } \ No newline at end of file diff --git a/helm/capif/templates/capif-logs-configmap.yaml b/helm/capif/templates/capif-logs-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..53cae6ea3eaf65b017001ef504367eef67ce15d2 --- /dev/null +++ b/helm/capif/templates/capif-logs-configmap.yaml @@ -0,0 +1,27 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: capif-logs-configmap +data: + config.yaml: | + mongo: { + 'user': '{{ .Values.mongo.mongo.env.mongoInitdbRootUsername }}', + 'password': '{{ .Values.mongo.mongo.env.mongoInitdbRootPassword }}', + 'db': 'capif', + 'logs_col': 'invocationlogs', + 'capif_users_col': "user", + 'host': 'mongo', + 'port': "27017" + } + + monitoring: { + "fluent_bit_host": 
fluent-bit, + "fluent_bit_port": 24224, + "opentelemetry_url": "otel-collector", + "opentelemetry_port": "55680", + "opentelemetry_max_queue_size": 8192, + "opentelemetry_schedule_delay_millis": 20000, + "opentelemetry_max_export_batch_size": 2048, + "opentelemetry_export_timeout_millis": 60000 + } + diff --git a/helm/capif/templates/capif-provider-configmap.yaml b/helm/capif/templates/capif-provider-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..28e530fddd16402c9f7ec70734bf6f0d82220bed --- /dev/null +++ b/helm/capif/templates/capif-provider-configmap.yaml @@ -0,0 +1,41 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: capif-provider-configmap +data: + config.yaml: | + mongo: { + 'user': '{{ .Values.mongo.mongo.env.mongoInitdbRootUsername }}', + 'password': '{{ .Values.mongo.mongo.env.mongoInitdbRootPassword }}', + 'db': 'capif', + 'col': 'providerenrolmentdetails', + 'certs_col': "certs", + 'capif_users': 'user', + 'host': 'mongo', + 'port': "27017" + } + mongo_register: { + 'user': '{{ .Values.mongoRegister.mongo.env.mongoInitdbRootUsername }}', + 'password': '{{ .Values.mongoRegister.mongo.env.mongoInitdbRootPassword }}', + 'db': 'capif_users', + 'col': 'user', + 'host': 'mongo-register', + 'port': '27017' + } + ca_factory: { + "url": {{ quote .Values.parametersVault.env.vaultHostname }}, + "port": {{ quote .Values.parametersVault.env.vaultPort }}, + "token": {{ quote .Values.parametersVault.env.vaultAccessToken }} + } + + + monitoring: { + "fluent_bit_host": fluent-bit, + "fluent_bit_port": 24224, + "opentelemetry_url": "otel-collector", + "opentelemetry_port": "55680", + "opentelemetry_max_queue_size": 8192, + "opentelemetry_schedule_delay_millis": 20000, + "opentelemetry_max_export_batch_size": 2048, + "opentelemetry_export_timeout_millis": 60000 + } \ No newline at end of file diff --git a/helm/capif/templates/capif-published-configmap.yaml b/helm/capif/templates/capif-published-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..507afd4a769e5598435bdf084fec5e866bad04d2 --- /dev/null +++ b/helm/capif/templates/capif-published-configmap.yaml @@ -0,0 +1,27 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: capif-published-configmap +data: + config.yaml: | + mongo: { + 'user': '{{ .Values.mongo.mongo.env.mongoInitdbRootUsername }}', + 'password': '{{ .Values.mongo.mongo.env.mongoInitdbRootPassword }}', + 'db': 'capif', + 'col': 'serviceapidescriptions', + 'certs_col': "certs", + 'capif_provider_col': "providerenrolmentdetails", + 'host': 'mongo', + 'port': "27017" + } + + monitoring: { + "fluent_bit_host": fluent-bit, + "fluent_bit_port": 24224, + "opentelemetry_url": "otel-collector", + "opentelemetry_port": "55680", + "opentelemetry_max_queue_size": 8192, + "opentelemetry_schedule_delay_millis": 20000, + "opentelemetry_max_export_batch_size": 2048, + "opentelemetry_export_timeout_millis": 60000 + } \ No newline at end of file diff --git a/helm/capif/templates/capif-routing-info.yaml b/helm/capif/templates/capif-routing-info.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6de48aa99149970097522dc28f0a1b4340debae3 --- /dev/null +++ b/helm/capif/templates/capif-routing-info.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: capif-routing-info + labels: + io.kompose.service: capif-routing-info + {{- include "capif.labels" . 
| nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.capifRoutingInfo.type }} + selector: + io.kompose.service: capif-routing-info + {{- include "capif.selectorLabels" . | nindent 4 }} + ports: + {{- .Values.capifRoutingInfo.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/capif-security-configmap.yaml b/helm/capif/templates/capif-security-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ade6a59257fe064ae1a27d36af23dc608e452ad4 --- /dev/null +++ b/helm/capif/templates/capif-security-configmap.yaml @@ -0,0 +1,28 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: capif-security-configmap +data: + config.yaml: | + mongo: { + 'user': '{{ .Values.mongo.mongo.env.mongoInitdbRootUsername }}', + 'password': '{{ .Values.mongo.mongo.env.mongoInitdbRootPassword }}', + 'db': 'capif', + 'col': 'security', + 'capif_service_col': 'serviceapidescriptions', + 'certs_col': "certs", + 'capif_invokers' : 'invokerdetails', + 'host': 'mongo', + 'port': "27017" + } + + monitoring: { + "fluent_bit_host": fluent-bit, + "fluent_bit_port": 24224, + "opentelemetry_url": "otel-collector", + "opentelemetry_port": "55680", + "opentelemetry_max_queue_size": 8192, + "opentelemetry_schedule_delay_millis": 20000, + "opentelemetry_max_export_batch_size": 2048, + "opentelemetry_export_timeout_millis": 60000 + } \ No newline at end of file diff --git a/helm/capif/templates/capif-security.yaml b/helm/capif/templates/capif-security.yaml new file mode 100644 index 0000000000000000000000000000000000000000..e0bf7d885643e717c9fa4b587fe0cadbb25d0fec --- /dev/null +++ b/helm/capif/templates/capif-security.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: capif-security + labels: + io.kompose.service: capif-security + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.capifSecurity.type }} + selector: + io.kompose.service: capif-security + {{- include "capif.selectorLabels" . 
| nindent 4 }} + ports: + {{- .Values.capifSecurity.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/capif-service-configmap.yaml b/helm/capif/templates/capif-service-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..1cd3d6610c9e3850ec0231f3680345fac8aad969 --- /dev/null +++ b/helm/capif/templates/capif-service-configmap.yaml @@ -0,0 +1,27 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: capif-service-configmap +data: + config.yaml: | + mongo: { + 'user': '{{ .Values.mongo.mongo.env.mongoInitdbRootUsername }}', + 'password': '{{ .Values.mongo.mongo.env.mongoInitdbRootPassword }}', + 'db': 'capif', + 'col': 'serviceapidescriptions', + 'invokers_col': 'invokerdetails', + 'capif_users_col': "user", + 'host': 'mongo', + 'port': "27017" + } + + monitoring: { + "fluent_bit_host": fluent-bit, + "fluent_bit_port": 24224, + "opentelemetry_url": "otel-collector", + "opentelemetry_port": "55680", + "opentelemetry_max_queue_size": 8192, + "opentelemetry_schedule_delay_millis": 20000, + "opentelemetry_max_export_batch_size": 2048, + "opentelemetry_export_timeout_millis": 60000 + } \ No newline at end of file diff --git a/helm/capif/templates/deployment.yaml b/helm/capif/templates/deployment.yaml new file mode 100644 index 0000000000000000000000000000000000000000..4c2e0265d80a53931ac74bea856cd59a94b09ccb --- /dev/null +++ b/helm/capif/templates/deployment.yaml @@ -0,0 +1,1001 @@ +{{- if eq .Values.CapifClient.enable "true" }} +apiVersion: apps/v1 +kind: Deployment +metadata: + name: capif-client + labels: + io.kompose.service: capif-client + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.CapifClient.replicas }} + selector: + matchLabels: + io.kompose.service: capif-client + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: capif-client + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: CAPIF_HOSTNAME + value: nginx.mon.svc.cluster.local + - name: VAULT_HOSTNAME + value: {{ quote .Values.parametersVault.env.vaultHostname }} + - name: VAULT_PORT + value: {{ quote .Values.parametersVault.env.vaultPort }} + - name: VAULT_ACCESS_TOKEN + value: {{ quote .Values.parametersVault.env.vaultAccessToken }} + image: {{ .Values.CapifClient.image.repository }}:{{ .Values.CapifClient.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.CapifClient.image.imagePullPolicy }} + name: capif-client + resources: + {{- toYaml .Values.CapifClient.resources | nindent 12 }} + restartPolicy: Always +{{- end }} +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: access-control-policy + labels: + io.kompose.service: access-control-policy + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.accessControlPolicy.replicas }} + selector: + matchLabels: + io.kompose.service: access-control-policy + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: access-control-policy + {{- include "capif.selectorLabels" . 
| nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: CAPIF_HOSTNAME + value: {{ quote .Values.nginx.nginx.env.capifHostname }} + - name: MONITORING + value: {{ quote .Values.accessControlPolicy.env.monitoring }} + image: {{ .Values.accessControlPolicy.image.repository }}:{{ .Values.accessControlPolicy.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.accessControlPolicy.image.imagePullPolicy }} + name: access-control-policy + ports: + - containerPort: 8080 + resources: + {{- toYaml .Values.accessControlPolicy.resources | nindent 12 }} + readinessProbe: + tcpSocket: + port: 8080 + initialDelaySeconds: 10 + periodSeconds: 5 + restartPolicy: Always + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: api-invocation-logs + labels: + io.kompose.service: api-invocation-logs + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.apiInvocationLogs.replicas }} + selector: + matchLabels: + io.kompose.service: api-invocation-logs + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: api-invocation-logs + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + checksum/config: {{ include (print $.Template.BasePath "/capif-invocation-configmap.yaml") . | sha256sum }} + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: CAPIF_HOSTNAME + value: {{ quote .Values.nginx.nginx.env.capifHostname }} + - name: MONITORING + value: {{ quote .Values.apiInvocationLogs.apiInvocationLogs.env.monitoring }} + - name: VAULT_HOSTNAME + value: {{ quote .Values.parametersVault.env.vaultHostname }} + - name: VAULT_PORT + value: {{ quote .Values.parametersVault.env.vaultPort }} + - name: VAULT_ACCESS_TOKEN + value: {{ quote .Values.parametersVault.env.vaultAccessToken }} + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + image: {{ .Values.apiInvocationLogs.apiInvocationLogs.image.repository }}:{{ .Values.apiInvocationLogs.apiInvocationLogs.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.apiInvocationLogs.apiInvocationLogs.image.imagePullPolicy }} + name: api-invocation-logs + ports: + - containerPort: 8080 + volumeMounts: + - name: capif-invocation-config + mountPath: /usr/src/app/config.yaml + subPath: config.yaml + resources: + {{- toYaml .Values.apiInvocationLogs.apiInvocationLogs.resources | nindent 12 }} + readinessProbe: + tcpSocket: + port: 8080 + initialDelaySeconds: 10 + periodSeconds: 5 + volumes: + - name: capif-invocation-config + configMap: + name: capif-invocation-configmap + items: + - key: "config.yaml" + path: "config.yaml" + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: api-invoker-management + labels: + io.kompose.service: api-invoker-management + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.apiInvokerManagement.replicas }} + selector: + matchLabels: + io.kompose.service: api-invoker-management + {{- include "capif.selectorLabels" . 
| nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: api-invoker-management + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + checksum/config: {{ include (print $.Template.BasePath "/capif-invoker-configmap.yaml") . | sha256sum }} + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + - name: MONITORING + value: {{ quote .Values.apiInvokerManagement.apiInvokerManagement.env.monitoring }} + - name: VAULT_HOSTNAME + value: {{ quote .Values.parametersVault.env.vaultHostname }} + - name: VAULT_PORT + value: {{ quote .Values.parametersVault.env.vaultPort }} + - name: VAULT_ACCESS_TOKEN + value: {{ quote .Values.parametersVault.env.vaultAccessToken }} + image: {{ .Values.apiInvokerManagement.apiInvokerManagement.image.repository }}:{{ + .Values.apiInvokerManagement.apiInvokerManagement.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.apiInvokerManagement.apiInvokerManagement.image.imagePullPolicy }} + name: api-invoker-management + ports: + - containerPort: 8080 + volumeMounts: + - name: capif-invoker-config + mountPath: /usr/src/app/config.yaml + subPath: config.yaml + resources: + {{- toYaml .Values.apiInvokerManagement.apiInvokerManagement.resources | nindent 12 }} + volumes: + - name: capif-invoker-config + configMap: + name: capif-invoker-configmap + items: + - key: "config.yaml" + path: "config.yaml" + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: api-provider-management + labels: + io.kompose.service: api-provider-management + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.apiProviderManagement.replicas }} + selector: + matchLabels: + io.kompose.service: api-provider-management + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: api-provider-management + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + checksum/config: {{ include (print $.Template.BasePath "/capif-provider-configmap.yaml") . 
| sha256sum }} + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + - name: MONITORING + value: {{ quote .Values.apiProviderManagement.apiProviderManagement.env.monitoring }} + - name: VAULT_HOSTNAME + value: {{ quote .Values.parametersVault.env.vaultHostname }} + - name: VAULT_PORT + value: {{ quote .Values.parametersVault.env.vaultPort }} + - name: VAULT_ACCESS_TOKEN + value: {{ quote .Values.parametersVault.env.vaultAccessToken }} + image: {{ .Values.apiProviderManagement.apiProviderManagement.image.repository + }}:{{ .Values.apiProviderManagement.apiProviderManagement.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.apiProviderManagement.apiProviderManagement.image.imagePullPolicy }} + name: api-provider-management + ports: + - containerPort: 8080 + volumeMounts: + - name: capif-provider-config + mountPath: /usr/src/app/config.yaml + subPath: config.yaml + resources: + {{- toYaml .Values.apiProviderManagement.apiProviderManagement.resources | nindent 12 }} + volumes: + - name: capif-provider-config + configMap: + name: capif-provider-configmap + items: + - key: "config.yaml" + path: "config.yaml" + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: capif-events + labels: + io.kompose.service: capif-events + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.capifEvents.replicas }} + selector: + matchLabels: + io.kompose.service: capif-events + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: capif-events + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + checksum/config: {{ include (print $.Template.BasePath "/capif-events-configmap.yaml") . | sha256sum }} + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + - name: MONITORING + value: {{ quote .Values.capifEvents.capifEvents.env.monitoring }} + image: {{ .Values.capifEvents.capifEvents.image.repository }}:{{ .Values.capifEvents.capifEvents.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.capifEvents.capifEvents.image.imagePullPolicy }} + name: capif-events + ports: + - containerPort: 8080 + volumeMounts: + - name: capif-events-config + mountPath: /usr/src/app/config.yaml + subPath: config.yaml + resources: + {{- toYaml .Values.capifEvents.capifEvents.resources | nindent 12 }} + readinessProbe: + tcpSocket: + port: 8080 + initialDelaySeconds: 5 + periodSeconds: 5 + volumes: + - name: capif-events-config + configMap: + name: capif-events-configmap + items: + - key: "config.yaml" + path: "config.yaml" + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: capif-routing-info + labels: + io.kompose.service: capif-routing-info + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.capifRoutingInfo.replicas }} + selector: + matchLabels: + io.kompose.service: capif-routing-info + {{- include "capif.selectorLabels" . 
| nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: capif-routing-info + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + - name: MONITORING + value: {{ quote .Values.capifRoutingInfo.capifRoutingInfo.env.monitoring }} + image: {{ .Values.capifRoutingInfo.capifRoutingInfo.image.repository }}:{{ .Values.capifRoutingInfo.capifRoutingInfo.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.capifRoutingInfo.capifRoutingInfo.image.imagePullPolicy }} + name: capif-routing-info + ports: + - containerPort: 8080 + resources: + {{- toYaml .Values.capifRoutingInfo.capifRoutingInfo.resources | nindent 12 }} + readinessProbe: + tcpSocket: + port: 8080 + initialDelaySeconds: 5 + periodSeconds: 5 + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: capif-security + labels: + io.kompose.service: capif-security + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.capifSecurity.replicas }} + selector: + matchLabels: + io.kompose.service: capif-security + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: capif-security + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + checksum/config: {{ include (print $.Template.BasePath "/capif-security-configmap.yaml") . | sha256sum }} + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: CAPIF_HOSTNAME + value: {{ quote .Values.nginx.nginx.env.capifHostname }} + - name: MONITORING + value: {{ quote .Values.capifSecurity.capifSecurity.env.monitoring }} + - name: VAULT_HOSTNAME + value: {{ quote .Values.parametersVault.env.vaultHostname }} + - name: VAULT_PORT + value: {{ quote .Values.parametersVault.env.vaultPort }} + - name: VAULT_ACCESS_TOKEN + value: {{ quote .Values.parametersVault.env.vaultAccessToken }} + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + image: {{ .Values.capifSecurity.capifSecurity.image.repository }}:{{ .Values.capifSecurity.capifSecurity.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.capifSecurity.capifSecurity.image.imagePullPolicy }} + name: capif-security + ports: + - containerPort: 8080 + volumeMounts: + - name: capif-security-config + mountPath: /usr/src/app/config.yaml + subPath: config.yaml + resources: + {{- toYaml .Values.capifSecurity.capifSecurity.resources | nindent 12 }} + volumes: + - name: capif-security-config + configMap: + name: capif-security-configmap + items: + - key: "config.yaml" + path: "config.yaml" + restartPolicy: Always + restartPolicy: Always + +{{- if eq .Values.register.enable "true" }} +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: register + labels: + io.kompose.service: register + {{- include "capif.labels" . 
| nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.register.replicas }} + selector: + matchLabels: + io.kompose.service: register + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: register + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + checksum/config: {{ include (print $.Template.BasePath "/register-configmap.yaml") . | sha256sum }} + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + - name: VAULT_HOSTNAME + value: {{ quote .Values.parametersVault.env.vaultHostname }} + - name: VAULT_PORT + value: {{ quote .Values.parametersVault.env.vaultPort }} + - name: VAULT_ACCESS_TOKEN + value: {{ quote .Values.parametersVault.env.vaultAccessToken }} + image: {{ .Values.register.register.image.repository }}:{{ .Values.register.register.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.register.register.image.imagePullPolicy }} + name: register + ports: + - containerPort: 8080 + resources: + {{- toYaml .Values.register.register.resources | nindent 12 }} + volumeMounts: + - name: register-configmap + mountPath: /usr/src/app/config.yaml + subPath: config.yaml + volumes: + - name: register-configmap + configMap: + name: register-configmap + items: + - key: "config.yaml" + path: "config.yaml" + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: mongo-register + labels: + io.kompose.service: mongo-register + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.mongoRegister.replicas }} + selector: + matchLabels: + io.kompose.service: mongo-register + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: mongo-register + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: MONGO_INITDB_ROOT_PASSWORD + value: {{ quote .Values.mongoRegister.mongo.env.mongoInitdbRootPassword }} + - name: MONGO_INITDB_ROOT_USERNAME + value: {{ quote .Values.mongoRegister.mongo.env.mongoInitdbRootUsername }} + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + image: {{ .Values.mongoRegister.mongo.image.repository }}:{{ .Values.mongoRegister.mongo.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.mongoRegister.mongo.image.imagePullPolicy }} + name: mongo-register + ports: + - containerPort: 27017 + resources: + {{- toYaml .Values.mongoRegister.mongo.resources | nindent 12 }} + readinessProbe: + tcpSocket: + port: 27017 +# initialDelaySeconds: 5 + periodSeconds: 5 + restartPolicy: Always +{{- end }} +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: logs + labels: + io.kompose.service: logs + {{- include "capif.labels" . 
| nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.logs.replicas }} + selector: + matchLabels: + io.kompose.service: logs + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: logs + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + checksum/config: {{ include (print $.Template.BasePath "/capif-logs-configmap.yaml") . | sha256sum }} + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + - name: MONITORING + value: {{ quote .Values.logs.logs.env.monitoring }} + image: {{ .Values.logs.logs.image.repository }}:{{ .Values.logs.logs.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.logs.logs.image.imagePullPolicy }} + name: logs + ports: + - containerPort: 8080 + volumeMounts: + - name: capif-logs-config + mountPath: /usr/src/app/config.yaml + subPath: config.yaml + resources: + {{- toYaml .Values.logs.logs.resources | nindent 12 }} + readinessProbe: + tcpSocket: + port: 8080 + initialDelaySeconds: 5 + periodSeconds: 5 + volumes: + - name: capif-logs-config + configMap: + name: capif-logs-configmap + items: + - key: "config.yaml" + path: "config.yaml" + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: mongo + labels: + io.kompose.service: mongo + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.mongo.replicas }} + strategy: + type: Recreate + selector: + matchLabels: + io.kompose.service: mongo + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: mongo + {{- include "capif.selectorLabels" . 
| nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: MONGO_INITDB_ROOT_PASSWORD + value: {{ quote .Values.mongo.mongo.env.mongoInitdbRootPassword }} + - name: MONGO_INITDB_ROOT_USERNAME + value: {{ quote .Values.mongo.mongo.env.mongoInitdbRootUsername }} + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + image: {{ .Values.mongo.mongo.image.repository }}:{{ .Values.mongo.mongo.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.mongo.mongo.image.imagePullPolicy }} + name: mongo + ports: + - containerPort: 27017 + securityContext: + runAsUser: 0 + {{- if eq .Values.mongo.persistence.enable "true" }} + volumeMounts: + - name: mongo-pvc + mountPath: /data/db + {{- end }} + resources: + {{- toYaml .Values.mongo.mongo.resources | nindent 12 }} + livenessProbe: + tcpSocket: + port: 27017 + initialDelaySeconds: 20 + periodSeconds: 5 + readinessProbe: + tcpSocket: + port: 27017 +# initialDelaySeconds: 5 + periodSeconds: 5 + - name: mongo-helper + image: busybox + command: + - sh + - -c + - while true ; do echo alive ; sleep 10 ; done + {{- if eq .Values.mongo.persistence.enable "true" }} + volumeMounts: + - mountPath: /mongodata + name: mongo-pvc + {{- end }} + {{- if eq .Values.mongo.persistence.enable "true" }} + volumes: + - name: mongo-pvc + persistentVolumeClaim: + claimName: mongo-pvc + {{- end }} + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: mongo-express + labels: + io.kompose.service: mongo-express + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.mongoExpress.replicas }} + selector: + matchLabels: + io.kompose.service: mongo-express + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: mongo-express + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: ME_CONFIG_MONGODB_ADMINPASSWORD + value: {{ quote .Values.mongoExpress.mongoExpress.env.meConfigMongodbAdminpassword + }} + - name: ME_CONFIG_MONGODB_ADMINUSERNAME + value: {{ quote .Values.mongoExpress.mongoExpress.env.meConfigMongodbAdminusername + }} + - name: ME_CONFIG_MONGODB_URL + value: {{ quote .Values.mongoExpress.mongoExpress.env.meConfigMongodbUrl }} + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + image: {{ .Values.mongoExpress.mongoExpress.image.repository }}:{{ .Values.mongoExpress.mongoExpress.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.mongoExpress.mongoExpress.image.imagePullPolicy }} + name: mongo-express + ports: + - containerPort: 8081 + resources: + {{- toYaml .Values.mongoExpress.mongoExpress.resources | nindent 12 }} + readinessProbe: + tcpSocket: + port: 8081 +# initialDelaySeconds: 0 + periodSeconds: 5 + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx + labels: + io.kompose.service: nginx + {{- include "capif.labels" . 
| nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.nginx.replicas }} + selector: + matchLabels: + io.kompose.service: nginx + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: nginx + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: CAPIF_HOSTNAME + value: {{ quote .Values.nginx.nginx.env.capifHostname }} + - name: VAULT_HOSTNAME + value: {{ quote .Values.parametersVault.env.vaultHostname }} + - name: VAULT_PORT + value: {{ quote .Values.parametersVault.env.vaultPort }} + - name: VAULT_ACCESS_TOKEN + value: {{ quote .Values.parametersVault.env.vaultAccessToken }} + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + image: {{ .Values.nginx.nginx.image.repository }}:{{ .Values.nginx.nginx.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.nginx.nginx.image.imagePullPolicy }} + name: nginx + ports: + - containerPort: 8080 + - containerPort: 443 + livenessProbe: + tcpSocket: + port: 8080 + initialDelaySeconds: 20 + periodSeconds: 5 +# readinessProbe: +# tcpSocket: +# port: 8080 +# initialDelaySeconds: 60 +# periodSeconds: 5 + resources: + {{- toYaml .Values.nginx.nginx.resources | nindent 12 }} + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: published-apis + labels: + io.kompose.service: published-apis + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.publishedApis.replicas }} + selector: + matchLabels: + io.kompose.service: published-apis + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: published-apis + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + checksum/config: {{ include (print $.Template.BasePath "/capif-published-configmap.yaml") . | sha256sum }} + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + - name: MONITORING + value: {{ quote .Values.publishedApis.publishedApis.env.monitoring }} + image: {{ .Values.publishedApis.publishedApis.image.repository }}:{{ .Values.publishedApis.publishedApis.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.publishedApis.publishedApis.image.imagePullPolicy }} + name: published-apis + ports: + - containerPort: 8080 + volumeMounts: + - name: capif-published-config + mountPath: /usr/src/app/config.yaml + subPath: config.yaml + resources: + {{- toYaml .Values.publishedApis.publishedApis.resources | nindent 12 }} + readinessProbe: + tcpSocket: + port: 8080 + initialDelaySeconds: 5 + periodSeconds: 5 + volumes: + - name: capif-published-config + configMap: + name: capif-published-configmap + items: + - key: "config.yaml" + path: "config.yaml" + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis + labels: + io.kompose.service: redis + {{- include "capif.labels" . 
| nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.redis.replicas }} + selector: + matchLabels: + io.kompose.service: redis + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: redis + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - args: + - redis-server + env: + - name: REDIS_REPLICATION_MODE + value: {{ quote .Values.redis.redis.env.redisReplicationMode }} + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + image: {{ .Values.redis.redis.image.repository }}:{{ .Values.redis.redis.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.redis.redis.image.imagePullPolicy }} + name: redis + ports: + - containerPort: 6379 + resources: + {{- toYaml .Values.redis.redis.resources | nindent 12 }} + readinessProbe: + tcpSocket: + port: 6379 +# initialDelaySeconds: 5 + periodSeconds: 5 + livenessProbe: + tcpSocket: + port: 6379 + initialDelaySeconds: 5 + periodSeconds: 5 + restartPolicy: Always +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: service-apis + labels: + io.kompose.service: service-apis + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert +spec: + replicas: {{ .Values.serviceApis.replicas }} + selector: + matchLabels: + io.kompose.service: service-apis + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + io.kompose.network/services-default: "true" + io.kompose.service: service-apis + {{- include "capif.selectorLabels" . | nindent 8 }} + annotations: + date: "{{ now | unixEpoch }}" + checksum/config: {{ include (print $.Template.BasePath "/capif-service-configmap.yaml") . 
| sha256sum }} + spec: + hostAliases: + - ip: "{{ .Values.ingress.ip }}" + hostnames: + - "{{ .Values.nginx.nginx.env.capifHostname }}" + containers: + - env: + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + - name: MONITORING + value: {{ quote .Values.serviceApis.serviceApis.env.monitoring }} + image: {{ .Values.serviceApis.serviceApis.image.repository }}:{{ .Values.serviceApis.serviceApis.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.serviceApis.serviceApis.image.imagePullPolicy }} + name: service-apis + ports: + - containerPort: 8080 + volumeMounts: + - name: capif-service-config + mountPath: /usr/src/app/config.yaml + subPath: config.yaml + resources: + {{- toYaml .Values.serviceApis.serviceApis.resources | nindent 12 }} + readinessProbe: + tcpSocket: + port: 8080 + initialDelaySeconds: 5 + periodSeconds: 5 + volumes: + - name: capif-service-config + configMap: + name: capif-service-configmap + items: + - key: "config.yaml" + path: "config.yaml" + restartPolicy: Always \ No newline at end of file diff --git a/helm/capif/templates/fluent-bit-service.yaml b/helm/capif/templates/fluent-bit-service.yaml new file mode 100644 index 0000000000000000000000000000000000000000..90653b3aabb306b61829bd4f5ae02cec5dbdd817 --- /dev/null +++ b/helm/capif/templates/fluent-bit-service.yaml @@ -0,0 +1,24 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: v1 +kind: Service +metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + creationTimestamp: null + labels: + io.kompose.service: fluent-bit + {{- include "capif.labels" . | nindent 4 }} + name: fluent-bit +spec: + ports: + - name: "24224-tcp" + port: 24224 + targetPort: 24224 + - name: 24224-udp + port: 24224 + protocol: UDP + targetPort: 24224 + selector: + io.kompose.service: fluent-bit +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/fluentbit-configmap.yaml b/helm/capif/templates/fluentbit-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..20467b10a80d79431e308c6c87cd9cdc42fffcd6 --- /dev/null +++ b/helm/capif/templates/fluentbit-configmap.yaml @@ -0,0 +1,24 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: fluent-bit-configmap +data: + LOKI_URL: {{ quote .Values.monitoring.fluentBit.env.lokiUrl }} + fluent-bit.conf: | + [INPUT] + Name forward + Listen 0.0.0.0 + Port 24224 + [Output] + Name grafana-loki + Match * + Url ${LOKI_URL} + RemoveKeys source + Labels {job="fluent-bit"} + LabelKeys container_name, traceID + BatchWait 1s + BatchSize 1001024 + LineFormat json + LogLevel info +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/fluentbit-deployment.yaml b/helm/capif/templates/fluentbit-deployment.yaml new file mode 100644 index 0000000000000000000000000000000000000000..925ec022bb6bd5a2763ffacac23bb4113ca7072d --- /dev/null +++ b/helm/capif/templates/fluentbit-deployment.yaml @@ -0,0 +1,59 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + labels: + io.kompose.service: fluent-bit + {{- include "capif.labels" . | nindent 4 }} + name: fluent-bit +spec: + replicas: 1 + selector: + matchLabels: + io.kompose.service: fluent-bit + {{- include "capif.selectorLabels" . 
| nindent 6 }} + strategy: + type: Recreate + template: + metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + checksum/config: {{ include (print $.Template.BasePath "/fluentbit-configmap.yaml") . | sha256sum }} + creationTimestamp: null + labels: + io.kompose.network/monitoring-default: "true" + io.kompose.service: fluent-bit + {{- include "capif.selectorLabels" . | nindent 8 }} + spec: + containers: + - env: + - name: LOKI_URL + valueFrom: + configMapKeyRef: + name: fluent-bit-configmap + key: LOKI_URL + image: {{ .Values.monitoring.fluentBit.image.repository }}:{{ .Values.monitoring.fluentBit.image.tag }} + name: fluent-bit + ports: + - containerPort: 24224 + - containerPort: 24224 + protocol: UDP + resources: + {{- toYaml .Values.monitoring.fluentBit.resources | nindent 12 }} + volumeMounts: + - name: fluent-bit-conf + mountPath: /fluent-bit/etc/fluent-bit.conf + subPath: fluent-bit.conf + restartPolicy: Always + volumes: + - name: fluent-bit-conf + configMap: + name: fluent-bit-configmap + items: + - key: "fluent-bit.conf" + path: "fluent-bit.conf" +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/grafana-configmap.yaml b/helm/capif/templates/grafana-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..654101f720e7c6d515aa17e5f14adedd532c44e9 --- /dev/null +++ b/helm/capif/templates/grafana-configmap.yaml @@ -0,0 +1,108 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: datasources +data: + datasources.yaml: | + apiVersion: 1 + datasources: + - name: Loki + type: loki + uid: e4f43364-7019-45a7-aa7a-14ce2d4ddb0b + typeName: Loki + typeLogoUrl: public/app/plugins/datasource/loki/img/loki_icon.svg + access: proxy + url: {{ .Values.monitoring.grafana.env.lokiUrl }} + user: '' + database: '' + basicAuth: false + isDefault: false + jsonData: + derivedFields: + - datasourceUid: fee7e008-f836-424a-b701-88cad583c715 + matcherRegex: '"traceID":\s*"([a-fA-F0-9]+)"' + name: traceID + url: "$${__value.raw}" + readOnly: false + - name: Prometheus + type: prometheus + typeName: Prometheus + typeLogoUrl: public/app/plugins/datasource/prometheus/img/prometheus_logo.svg + access: proxy + url: {{ .Values.monitoring.grafana.env.prometheusUrl }} + uid: af6b44aa-0703-4979-825c-c1afba946534 + user: '' + database: '' + basicAuth: false + isDefault: false + jsonData: + httpMethod: POST + prometheusType: Prometheus + prometheusVersion: 2.40.1 + readOnly: false + - name: Tempo + type: tempo + typeName: Tempo + typeLogoUrl: public/app/plugins/datasource/tempo/img/tempo_logo.svg + uid: fee7e008-f836-424a-b701-88cad583c715 + access: proxy + url: {{ .Values.monitoring.grafana.env.tempoUrl }} + user: '' + database: '' + basicAuth: false + isDefault: false + jsonData: + lokiSearch: + datasourceUid: e4f43364-7019-45a7-aa7a-14ce2d4ddb0b + readOnly: false +--- + +apiVersion: v1 +kind: ConfigMap +metadata: + name: default +data: + default.yaml: | + apiVersion: 1 + providers: + - name: Default # A uniquely identifiable name for the provider + orgId: 1 + folder: "" # The folder where to place the dashboards + folderUid: "" + type: file + disableDeletion: false + allowUiUpdates: true + options: + path: /var/lib/grafana/dashboards + +--- + +apiVersion: v1 +kind: ConfigMap +metadata: + name: docker-monitoring +data: + Docker-monitoring.json: |- +{{ .Files.Get "docker-monitoring.json" | indent 4 }} + +--- + +apiVersion: v1 +kind: ConfigMap +metadata: + name: 
kubernetes-dashboard +data: + kubernetes-dashboard.json: | +{{ .Files.Get "kubernetes-dashboard.json" | indent 4 }} + +--- + +apiVersion: v1 +kind: ConfigMap +metadata: + name: loki-logs +data: + loki-logs.json: | +{{ .Files.Get "loki-logs.json" | indent 4 }} +{{- end }} diff --git a/helm/capif/templates/grafana-deployment.yaml b/helm/capif/templates/grafana-deployment.yaml new file mode 100644 index 0000000000000000000000000000000000000000..844f32ece2ce27cc1d40774377c5dce7c6e5eaed --- /dev/null +++ b/helm/capif/templates/grafana-deployment.yaml @@ -0,0 +1,109 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + labels: + io.kompose.service: grafana + {{- include "capif.labels" . | nindent 4 }} + name: grafana +spec: + replicas: 1 + strategy: + type: Recreate + selector: + matchLabels: + io.kompose.service: grafana + {{- include "capif.selectorLabels" . | nindent 6 }} + strategy: + type: Recreate + template: + metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + checksum/config: {{ include (print $.Template.BasePath "/grafana-configmap.yaml") . | sha256sum }} + labels: + io.kompose.network/monitoring-default: "true" + io.kompose.service: grafana + {{- include "capif.selectorLabels" . | nindent 8 }} + spec: + containers: + - env: + - name: GF_AUTH_ANONYMOUS_ENABLED + value: {{ quote .Values.monitoring.grafana.env.gfAuthAnonymousEnable }} + - name: GF_SECURITY_ALLOW_EMBEDDING + value: {{ quote .Values.monitoring.grafana.env.gfSecurityAllowEmbedding }} + - name: GF_PATHS_PROVISIONING + value: /etc/grafana/provisioning + image: {{ .Values.monitoring.grafana.image.repository }}:{{ .Values.monitoring.grafana.image.tag }} + name: grafana + envFrom: + - secretRef: + name: grafana-secrets + ports: + - containerPort: 3000 + resources: + {{- toYaml .Values.monitoring.grafana.resources | nindent 12 }} + securityContext: + runAsUser: 0 + volumeMounts: + - name: grafana-datasources + mountPath: /etc/grafana/provisioning/datasources/datasources.yaml + subPath: datasources.yaml + - name: grafana-default + mountPath: /etc/grafana/provisioning/dashboards/default.yaml + subPath: default.yaml + - name: grafana-docker + mountPath: /var/lib/grafana/dashboards/Docker-monitoring.json + subPath: Docker-monitoring.json + - name: kubernetes-dashboard + mountPath: /var/lib/grafana/dashboards/kubernetes-dashboard.json + subPath: kubernetes-dashboard.json + - name: grafana-loki + mountPath: /var/lib/grafana/dashboards/Loki-Logs.json + subPath: loki-logs.json + {{- if eq .Values.monitoring.grafana.persistence.enable "true" }} + - name: grafana-claim0 + mountPath: /var/lib/grafana + {{- end }} + volumes: + - name: grafana-datasources + configMap: + name: datasources + items: + - key: "datasources.yaml" + path: "datasources.yaml" + - name: grafana-default + configMap: + name: default + items: + - key: "default.yaml" + path: "default.yaml" + - name: grafana-docker + configMap: + name: docker-monitoring + items: + - key: "Docker-monitoring.json" + path: "Docker-monitoring.json" + - name: kubernetes-dashboard + configMap: + name: kubernetes-dashboard + items: + - key: "kubernetes-dashboard.json" + path: "kubernetes-dashboard.json" + - name: grafana-loki + configMap: + name: loki-logs + items: + - key: "loki-logs.json" + path: "loki-logs.json" + {{- if eq .Values.monitoring.grafana.persistence.enable "true" }} 
+ - name: grafana-claim0 + persistentVolumeClaim: + claimName: grafana-claim0 + {{- end }} + restartPolicy: Always +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/grafana-ingress-route.yaml b/helm/capif/templates/grafana-ingress-route.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2e2648bf5f4b7e52d5d1e1ca73037d4e2dc7a18b --- /dev/null +++ b/helm/capif/templates/grafana-ingress-route.yaml @@ -0,0 +1,18 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if eq .Values.monitoring.grafana.ingressRoute.enable "true" }} +apiVersion: traefik.containo.us/v1alpha1 +kind: IngressRoute +metadata: + name: grafana-ingress-route +spec: + entryPoints: [web] + routes: + - kind: Rule + match: Host(`{{ .Values.monitoring.grafana.ingressRoute.host }}`) + services: + - kind: Service + name: grafana + port: {{ .Values.monitoring.grafana.service.port }} + scheme: http +{{- end }} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/grafana-ingress.yaml b/helm/capif/templates/grafana-ingress.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7d7d0cba2890fee678fc110be2f44592019be198 --- /dev/null +++ b/helm/capif/templates/grafana-ingress.yaml @@ -0,0 +1,34 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if .Values.monitoring.grafana.ingress.enabled -}} +{{- $svcPort := .Values.monitoring.grafana.service.port -}} +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: grafana-ingress + labels: + {{- include "capif.labels" . | nindent 4 }} + {{- with .Values.monitoring.grafana.ingress.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: +{{- if .Values.monitoring.grafana.ingress.ingressClassName }} + ingressClassName: {{ .Values.monitoring.grafana.ingress.ingressClassName }} +{{- end }} + rules: + {{- range .Values.monitoring.grafana.ingress.hosts }} + - host: {{ .host | quote }} + http: + paths: + {{- range .paths }} + - path: {{ .path }} + pathType: {{ .pathType }} + backend: + service: + name: grafana + port: + number: {{ $svcPort }} + {{- end }} + {{- end }} +{{- end }} +{{- end }} diff --git a/helm/capif/templates/grafana-pvc.yaml b/helm/capif/templates/grafana-pvc.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b2c6672ec88770fbc2ef0ff3d67a826971be668f --- /dev/null +++ b/helm/capif/templates/grafana-pvc.yaml @@ -0,0 +1,16 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if eq .Values.monitoring.grafana.persistence.enable "true" }} +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + io.kompose.service: grafana-claim0 + name: grafana-claim0 +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: {{ .Values.monitoring.grafana.persistence.storage }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/grafana-secrets.yaml b/helm/capif/templates/grafana-secrets.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a6796d4a9de32cfdd2557ff68dfd1ec721414005 --- /dev/null +++ b/helm/capif/templates/grafana-secrets.yaml @@ -0,0 +1,10 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: v1 +kind: Secret +metadata: + name: grafana-secrets +type: Opaque +data: + GF_AUTH_ANONYMOUS_ORG_ROLE: {{ .Values.monitoring.grafana.env.gfAuthAnonymousOrgRole | b64enc | quote }} + GF_SECURITY_ADMIN_PASSWORD: {{ .Values.monitoring.grafana.env.gfSecurityAdminPassword | b64enc | quote }} +{{- end }} \ No newline at end of file diff --git 
a/helm/capif/templates/grafana-service.yaml b/helm/capif/templates/grafana-service.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c6280438f7cd896ae2e05f41a2de6039593af55c --- /dev/null +++ b/helm/capif/templates/grafana-service.yaml @@ -0,0 +1,17 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: v1 +kind: Service +metadata: + name: grafana + labels: + {{- include "capif.labels" . | nindent 4 }} +spec: + type: {{ .Values.monitoring.grafana.service.type }} + ports: + - port: {{ .Values.monitoring.grafana.service.port }} + targetPort: {{ .Values.monitoring.grafana.service.port }} + protocol: TCP + name: http-port + selector: + io.kompose.service: grafana +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/logs.yaml b/helm/capif/templates/logs.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7382eff22fd777be0631d70f913efdf947c8c4b5 --- /dev/null +++ b/helm/capif/templates/logs.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: logs + labels: + io.kompose.service: logs + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.logs.type }} + selector: + io.kompose.service: logs + {{- include "capif.selectorLabels" . | nindent 4 }} + ports: + {{- .Values.logs.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/loki-deployment.yaml b/helm/capif/templates/loki-deployment.yaml new file mode 100644 index 0000000000000000000000000000000000000000..cadf37d067e88b18fb7c835a5af2f5c0bafea5f4 --- /dev/null +++ b/helm/capif/templates/loki-deployment.yaml @@ -0,0 +1,54 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + labels: + io.kompose.service: loki + {{- include "capif.labels" . | nindent 4 }} + name: loki +spec: + replicas: 1 + strategy: + type: Recreate + selector: + matchLabels: + io.kompose.service: loki + {{- include "capif.selectorLabels" . | nindent 6 }} + strategy: {} + template: + metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + labels: + io.kompose.network/monitoring-default: "true" + io.kompose.service: loki + {{- include "capif.selectorLabels" . 
| nindent 8 }} + spec: + containers: + - args: + - -config.file=/etc/loki/local-config.yaml + image: {{ .Values.monitoring.loki.image.repository }}:{{ .Values.monitoring.loki.image.tag }} + name: loki + ports: + - containerPort: 3100 + {{- if eq .Values.monitoring.loki.persistence.enable "true" }} + volumeMounts: + - name: loki-claim0 + mountPath: /loki/wal + {{- end }} + resources: + {{- toYaml .Values.monitoring.loki.resources | nindent 12 }} + securityContext: + runAsUser: 0 + {{- if eq .Values.monitoring.loki.persistence.enable "true" }} + volumes: + - name: loki-claim0 + persistentVolumeClaim: + claimName: loki-claim0 + {{- end }} + restartPolicy: Always +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/loki-pvc.yaml b/helm/capif/templates/loki-pvc.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7da7816938afdb1850f7d2f312f967811dafb69d --- /dev/null +++ b/helm/capif/templates/loki-pvc.yaml @@ -0,0 +1,16 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if eq .Values.monitoring.loki.persistence.enable "true" }} +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + io.kompose.service: loki-claim0 + name: loki-claim0 +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: {{ .Values.monitoring.loki.persistence.storage }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/loki-service.yaml b/helm/capif/templates/loki-service.yaml new file mode 100644 index 0000000000000000000000000000000000000000..cf711a9d162ef071fd1601983f1f8e9f11814c96 --- /dev/null +++ b/helm/capif/templates/loki-service.yaml @@ -0,0 +1,19 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: v1 +kind: Service +metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + labels: + io.kompose.service: loki + {{- include "capif.labels" . | nindent 4 }} + name: loki +spec: + ports: + - name: "loki-port" + port: 3100 + targetPort: 3100 + selector: + io.kompose.service: loki +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/mongo-express.yaml b/helm/capif/templates/mongo-express.yaml new file mode 100644 index 0000000000000000000000000000000000000000..28d553b2fa05594f414638d5122e3cc442889ec7 --- /dev/null +++ b/helm/capif/templates/mongo-express.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: mongo-express + labels: + io.kompose.service: mongo-express + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.mongoExpress.type }} + selector: + io.kompose.service: mongo-express + {{- include "capif.selectorLabels" . 
| nindent 4 }} + ports: + {{- .Values.mongoExpress.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/mongo-pvc.yaml b/helm/capif/templates/mongo-pvc.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2996d57e2c121fe12c2fd04d5ceaceee14320f4c --- /dev/null +++ b/helm/capif/templates/mongo-pvc.yaml @@ -0,0 +1,16 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if eq .Values.mongo.persistence.enable "true" }} +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + io.kompose.service: mongo-pvc + name: mongo-pvc +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: {{ .Values.mongo.persistence.storage }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/mongo-register.yaml b/helm/capif/templates/mongo-register.yaml new file mode 100644 index 0000000000000000000000000000000000000000..82b307f5f2bae9e026b1efb78c15a09d319d7306 --- /dev/null +++ b/helm/capif/templates/mongo-register.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: mongo-register + labels: + io.kompose.service: mongo-register + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.mongoRegister.type }} + selector: + io.kompose.service: mongo-register + {{- include "capif.selectorLabels" . | nindent 4 }} + ports: + {{- .Values.mongoRegister.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/mongo.yaml b/helm/capif/templates/mongo.yaml new file mode 100644 index 0000000000000000000000000000000000000000..864276480d68191f6363191ab080936d43484d17 --- /dev/null +++ b/helm/capif/templates/mongo.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: mongo + labels: + io.kompose.service: mongo + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.mongo.type }} + selector: + io.kompose.service: mongo + {{- include "capif.selectorLabels" . 
| nindent 4 }} + ports: + {{- .Values.mongo.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/nginx-ingress-route.yaml b/helm/capif/templates/nginx-ingress-route.yaml new file mode 100644 index 0000000000000000000000000000000000000000..57ca0bed1a4c8de978e244a34f346edb8606b003 --- /dev/null +++ b/helm/capif/templates/nginx-ingress-route.yaml @@ -0,0 +1,17 @@ +{{- if eq .Values.nginx.ingressType "IngressRoute" }} +--- +apiVersion: traefik.containo.us/v1alpha1 +kind: IngressRoute +metadata: + name: nginx-capif-ingress-route +spec: + entryPoints: [web] + routes: + - kind: Rule + match: Host(`{{ .Values.nginx.nginx.env.capifHostname }}`) && Path(`/ca-root`, `/sign-csr`, `/certdata`, `/register`, `/testdata`, `/getauth`, `/test`) + services: + - kind: Service + name: nginx + port: 8080 + scheme: http +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/nginx-ssl-ingress-route.yaml b/helm/capif/templates/nginx-ssl-ingress-route.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8c806b69a712ee995f4c97c8e406383acb507190 --- /dev/null +++ b/helm/capif/templates/nginx-ssl-ingress-route.yaml @@ -0,0 +1,18 @@ +{{- if eq .Values.nginx.ingressType "IngressRoute" }} +--- +apiVersion: traefik.containo.us/v1alpha1 +kind: IngressRoute +metadata: + name: nginx-ssl-capif-ingress-route +spec: + entryPoints: [web] + routes: + - kind: Rule + match: Host(`{{ .Values.nginx.nginx.env.capifHostname }}`) + services: + - kind: Service + name: nginx + port: 443 + tls: + passthrough: true +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/nginx-ssl-route.yaml b/helm/capif/templates/nginx-ssl-route.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3e24b72131a56915468f2dc40329fbf9c480f222 --- /dev/null +++ b/helm/capif/templates/nginx-ssl-route.yaml @@ -0,0 +1,22 @@ +{{- if eq .Values.env "openshift" }} +apiVersion: route.openshift.io/v1 +kind: Route +metadata: + labels: + name: nginx-ssl +spec: + host: {{ .Values.nginx.nginx.env.capifHostname }} + port: + targetPort: "443" + tls: + termination: passthrough + to: + kind: Service + name: nginx + weight: 100 +status: + ingress: + - conditions: + host: {{ .Values.nginx.nginx.env.capifHostname }} + routerCanonicalHostname: router-default.apps.ocp-epg.hi.inet +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/nginx-ssl.yaml b/helm/capif/templates/nginx-ssl.yaml new file mode 100644 index 0000000000000000000000000000000000000000..275e2c782095b810a385e95babaef5da223cb396 --- /dev/null +++ b/helm/capif/templates/nginx-ssl.yaml @@ -0,0 +1,32 @@ +{{- if eq .Values.nginx.ingressType "Ingress" }} +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: nginx-register + labels: + {{- include "capif.labels" . | nindent 4 }} + {{- with .Values.nginx.annotations }} + annotations: + {{- toYaml . 
| nindent 4 }} + cert-manager.io/issuer: letsencrypt-issuer + {{- end }} +spec: +{{- if .Values.nginx.ingressClassName }} + ingressClassName: {{ .Values.nginx.ingressClassName }} +{{- end }} + rules: + - host: "register{{ .Values.nginx.nginx.env.capifHostname }}" + http: + paths: + - backend: + service: + name: 'register' + port: + number: 8084 + path: / + pathType: Prefix + tls: + - hosts: + - "register{{ .Values.nginx.nginx.env.capifHostname }}" + secretName: letsencrypt-secret +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/nginx.yaml b/helm/capif/templates/nginx.yaml new file mode 100644 index 0000000000000000000000000000000000000000..61856f56231201a76f82b2dce5b79c802a8e6953 --- /dev/null +++ b/helm/capif/templates/nginx.yaml @@ -0,0 +1,48 @@ +{{- if eq .Values.nginx.ingressType "Ingress" }} +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: nginx + labels: + {{- include "capif.labels" . | nindent 4 }} + {{- with .Values.nginx.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" + nginx.ingress.kubernetes.io/ssl-passthrough: "true" + nginx.ingress.kubernetes.io/ssl-redirect: "true" + {{- end }} +spec: +{{- if .Values.nginx.ingressClassName }} + ingressClassName: {{ .Values.nginx.ingressClassName }} +{{- end }} + rules: + - host: "{{ .Values.nginx.nginx.env.capifHostname }}" + http: + paths: + - backend: + service: + name: 'nginx' + port: + number: 443 + path: / + pathType: Prefix +{{- end }} +--- +apiVersion: v1 +kind: Service +metadata: + name: nginx + labels: + io.kompose.service: nginx + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.nginx.type }} + selector: + io.kompose.service: nginx + {{- include "capif.selectorLabels" . | nindent 4 }} + ports: + {{- .Values.nginx.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/otel-collector-configmap.yaml b/helm/capif/templates/otel-collector-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..fed15350a9b18e02a7e0a09935526034dfd316d9 --- /dev/null +++ b/helm/capif/templates/otel-collector-configmap.yaml @@ -0,0 +1,37 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: open-telemetry-configmap +data: + otel-collector-config.yaml: | + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:55680 + http: + endpoint: 0.0.0.0:4318 + + processors: + batch: + + + exporters: + logging: + loglevel: debug + otlp: + #timeout: 60s + endpoint: {{ .Values.monitoring.otel.configMap.tempoEndpoint }} + tls: + insecure: true + + + service: + pipelines: + traces: + receivers: [otlp] + processors: [batch] + # exporters: [otlp] + exporters: [otlp] +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/otel-collector-deployment.yaml b/helm/capif/templates/otel-collector-deployment.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8c83eca1ae2fe50c98b8ce73dde333c59f710579 --- /dev/null +++ b/helm/capif/templates/otel-collector-deployment.yaml @@ -0,0 +1,54 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + labels: + io.kompose.service: otel-collector + {{- include "capif.labels" . 
| nindent 4 }} + name: otel-collector +spec: + replicas: 1 + selector: + matchLabels: + io.kompose.service: otel-collector + {{- include "capif.selectorLabels" . | nindent 6 }} + strategy: + type: Recreate + template: + metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + checksum/config: {{ include (print $.Template.BasePath "/otel-collector-configmap.yaml") . | sha256sum }} + labels: + io.kompose.network/monitoring-default: "true" + io.kompose.service: otel-collector + {{- include "capif.selectorLabels" . | nindent 8 }} + spec: + containers: + - args: + - --config + - /etc/otel-collector-config.yaml + image: {{ .Values.monitoring.otel.image.repository }}:{{ .Values.monitoring.otel.image.tag }} + name: otel-collector + ports: + - containerPort: 55680 + - containerPort: 4317 + resources: + {{- toYaml .Values.monitoring.otel.resources | nindent 12 }} + volumeMounts: + - name: op-telemetry + mountPath: /etc/otel-collector-config.yaml + subPath: otel-collector-config.yaml + restartPolicy: Always + volumes: + - name: op-telemetry + configMap: + name: open-telemetry-configmap + items: + - key: "otel-collector-config.yaml" + path: "otel-collector-config.yaml" +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/otel-collector-service.yaml b/helm/capif/templates/otel-collector-service.yaml new file mode 100644 index 0000000000000000000000000000000000000000..761b8ceb4a3d1d96bfe098d7485037b8a2b21c78 --- /dev/null +++ b/helm/capif/templates/otel-collector-service.yaml @@ -0,0 +1,22 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: v1 +kind: Service +metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + labels: + io.kompose.service: otel-collector + {{- include "capif.labels" . 
| nindent 4 }} + name: otel-collector +spec: + ports: + - name: "grpc-port" + port: 55680 + targetPort: 55680 + - name: "http-port" + port: 4318 + targetPort: 4318 + selector: + io.kompose.service: otel-collector +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/prometheus-clusterrole.yaml b/helm/capif/templates/prometheus-clusterrole.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3470ffd70714c68113df3ad009b36980cd1f147f --- /dev/null +++ b/helm/capif/templates/prometheus-clusterrole.yaml @@ -0,0 +1,49 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if eq .Values.monitoring.prometheus.enable "true" }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: prometheus + labels: + app: prometheus +rules: +- apiGroups: [""] + resources: + - nodes + - nodes/proxy + - services + - endpoints + - pods + verbs: ["get", "list", "watch"] +- apiGroups: + - extensions + resources: + - ingresses + verbs: ["get", "list", "watch"] +- nonResourceURLs: ["/metrics"] + verbs: ["get"] +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: prometheus + namespace: {{ .Release.Namespace }} + labels: + app: prometheus +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: prometheus + labels: + app: prometheus +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: prometheus +subjects: +- kind: ServiceAccount + name: prometheus + namespace: {{ .Release.Namespace }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/prometheus-configmap.yaml b/helm/capif/templates/prometheus-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d2ab952e4ddb7013a0019f5c00586d5043db5cfe --- /dev/null +++ b/helm/capif/templates/prometheus-configmap.yaml @@ -0,0 +1,141 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if eq .Values.monitoring.prometheus.enable "true" }} +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + app: prometheus + name: prometheus-config +data: + prometheus.rules: |- + groups: + - name: devopscube alert + rules: + - alert: High Pod Memory + expr: sum(container_memory_usage_bytes) > 1 + for: 1m + labels: + severity: slack + annotations: + summary: High Memory Usage + prometheus.yml: |- + global: + scrape_interval: 30s + scrape_timeout: 10s + scrape_configs: + #------------- configuration to collect pods metrics kubelet ------------------- + - job_name: 'kubernetes-cadvisor' + scheme: https + tls_config: + ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + kubernetes_sd_configs: + - role: node + relabel_configs: + - action: labelmap + regex: __meta_kubernetes_node_label_(.+) + - target_label: __address__ + replacement: kubernetes.default.svc:443 + - source_labels: [__meta_kubernetes_node_name] + regex: (.+) + target_label: __metrics_path__ + replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor + #------------- configuration to collect pods metrics ------------------- + - job_name: 'kubernetes-pods' + honor_labels: true + kubernetes_sd_configs: + - role: pod + relabel_configs: + # select only those pods that has "prometheus.io/scrape: true" annotation + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] + action: keep + regex: true + # set metrics_path (default is /metrics) to the metrics path specified in "prometheus.io/path: " annotation. 
+ - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # set the scrapping port to the port specified in "prometheus.io/port: " annotation and set address accordingly. + - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] + action: replace + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + target_label: __address__ + - action: labelmap + regex: __meta_kubernetes_pod_label_(.+) + - source_labels: [__meta_kubernetes_namespace] + action: replace + target_label: kubernetes_namespace + - source_labels: [__meta_kubernetes_pod_name] + action: replace + target_label: kubernetes_pod_name + + #-------------- configuration to collect metrics from service endpoints ----------------------- + - job_name: 'kubernetes-service-endpoints' + honor_labels: true + kubernetes_sd_configs: + - role: endpoints + relabel_configs: + # select only those endpoints whose service has "prometheus.io/scrape: true" annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] + action: keep + regex: true + # set the metrics_path to the path specified in "prometheus.io/path: " annotation. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # set the scrapping port to the port specified in "prometheus.io/port: " annotation and set address accordingly. + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) + - source_labels: [__meta_kubernetes_namespace] + action: replace + target_label: kubernetes_namespace + - source_labels: [__meta_kubernetes_service_name] + action: replace + target_label: kubernetes_name + + #---------------- configuration to collect metrics from kubernetes apiserver ------------------------- + - job_name: 'kubernetes-apiservers' + honor_labels: true + kubernetes_sd_configs: + - role: endpoints + # kubernetes apiserver serve metrics on a TLS secure endpoints. 
so, we have to use "https" scheme + scheme: https + # we have to provide certificate to establish tls secure connection + tls_config: + ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt + # bearer_token_file is required for authorizating prometheus server to kubernetes apiserver + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + + relabel_configs: + - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] + action: keep + regex: default;kubernetes;https + + #--------------- configuration to collect metrics from nodes ----------------------- + - job_name: 'kubernetes-nodes' + honor_labels: true + scheme: https + tls_config: + ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + + kubernetes_sd_configs: + - role: node + relabel_configs: + - action: labelmap + regex: __meta_kubernetes_node_label_(.+) + - target_label: __address__ + replacement: kubernetes.default.svc:443 + - source_labels: [__meta_kubernetes_node_name] + regex: (.+) + target_label: __metrics_path__ + replacement: /api/v1/nodes/${1}/proxy/metrics +{{- end }} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/prometheus-deployment.yaml b/helm/capif/templates/prometheus-deployment.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d70cf09ddd5b37061b73992076fccc318e0fb208 --- /dev/null +++ b/helm/capif/templates/prometheus-deployment.yaml @@ -0,0 +1,68 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if eq .Values.monitoring.prometheus.enable "true" }} +apiVersion: apps/v1 +kind: Deployment +metadata: + name: prometheus + labels: + app: prometheus + {{- include "capif.labels" . | nindent 4 }} +spec: + replicas: 1 + strategy: + type: Recreate + selector: + matchLabels: + app: prometheus + {{- include "capif.selectorLabels" . | nindent 6 }} + template: + metadata: + annotations: + checksum/config: {{ include (print $.Template.BasePath "/prometheus-configmap.yaml") . | sha256sum }} + labels: + app: prometheus + {{- include "capif.selectorLabels" . 
| nindent 8 }} + spec: + serviceAccountName: prometheus + containers: + - name: prometheus + image: {{ .Values.monitoring.prometheus.image.repository }}:{{ .Values.monitoring.prometheus.image.tag }} + args: + - "--config.file=/etc/prometheus/prometheus.yml" + - "--storage.tsdb.path=/prometheus/" + - "--storage.tsdb.retention.time={{.Values.monitoring.prometheus.retentionTime }}" + ports: + - containerPort: 9090 + resources: + {{- toYaml .Values.monitoring.prometheus.resources | nindent 12 }} + securityContext: + runAsUser: 0 + livenessProbe: + tcpSocket: + port: 9090 + initialDelaySeconds: 20 + volumeMounts: + - name: prometheus-config + mountPath: /etc/prometheus/ + {{- if eq .Values.monitoring.prometheus.persistence.enable "true" }} + - name: prometheus-storage-volume + mountPath: /prometheus/ + {{ else }} + - name: prometheus-storage + mountPath: /prometheus/ + {{- end }} + volumes: + - name: prometheus-config + configMap: + defaultMode: 420 + name: prometheus-config + {{- if eq .Values.monitoring.prometheus.persistence.enable "true" }} + - name: prometheus-storage-volume + persistentVolumeClaim: + claimName: prometheus-pvc + {{ else }} + - name: prometheus-storage + emptyDir: {} + {{- end }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/prometheus-ingress-route.yaml b/helm/capif/templates/prometheus-ingress-route.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b7a0d2b0e02ebbb80c5e4a53aa9240066759f581 --- /dev/null +++ b/helm/capif/templates/prometheus-ingress-route.yaml @@ -0,0 +1,20 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if eq .Values.monitoring.prometheus.enable "true" }} +{{- if eq .Values.monitoring.prometheus.ingressRoute.enable "true" }} +apiVersion: traefik.containo.us/v1alpha1 +kind: IngressRoute +metadata: + name: prometheus-ingress-route +spec: + entryPoints: [web] + routes: + - kind: Rule + match: Host(`{{ .Values.monitoring.prometheus.ingressRoute.host }}`) + services: + - kind: Service + name: prometheus + port: {{ .Values.monitoring.prometheus.service.port }} + scheme: http +{{- end }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/prometheus-ingress.yaml b/helm/capif/templates/prometheus-ingress.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d08297327eac7192db7618a5937e2af6e5f05d1d --- /dev/null +++ b/helm/capif/templates/prometheus-ingress.yaml @@ -0,0 +1,36 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if eq .Values.monitoring.prometheus.enable "true" }} +{{- if .Values.monitoring.prometheus.ingress.enabled -}} +{{- $svcPort := .Values.monitoring.prometheus.service.port -}} +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: prometheus-ingress + labels: + {{- include "capif.labels" . | nindent 4 }} + {{- with .Values.monitoring.prometheus.ingress.annotations }} + annotations: + {{- toYaml . 
| nindent 4 }} + {{- end }} +spec: +{{- if .Values.monitoring.prometheus.ingress.ingressClassName }} + ingressClassName: {{ .Values.monitoring.prometheus.ingress.ingressClassName }} +{{- end }} + rules: + {{- range .Values.monitoring.prometheus.ingress.hosts }} + - host: {{ .host | quote }} + http: + paths: + {{- range .paths }} + - path: {{ .path }} + pathType: {{ .pathType }} + backend: + service: + name: prometheus + port: + number: {{ $svcPort }} + {{- end }} + {{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/helm/capif/templates/prometheus-pvc.yaml b/helm/capif/templates/prometheus-pvc.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0ba676f3e817a25928e79c024021f62663e05de5 --- /dev/null +++ b/helm/capif/templates/prometheus-pvc.yaml @@ -0,0 +1,19 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if eq .Values.monitoring.prometheus.enable "true" }} +{{- if eq .Values.monitoring.prometheus.persistence.enable "true" }} +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: prometheus-pvc + labels: + app: prometheus + {{- include "capif.labels" . | nindent 4 }} +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: {{ .Values.monitoring.prometheus.persistence.storage }} +{{- end }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/prometheus-service.yaml b/helm/capif/templates/prometheus-service.yaml new file mode 100644 index 0000000000000000000000000000000000000000..778dbd580e55f200bb735b5605f0e00ba4ef1d94 --- /dev/null +++ b/helm/capif/templates/prometheus-service.yaml @@ -0,0 +1,22 @@ +{{- if eq .Values.monitoring.enable "true" }} +{{- if eq .Values.monitoring.prometheus.enable "true" }} +apiVersion: v1 +kind: Service +metadata: + annotations: + prometheus.io/path: /metrics + prometheus.io/port: {{ quote .Values.monitoring.prometheus.service.port }} + prometheus.io/scrape: "true" + name: prometheus + labels: + {{- include "capif.labels" . | nindent 4 }} +spec: + type: {{ .Values.monitoring.prometheus.service.type }} + ports: + - port: {{ .Values.monitoring.prometheus.service.port }} + protocol: TCP + targetPort: {{ .Values.monitoring.prometheus.service.port }} + selector: + app: prometheus +{{- end }} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/published-apis.yaml b/helm/capif/templates/published-apis.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a5444f1640bf02b113d3bfd967bcde78122efcf2 --- /dev/null +++ b/helm/capif/templates/published-apis.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: published-apis + labels: + io.kompose.service: published-apis + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.publishedApis.type }} + selector: + io.kompose.service: published-apis + {{- include "capif.selectorLabels" . | nindent 4 }} + ports: + {{- .Values.publishedApis.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/redis.yaml b/helm/capif/templates/redis.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3254a95f48f9b36be9de55d7f0c1d08df2ba5f73 --- /dev/null +++ b/helm/capif/templates/redis.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: redis + labels: + io.kompose.service: redis + {{- include "capif.labels" . 
| nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.redis.type }} + selector: + io.kompose.service: redis + {{- include "capif.selectorLabels" . | nindent 4 }} + ports: + {{- .Values.redis.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/templates/register-configmap.yaml b/helm/capif/templates/register-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..51293a5b7824d6688d9545d600c5571f9db4d55d --- /dev/null +++ b/helm/capif/templates/register-configmap.yaml @@ -0,0 +1,21 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: register-configmap + labels: + {{- include "capif.labels" . | nindent 4 }} +data: + config.yaml: |- + mongo: { + 'user': 'root', + 'password': 'example', + 'db': 'capif_users', + 'col': 'user', + 'host': '{{ .Values.register.register.env.mongoHost }}', + 'port': '{{ .Values.register.register.env.mongoPort }}' + } + ca_factory: { + "url": "{{ .Values.parametersVault.env.vaultHostname }}", + "port": "{{ .Values.parametersVault.env.vaultPort }}", + "token": "{{ .Values.parametersVault.env.vaultAccessToken }}" + } \ No newline at end of file diff --git a/helm/capif/templates/register.yaml b/helm/capif/templates/register.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2de1d64248110745b5999c1e50e1b5801166b709 --- /dev/null +++ b/helm/capif/templates/register.yaml @@ -0,0 +1,19 @@ +{{- if eq .Values.register.enable "true" }} +apiVersion: v1 +kind: Service +metadata: + name: register + labels: + io.kompose.service: register + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.register.type }} + selector: + io.kompose.service: register + {{- include "capif.selectorLabels" . | nindent 4 }} + ports: + {{- .Values.register.ports | toYaml | nindent 2 -}} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/renderer-configmap.yaml b/helm/capif/templates/renderer-configmap.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0159fcbc02450bc9a3eb9522e11e11c42656977e --- /dev/null +++ b/helm/capif/templates/renderer-configmap.yaml @@ -0,0 +1,8 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: renderer-configmap +data: + ENABLE_METRICS: {{ quote .Values.monitoring.renderer.env.enableMetrics }} +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/renderer-deployment.yaml b/helm/capif/templates/renderer-deployment.yaml new file mode 100644 index 0000000000000000000000000000000000000000..83a7ee2f73cb0c5254a3d75e1673a4e22d42dfab --- /dev/null +++ b/helm/capif/templates/renderer-deployment.yaml @@ -0,0 +1,44 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + labels: + io.kompose.service: renderer + {{- include "capif.labels" . | nindent 4 }} + name: renderer +spec: + replicas: 1 + selector: + matchLabels: + io.kompose.service: renderer + {{- include "capif.selectorLabels" . 
| nindent 6 }} + strategy: {} + template: + metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + checksum/config: {{ include (print $.Template.BasePath "/renderer-configmap.yaml") . | sha256sum }} + labels: + io.kompose.network/monitoring-default: "true" + io.kompose.service: renderer + {{- include "capif.selectorLabels" . | nindent 8 }} + spec: + containers: + - env: + - name: ENABLE_METRICS + valueFrom: + configMapKeyRef: + name: renderer-configmap + key: ENABLE_METRICS + image: {{ .Values.monitoring.renderer.image.repository }}:{{ .Values.monitoring.renderer.image.tag }} + name: grafana-image-renderer + ports: + - containerPort: 8081 + resources: + {{- toYaml .Values.monitoring.renderer.resources | nindent 12 }} + restartPolicy: Always +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/renderer-service.yaml b/helm/capif/templates/renderer-service.yaml new file mode 100644 index 0000000000000000000000000000000000000000..471a51dca046b1d35fcf0afd68a500a48db62cdc --- /dev/null +++ b/helm/capif/templates/renderer-service.yaml @@ -0,0 +1,18 @@ +{{- if eq .Values.monitoring.enable "true" }} +apiVersion: v1 +kind: Service +metadata: + annotations: + kompose.cmd: kompose -f docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) + labels: + io.kompose.service: renderer + name: renderer +spec: + ports: + - name: "renderer-port" + port: 8081 + targetPort: 8081 + selector: + io.kompose.service: renderer +{{- end }} \ No newline at end of file diff --git a/helm/capif/templates/service-apis.yaml b/helm/capif/templates/service-apis.yaml new file mode 100644 index 0000000000000000000000000000000000000000..bff1af594b5ceba6a7ba58a78103fb1b885b43ea --- /dev/null +++ b/helm/capif/templates/service-apis.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: service-apis + labels: + io.kompose.service: service-apis + {{- include "capif.labels" . | nindent 4 }} + annotations: + kompose.cmd: kompose -f ../services/docker-compose.yml convert + kompose.version: 1.28.0 (c4137012e) +spec: + type: {{ .Values.serviceApis.type }} + selector: + io.kompose.service: service-apis + {{- include "capif.selectorLabels" . | nindent 4 }} + ports: + {{- .Values.serviceApis.ports | toYaml | nindent 2 -}} \ No newline at end of file diff --git a/helm/capif/values.yaml b/helm/capif/values.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a2aea33c43517bce909dad4d8133e6846fe94873 --- /dev/null +++ b/helm/capif/values.yaml @@ -0,0 +1,660 @@ +# -- The environment variable. Use "openshift" if you are deploying in an OpenShift cluster; otherwise leave the field empty +env: "" + +# Use the IP address assigned by Kubernetes to your Ingress Controller, e.g.: kubectl -n NAMESPACE_CAPIF get ing +ingress: + ip: "10.17.173.127" + +monitoring: + enable: "true" + +accessControlPolicy: + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/access-control-policy" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + # -- If env.monitoring: true. Setup monitoring.enable: true + env: + monitoring: "true" + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + replicas: 1 + type: ClusterIP + +CapifClient: + # -- If the CAPIF client is enabled. 
+ enable: "true" + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/client" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + replicas: 1 + type: ClusterIP + +apiInvocationLogs: + apiInvocationLogs: + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/api-invocation-logs-api" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + # -- If env.monitoring: true. Setup monitoring.enable: true + env: + monitoring: "true" + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + replicas: 1 + type: ClusterIP +apiInvokerManagement: + apiInvokerManagement: + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/api-invoker-management-api" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + # -- If env.monitoring: true. Setup monitoring.enable: true + env: + monitoring: "true" + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + replicas: 1 + type: ClusterIP +apiProviderManagement: + apiProviderManagement: + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/api-provider-management-api" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + # -- If env.monitoring: true. Setup monitoring.enable: true + env: + monitoring: "true" + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + replicas: 1 + type: ClusterIP +capifEvents: + capifEvents: + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/events-api" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + # -- If env.monitoring: true. Setup monitoring.enable: true + env: + monitoring: "true" + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + replicas: 1 + type: ClusterIP +capifRoutingInfo: + capifRoutingInfo: + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/routing-info-api" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + # -- If env.monitoring: true. 
Setup monitoring.enable: true + env: + monitoring: "true" + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + replicas: 1 + type: ClusterIP +capifSecurity: + capifSecurity: + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/security-api" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + # -- If env.monitoring: true. Setup monitoring.enable: true + env: + monitoring: "true" + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + replicas: 1 + type: ClusterIP +register: + # -- If register enabled. enable: true, enable: "" = not enabled + enable: "true" + register: + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/register" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + env: + mongoHost: mongo-register + mongoPort: 27017 + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8080" + port: 8084 + targetPort: 8080 + replicas: 1 + type: ClusterIP +mongoRegister: + mongo: + env: + # User's password MongoDB + mongoInitdbRootPassword: example + # Name of User's mongodb + mongoInitdbRootUsername: root + image: + # -- The docker image repository to use + repository: "mongo" + # -- The docker image tag to use + # @default Chart version + tag: "6.0.2" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + resources: {} +# limits: +# cpu: 100m +# memory: 128Mi +# requests: +# cpu: 100m +# memory: 128Mi + ports: + - name: "27017" + port: 27017 + targetPort: 27017 + replicas: 1 + type: ClusterIP + +kubernetesClusterDomain: cluster.local +logs: + # -- If register enabled. enable: true, enable: "" = not enabled + enable: "true" + logs: + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/auditing-api" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + # -- If env.monitoring: true. Setup monitoring.enable: true + env: + monitoring: "true" + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + replicas: 1 + type: ClusterIP +mongo: + mongo: + env: + # User's password MongoDB + mongoInitdbRootPassword: example + # Name of User's mongodb + mongoInitdbRootUsername: root + image: + # -- The docker image repository to use + repository: "mongo" + # -- The docker image tag to use + # @default Chart version + tag: "6.0.2" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + resources: {} +# limits: +# cpu: 100m +# memory: 128Mi +# requests: +# cpu: 100m +# memory: 128Mi + ports: + - name: "27017" + port: 27017 + targetPort: 27017 + replicas: 1 + type: ClusterIP + # -- If mongo.persistence enabled. 
enable: true, enable: "" = not enabled + persistence: + enable: "true" + storage: 8Gi +mongoExpress: + mongoExpress: + env: + # User's password MongoDB + meConfigMongodbAdminpassword: example + # Name of User's mongodb + meConfigMongodbAdminusername: root + # URI for connecting MongoDB + meConfigMongodbUrl: mongodb://root:example@mongo:27017/ + image: + # -- The docker image repository to use + repository: "mongo-express" + # -- The docker image tag to use + # @default Chart version + tag: "1.0.0-alpha.4" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8082" + port: 8082 + targetPort: 8081 + replicas: 1 + type: ClusterIP +nginx: + # -- if nginx.ingressType: "Ingress". set up monitoring.prometheus.ingress: true + # and monitoring.grafana.ingress: true + # Use IngressRoute if you want to use Gateway API. ex traefix + ingressType: "Ingress" + ingressClassName: nginx + annotations: + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: "true" + nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" + nginx.ingress.kubernetes.io/ssl-passthrough: "true" + nginx.ingress.kubernetes.io/ssl-redirect: "true" + nginx: + env: + # -- Ingress's host to Capif + capifHostname: "my-capif.apps.ocp-epg.hi.inet" + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/nginx" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + - name: "443" + port: 443 + targetPort: 443 + replicas: 1 + type: ClusterIP +publishedApis: + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + publishedApis: + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/publish-service-api" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + # -- If env.monitoring: true. Setup monitoring.enable: true + env: + monitoring: "true" + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + replicas: 1 + type: ClusterIP +redis: + ports: + - name: "6379" + port: 6379 + targetPort: 6379 + redis: + env: + # Mode of replication + redisReplicationMode: master + image: + # -- The docker image repository to use + repository: "redis" + # -- The docker image tag to use + # @default Chart version + tag: "alpine" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + replicas: 1 + type: ClusterIP +serviceApis: + ports: + - name: "8080" + port: 8080 + targetPort: 8080 + replicas: 1 + serviceApis: + image: + # -- The docker image repository to use + repository: "public.ecr.aws/o2v4a8t6/opencapif/discover-service-api" + # -- The docker image tag to use + # @default Chart version + tag: "" + # -- Image pull policy: Always, IfNotPresent + imagePullPolicy: Always + # -- If env.monitoring: true. 
Setup monitoring.enable: true + env: + monitoring: "true" + resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + type: ClusterIP +parametersVault: + env: + vaultHostname: vault-internal.mon.svc.cluster.local + vaultPort: 8200 + vaultAccessToken: dev-only-token +# -- With tempo.enabled: false. It won't be deployed +# -- If monitoring.enable: "true". Also enable tempo.enabled: true +tempo: + enabled: true + tempo: + metricsGenerator: + enabled: true + remoteWriteUrl: "http://prometheus.mon.svc.cluster.local:9090/api/v1/write" + persistence: + enabled: true + size: 3Gi +monitoring: + # -- If monitoring enabled. enable: true, enable: "" = not enabled + enable: "true" + fluentBit: + image: + # -- The docker image repository to use + repository: "grafana/fluent-bit-plugin-loki" + # -- The docker image tag to use + # @default Chart version + tag: "latest" + env: + lokiUrl: http://loki:3100/loki/api/v1/push + resources: {} + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + loki: + image: + # -- The docker image repository to use + repository: "grafana/loki" + # -- The docker image tag to use + # @default Chart version + tag: "2.8.0" + resources: {} + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + # -- If grafana.persistence enabled. enable: true, enable: "" = not enabled + persistence: + enable: "true" + storage: 100Mi + otel: + image: + # -- The docker image repository to use + repository: "otel/opentelemetry-collector" + # -- The docker image tag to use + # @default Chart version + tag: "0.81.0" + configMap: + tempoEndpoint: monitoring-capif-tempo:4317 + resources: {} + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + renderer: + image: + # -- The docker image repository to use + repository: "grafana/grafana-image-renderer" + # -- The docker image tag to use + # @default Chart version + tag: "latest" + env: + enableMetrics: "true" + resources: {} + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + prometheus: + # -- With prometheus.enabled: "". It won't be deployed. 
prometheus.enable: "true" + # -- It will deploy prometheus + enable: "true" + image: + # -- The docker image repository to use + repository: "prom/prometheus" + # -- The docker image tag to use + # @default Chart version + tag: "latest" + retentionTime: 5d + resources: {} + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + persistence: + enable: "true" + storage: 8Gi + service: + type: ClusterIP + port: 9090 + ingress: + enabled: true + ingressClassName: nginx + annotations: + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: "true" + hosts: + - host: prometheus.5gnacar.int + paths: + - path: / + pathType: Prefix + tls: [] + # - secretName: chart-example-tls + # hosts: + # - chart-example.local + # -- If ingressRoute enable=true, use monitoring.prometheus.ingress.enabled="" + ingressRoute: + enable: "" + host: prometheus.5gnacar.int + grafana: + image: + # -- The docker image repository to use + repository: "grafana/grafana" + # -- The docker image tag to use + # @default Chart version + tag: "latest" + env: + gfAuthAnonymousEnable: true + gfSecurityAllowEmbedding: true + gfAuthAnonymousOrgRole: Admin + gfSecurityAdminPassword: secure_pass + lokiUrl: http://loki:3100 + prometheusUrl: http://prometheus.mon.svc.cluster.local:9090 + tempoUrl: http://monitoring-capif-tempo:3100 + resources: {} + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + # -- If grafana.persistence enabled. enable: true, enable: "" = not enabled + persistence: + enable: "true" + storage: 100Mi + service: + type: ClusterIP + port: 3000 + # -- If ingress enabled=true, use monitoring.grafana.ingressRoute.enable="" + ingress: + enabled: true + ingressClassName: nginx + annotations: + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: "true" + hosts: + - host: grafana.5gnacar.int + paths: + - path: / + pathType: Prefix + tls: [] + # - secretName: chart-example-tls + # hosts: + # - chart-example.local + # -- If ingressRoute enable=true, use monitoring.grafana.ingress.enabled="" + ingressRoute: + enable: "" + host: grafana.5gnacar.int diff --git a/helm/helmfile-capif.yaml b/helm/helmfile-capif.yaml new file mode 100644 index 0000000000000000000000000000000000000000..610e64a4d19b92696fa23bc3ecf9dd698546a5b5 --- /dev/null +++ b/helm/helmfile-capif.yaml @@ -0,0 +1,19 @@ +# helm upgrade --install -n mon monitoring-capif capif/ --set nginx.nginx.env.capifHostname=mon-capif.monitoring.int \ +# --set ingress_ip.oneke="10.17.173.127" --set env=oneke --atomic +helmDefaults: + createNamespace: true + timeout: 600 +releases: + - name: monitoring-capif + chart: ./capif/ + namespace: monitoring +# atomic: true + wait: true + values: + - ./capif/values.yaml + - nginx: + nginx: + env: + capifHostname: monitoring-capif.monitoring.int + - ingress: + ip: "10.17.173.127" \ No newline at end of file diff --git a/helm/vault-job/vault-job.yaml b/helm/vault-job/vault-job.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6e0e9ce696cfa6ab1d5ae023e405c87bddc5a836 --- /dev/null +++ b/helm/vault-job/vault-job.yaml @@ -0,0 +1,251 @@ +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: vault-prepare-certs + namespace: mon + labels: + io.kompose.service: api-invocation-logs + app: capif + app.kubernetes.io/name: capif + app.kubernetes.io/instance: capif +data: + vault-prepare-certs.sh: |- + #!/bin/sh + + echo "install dependencies" + apk add --no-cache jq openssl + + # Establecer las variables de entorno de Vault + + 
export VAULT_ADDR='http://vault-internal:8200'
+
+    # In standalone mode, use the root token or a token with sufficient
+    # permissions to run the following Vault commands.
+    # If Vault runs in dev mode, just use the dev token.
+    export VAULT_TOKEN="dev-only-token"
+    export DOMAIN1=capif.mobilesandbox.cloud
+
+    vault secrets enable pki
+
+    echo "# Generate a root CA in Vault #"
+    vault secrets tune -max-lease-ttl=87600h pki
+
+    vault write -field=certificate pki/root/generate/internal \
+        common_name="capif" \
+        issuer_name="root-2023" \
+        ttl=87600h > root_2023_ca.crt
+
+    echo "# check root_2023_ca.crt #"
+    cat root_2023_ca.crt
+
+    vault write pki/config/urls \
+        issuing_certificates="$VAULT_ADDR/v1/pki/ca" \
+        crl_distribution_points="$VAULT_ADDR/v1/pki/crl"
+
+    # Generate an intermediate CA in Vault
+    vault secrets enable -path=pki_int pki
+
+    vault secrets tune -max-lease-ttl=43800h pki_int
+
+    vault write -format=json pki_int/intermediate/generate/internal \
+        common_name="capif Intermediate Authority" \
+        issuer_name="capif-intermediate" \
+        | jq -r '.data.csr' > pki_intermediate.csr
+
+    echo "### content pki_intermediate.csr ###"
+    cat pki_intermediate.csr
+
+    # Sign the intermediate CA with the root CA
+    vault write -format=json pki/root/sign-intermediate \
+        issuer_ref="root-2023" \
+        csr=@pki_intermediate.csr \
+        format=pem_bundle ttl="43800h" \
+        | jq -r '.data.certificate' > capif_intermediate.cert.pem
+
+    # Configure the intermediate CA in Vault
+    vault write pki_int/intermediate/set-signed certificate=@capif_intermediate.cert.pem
+
+    # Create a role in Vault
+    vault write pki_int/roles/my-ca use_csr_common_name=true require_cn=false allowed_domains="*" allow_any_name=true allow_bare_domains=true allow_glob_domains=true allow_subdomains=true max_ttl=4300h ttl=4300h
+
+    # Issue a certificate signed by the intermediate CA
+    # vault write -format=json pki_int/issue/my-ca \
+    #     common_name="nginx.mon.svc.cluster.local" \
+    #     format=pem_bundle ttl="438h" \
+    #     | jq -r '.data.certificate' > ccf_cert.crt.pem \
+    #     && jq -r '.data.issuing_ca' > root_ca.crt.pem \
+    #     && jq -r '.data.private_key' > private_key.pem
+
+    # vault write -format=json pki_int/issue/my-ca \
+    #     common_name="nginx.mon.svc.cluster.local" \
+    #     format=pem_bundle ttl="438h" \
+    #     | jq -r '.data.private_key as $private_key | .data.issuing_ca as $issuing_ca | .data.certificate as $certificate | [$private_key, $issuing_ca, $certificate]' > cert_data.json
+
+
+    # Create CSR
+    openssl genrsa -out ./server.key 2048
+
+    cat > ./foo.cnf < cert_data.json
+
+    vault write -format=json pki_int/sign/my-ca format=pem_bundle ttl="43000h" csr=@server.csr | jq -r '.data.issuing_ca as $issuing_ca | .data.certificate as $certificate | [$issuing_ca, $certificate]' > cert_data.json
+
+    jq -r '.[0]' cert_data.json > root_ca.crt.pem
+    echo "### content root_ca.crt.pem ###"
+    cat root_ca.crt.pem
+
+    echo "### content server_certificate.crt.pem ###"
+    jq -r '.[1]' cert_data.json > server_certificate.crt.pem
+
+    openssl x509 -pubkey -noout -in server_certificate.crt.pem > server_certificate_pub.pem
+
+    #vault kv put secret/ca ca=@root_ca.crt.pem root_2023_ca.crt
+
+    #cat root_2023_ca.crt root_2023_ca.crt > ca.crt
+
+    cat > certificados_concatenados.crt << EOF
+    $(cat "root_2023_ca.crt")
+    $(cat "root_ca.crt.pem")
+    EOF
+    echo "### content of root_2023_ca.crt ###"
+    cat root_2023_ca.crt
+
+    echo "### content of root_ca.crt.pem ###"
+    cat root_ca.crt.pem
+
+    echo "### content of certificados_concatenados.crt ###"
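+    # Optionally, the server certificate issued above can be checked against the
+    # concatenated CA bundle before it is stored, e.g.:
+    #   openssl verify -CAfile certificados_concatenados.crt server_certificate.crt.pem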
+ cat certificados_concatenados.crt + + # vault kv put secret/ca ca=@root_2023_ca.crt + + echo "### enable secrets kv ###" + vault secrets enable -path=secret -version=2 kv + + vault kv put secret/ca ca=@certificados_concatenados.crt + + vault kv put secret/server_cert cert=@server_certificate.crt.pem + + vault kv put secret/server_cert/pub pub_key=@server_certificate_pub.pem + + vault kv put secret/server_cert/private key=@server.key + + #POLICY_NAME="my-policy" + #POLICY_FILE="my-policy.hcl" + #TOKEN_ID="read-ca-token" + + # Crear la política en Vault + #echo "path \"secret/data/ca\" { + # capabilities = [\"read\"] + #}" > "$POLICY_FILE" + + #vault policy write "$POLICY_NAME" "$POLICY_FILE" + + # Generar un nuevo token y asignar la política + #TOKEN=$(vault token create -id="$TOKEN_ID" -policy="$POLICY_NAME" -format=json | jq -r '.auth.client_token') + + #echo "Token generado:" + #echo "$TOKEN" +--- + +apiVersion: batch/v1 +kind: Job +metadata: + name: vault-pki + namespace: mon + labels: + io.kompose.service: vault-pki + app: capif + app.kubernetes.io/name: capif + app.kubernetes.io/instance: capif +spec: + template: + spec: + containers: + - name: vault-pki + image: docker.io/hashicorp/vault:1.15.1 + command: ["./vault-prepare-certs.sh"] + volumeMounts: + - name: vault-prepare-certs + mountPath: vault-prepare-certs.sh + subPath: vault-prepare-certs.sh + restartPolicy: Never + volumes: + - name: vault-prepare-certs + configMap: + name: vault-prepare-certs + defaultMode: 0777 + items: + - key: "vault-prepare-certs.sh" + path: "vault-prepare-certs.sh" + backoffLimit: 4 diff --git a/monitoring/docker-compose.yml b/monitoring/docker-compose.yml new file mode 100644 index 0000000000000000000000000000000000000000..41f647f0fd0371f207903d72ff41b5fcac598592 --- /dev/null +++ b/monitoring/docker-compose.yml @@ -0,0 +1,110 @@ +version: '3' +services: + prometheus: + image: prom/prometheus:latest + container_name: prometheus + user: "${DUID}:${DGID}" + volumes: + - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml + - ./prometheus/prometheus_db:/var/lib/prometheus + - ./prometheus/prometheus_db:/prometheus + - ./prometheus/prometheus_db:/etc/prometheus + - ./prometheus/alert.rules:/etc/prometheus/alert.rules + command: + - '--config.file=/etc/prometheus/prometheus.yml' + - '--web.route-prefix=/' + - '--storage.tsdb.retention.time=200h' + - '--web.enable-lifecycle' + restart: unless-stopped + ports: + - '9090:9090' + + # cadvisor collects metrics about running containers + cadvisor: + image: gcr.io/cadvisor/cadvisor:v0.47.2 + container_name: cadvisor + ports: + - 8090:8080 + volumes: + - /:/rootfs:ro + - /var/run:/var/run:rw + - /sys:/sys:ro + - /var/lib/docker/:/var/lib/docker:ro + - /var/run/docker.sock:/var/run/docker.sock:rw + + grafana: + image: grafana/grafana + user: "${DUID}:${DGID}" + environment: + - GF_SECURITY_ADMIN_PASSWORD=secure_pass + - GF_PATHS_PROVISIONING=/etc/grafana/provisioning + - GF_AUTH_ANONYMOUS_ENABLED=true + - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin + volumes: + - ./grafana/grafana_config/grafana.ini:/etc/grafana/grafana.ini + - ./grafana/grafana_db:/var/lib/grafana + - ./grafana/grafana_provisioning/grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml + - ./grafana/grafana_provisioning/grafana-default-provisioning.yaml:/etc/grafana/provisioning/dashboards/default.yaml + - ./grafana/grafana_dashboards/Docker-monitoring.json:/var/lib/grafana/dashboards/Docker-monitoring.json + - 
./grafana/grafana_dashboards/Loki-Logs.json:/var/lib/grafana/dashboards/Loki-Logs.json + + depends_on: + - prometheus + ports: + - '3000:3000' + + # loki save and analyze logs + loki: + image: grafana/loki:2.8.0 + ports: + - "3100:3100" + command: -config.file=/etc/loki/local-config.yaml + + # promtail send docker logs to loki + promtail: + image: grafana/promtail:2.8.0 + volumes: + - /var/log:/var/log + command: -config.file=/etc/promtail/config.yml + + # grafana image renderer + renderer: + image: grafana/grafana-image-renderer:latest + container_name: grafana-image-renderer + expose: + - "8081" + environment: + ENABLE_METRICS: "true" + + # fluent-bit send logs to loki + fluent-bit: + image: grafana/fluent-bit-plugin-loki:latest + container_name: fluent-bit + environment: + - LOKI_URL=http://loki:3100/loki/api/v1/push + volumes: + - ./fluent_bit/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf + ports: + - "24224:24224" + - "24224:24224/udp" + + # opentelemetry collector + otel-collector: + image: otel/opentelemetry-collector:latest + ports: + - 55680:55680 + - 4317:4317 + volumes: + - ./otlp_collector/otel-config.yaml:/etc/otel-collector-config.yaml + command: ["--config", "/etc/otel-collector-config.yaml"] + + # tempo is a distributed tracing backend + tempo: + image: grafana/tempo:latest + command: [ "-config.file=/etc/tempo.yaml" ] + volumes: + - ./tempo/tempo.yaml:/etc/tempo.yaml + - ./tempo/tempo-data:/tmp/tempo + ports: + - 3102:3100 + diff --git a/monitoring/fluent_bit/fluent-bit.conf b/monitoring/fluent_bit/fluent-bit.conf new file mode 100644 index 0000000000000000000000000000000000000000..ab11a28efe9fa933c85b3ba8f47422219f3239e1 --- /dev/null +++ b/monitoring/fluent_bit/fluent-bit.conf @@ -0,0 +1,15 @@ +[INPUT] + Name forward + Listen 0.0.0.0 + Port 24224 +[Output] + Name grafana-loki + Match * + Url ${LOKI_URL} + RemoveKeys source + Labels {job="fluent-bit"} + LabelKeys container_name, traceID + BatchWait 1s + BatchSize 1001024 + LineFormat json + LogLevel info \ No newline at end of file diff --git a/monitoring/grafana/grafana_config/grafana.ini b/monitoring/grafana/grafana_config/grafana.ini new file mode 100644 index 0000000000000000000000000000000000000000..c69fa95ada4d314f8a95fcfda8b53143a19572ed --- /dev/null +++ b/monitoring/grafana/grafana_config/grafana.ini @@ -0,0 +1,1894 @@ +##################### Grafana Configuration Example ##################### +# +# Everything has defaults so you only need to uncomment things you want to +# change + +# possible values : production, development +;app_mode = production + +# instance name, defaults to HOSTNAME environment variable value or hostname if HOSTNAME var is empty +;instance_name = ${HOSTNAME} + +# force migration will run migrations that might cause dataloss +;force_migration = false + +#################################### Paths #################################### +[paths] +# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used) +;data = /var/lib/grafana/dashboards + +# Temporary files in `data` directory older than given duration will be removed +;temp_data_lifetime = 24h + +# Directory where grafana can store logs +;logs = /var/log/grafana + +# Directory where grafana will automatically scan and look for plugins +;plugins = /var/lib/grafana/plugins + +# folder that contains provisioning config files that grafana will apply on startup and while running. 
+;provisioning = conf/provisioning + +#################################### Server #################################### +[server] +# Protocol (http, https, h2, socket) +;protocol = http + +# This is the minimum TLS version allowed. By default, this value is empty. Accepted values are: TLS1.2, TLS1.3. If nothing is set TLS1.2 would be taken +;min_tls_version = "" + +# The ip address to bind to, empty will bind to all interfaces +;http_addr = + +# The http port to use +;http_port = 3000 + +# The public facing domain name used to access grafana from a browser +;domain = localhost + +# Redirect to correct domain if host header does not match domain +# Prevents DNS rebinding attacks +;enforce_domain = false + +# The full public facing url you use in browser, used for redirects and emails +# If you use reverse proxy and sub path specify full url (with sub path) +;root_url = %(protocol)s://%(domain)s:%(http_port)s/ + +# Serve Grafana from subpath specified in `root_url` setting. By default it is set to `false` for compatibility reasons. +;serve_from_sub_path = false + +# Log web requests +;router_logging = false + +# the path relative working path +;static_root_path = public + +# enable gzip +;enable_gzip = false + +# https certs & key file +;cert_file = +;cert_key = + +# Unix socket gid +# Changing the gid of a file without privileges requires that the target group is in the group of the process and that the process is the file owner +# It is recommended to set the gid as http server user gid +# Not set when the value is -1 +;socket_gid = + +# Unix socket mode +;socket_mode = + +# Unix socket path +;socket = + +# CDN Url +;cdn_url = + +# Sets the maximum time using a duration format (5s/5m/5ms) before timing out read of an incoming request and closing idle connections. +# `0` means there is no timeout for reading the request. +;read_timeout = 0 + +# This setting enables you to specify additional headers that the server adds to HTTP(S) responses. +[server.custom_response_headers] +#exampleHeader1 = exampleValue1 +#exampleHeader2 = exampleValue2 + +#################################### GRPC Server ######################### +;[grpc_server] +;network = "tcp" +;address = "127.0.0.1:10000" +;use_tls = false +;cert_file = +;key_file = + +#################################### Database #################################### +[database] +# You can configure the database connection by specifying type, host, name, user and password +# as separate properties or as on string using the url properties. + +# Either "mysql", "postgres" or "sqlite3", it's your choice +;type = sqlite3 +;host = 127.0.0.1:3306 +;name = grafana +;user = root +# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;""" +;password = + +# Use either URL or the previous fields to configure the database +# Example: mysql://user:secret@host:port/database +;url = + +# For "postgres", use either "disable", "require" or "verify-full" +# For "mysql", use either "true", "false", or "skip-verify". +;ssl_mode = disable + +# Database drivers may support different transaction isolation levels. +# Currently, only "mysql" driver supports isolation levels. +# If the value is empty - driver's default isolation level is applied. +# For "mysql" use "READ-UNCOMMITTED", "READ-COMMITTED", "REPEATABLE-READ" or "SERIALIZABLE". 
+;isolation_level = + +;ca_cert_path = +;client_key_path = +;client_cert_path = +;server_cert_name = + +# For "sqlite3" only, path relative to data_path setting +;path = grafana.db + +# Max idle conn setting default is 2 +;max_idle_conn = 2 + +# Max conn setting default is 0 (mean not set) +;max_open_conn = + +# Connection Max Lifetime default is 14400 (means 14400 seconds or 4 hours) +;conn_max_lifetime = 14400 + +# Set to true to log the sql calls and execution times. +;log_queries = + +# For "sqlite3" only. cache mode setting used for connecting to the database. (private, shared) +;cache_mode = private + +# For "sqlite3" only. Enable/disable Write-Ahead Logging, https://sqlite.org/wal.html. Default is false. +;wal = false + +# For "mysql" only if migrationLocking feature toggle is set. How many seconds to wait before failing to lock the database for the migrations, default is 0. +;locking_attempt_timeout_sec = 0 + +# For "sqlite" only. How many times to retry query in case of database is locked failures. Default is 0 (disabled). +;query_retries = 0 + +# For "sqlite" only. How many times to retry transaction in case of database is locked failures. Default is 5. +;transaction_retries = 5 + +# Set to true to add metrics and tracing for database queries. +;instrument_queries = false + +################################### Data sources ######################### +[datasources] +# Upper limit of data sources that Grafana will return. This limit is a temporary configuration and it will be deprecated when pagination will be introduced on the list data sources API. +;datasource_limit = 5000 + +#################################### Cache server ############################# +[remote_cache] +# Either "redis", "memcached" or "database" default is "database" +;type = database + +# cache connectionstring options +# database: will use Grafana primary database. +# redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=0,ssl=false`. Only addr is required. ssl may be 'true', 'false', or 'insecure'. +# memcache: 127.0.0.1:11211 +;connstr = + +# prefix prepended to all the keys in the remote cache +; prefix = + +# This enables encryption of values stored in the remote cache +;encryption = + +#################################### Data proxy ########################### +[dataproxy] + +# This enables data proxy logging, default is false +;logging = false + +# How long the data proxy waits to read the headers of the response before timing out, default is 30 seconds. +# This setting also applies to core backend HTTP data sources where query requests use an HTTP client with timeout set. +;timeout = 30 + +# How long the data proxy waits to establish a TCP connection before timing out, default is 10 seconds. +;dialTimeout = 10 + +# How many seconds the data proxy waits before sending a keepalive probe request. +;keep_alive_seconds = 30 + +# How many seconds the data proxy waits for a successful TLS Handshake before timing out. +;tls_handshake_timeout_seconds = 10 + +# How many seconds the data proxy will wait for a server's first response headers after +# fully writing the request headers if the request has an "Expect: 100-continue" +# header. A value of 0 will result in the body being sent immediately, without +# waiting for the server to approve. +;expect_continue_timeout_seconds = 1 + +# Optionally limits the total number of connections per host, including connections in the dialing, +# active, and idle states. On limit violation, dials will block. +# A value of zero (0) means no limit. 
+;max_conns_per_host = 0 + +# The maximum number of idle connections that Grafana will keep alive. +;max_idle_connections = 100 + +# How many seconds the data proxy keeps an idle connection open before timing out. +;idle_conn_timeout_seconds = 90 + +# If enabled and user is not anonymous, data proxy will add X-Grafana-User header with username into the request, default is false. +;send_user_header = false + +# Limit the amount of bytes that will be read/accepted from responses of outgoing HTTP requests. +;response_limit = 0 + +# Limits the number of rows that Grafana will process from SQL data sources. +;row_limit = 1000000 + +# Sets a custom value for the `User-Agent` header for outgoing data proxy requests. If empty, the default value is `Grafana/` (for example `Grafana/9.0.0`). +;user_agent = + +#################################### Analytics #################################### +[analytics] +# Server reporting, sends usage counters to stats.grafana.org every 24 hours. +# No ip addresses are being tracked, only simple counters to track +# running instances, dashboard and error counts. It is very helpful to us. +# Change this option to false to disable reporting. +;reporting_enabled = true + +# The name of the distributor of the Grafana instance. Ex hosted-grafana, grafana-labs +;reporting_distributor = grafana-labs + +# Set to false to disable all checks to https://grafana.com +# for new versions of grafana. The check is used +# in some UI views to notify that a grafana update exists. +# This option does not cause any auto updates, nor send any information +# only a GET request to https://raw.githubusercontent.com/grafana/grafana/main/latest.json to get the latest version. +;check_for_updates = true + +# Set to false to disable all checks to https://grafana.com +# for new versions of plugins. The check is used +# in some UI views to notify that a plugin update exists. +# This option does not cause any auto updates, nor send any information +# only a GET request to https://grafana.com to get the latest versions. +;check_for_plugin_updates = true + +# Google Analytics universal tracking code, only enabled if you specify an id here +;google_analytics_ua_id = + +# Google Analytics 4 tracking code, only enabled if you specify an id here +;google_analytics_4_id = + +# When Google Analytics 4 Enhanced event measurement is enabled, we will try to avoid sending duplicate events and let Google Analytics 4 detect navigation changes, etc. 
+;google_analytics_4_send_manual_page_views = false + +# Google Tag Manager ID, only enabled if you specify an id here +;google_tag_manager_id = + +# Rudderstack write key, enabled only if rudderstack_data_plane_url is also set +;rudderstack_write_key = + +# Rudderstack data plane url, enabled only if rudderstack_write_key is also set +;rudderstack_data_plane_url = + +# Rudderstack SDK url, optional, only valid if rudderstack_write_key and rudderstack_data_plane_url is also set +;rudderstack_sdk_url = + +# Rudderstack Config url, optional, used by Rudderstack SDK to fetch source config +;rudderstack_config_url = + +# Intercom secret, optional, used to hash user_id before passing to Intercom via Rudderstack +;intercom_secret = + +# Controls if the UI contains any links to user feedback forms +;feedback_links_enabled = true + +#################################### Security #################################### +[security] +# disable creation of admin user on first start of grafana +;disable_initial_admin_creation = false + +# default admin user, created on startup +;admin_user = admin + +# default admin password, can be changed before first start of grafana, or in profile settings +;admin_password = admin + +# default admin email, created on startup +;admin_email = admin@localhost + +# used for signing +;secret_key = SW2YcwTIb9zpOOhoPsMm + +# current key provider used for envelope encryption, default to static value specified by secret_key +;encryption_provider = secretKey.v1 + +# list of configured key providers, space separated (Enterprise only): e.g., awskms.v1 azurekv.v1 +;available_encryption_providers = + +# disable gravatar profile images +;disable_gravatar = false + +# data source proxy whitelist (ip_or_domain:port separated by spaces) +;data_source_proxy_whitelist = + +# disable protection against brute force login attempts +;disable_brute_force_login_protection = false + +# set to true if you host Grafana behind HTTPS. default is false. +;cookie_secure = false + +# set cookie SameSite attribute. defaults to `lax`. can be set to "lax", "strict", "none" and "disabled" +;cookie_samesite = lax + +# set to true if you want to allow browsers to render Grafana in a ,