diff --git a/doc/source/locale/id/LC_MESSAGES/doc-devref.po b/doc/source/locale/id/LC_MESSAGES/doc-devref.po new file mode 100644 index 0000000000..3dbf0c7b60 --- /dev/null +++ b/doc/source/locale/id/LC_MESSAGES/doc-devref.po @@ -0,0 +1,1069 @@ +# suhartono , 2018. #zanata +msgid "" +msgstr "" +"Project-Id-Version: openstack-helm\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2018-09-29 05:49+0000\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=UTF-8\n" +"Content-Transfer-Encoding: 8bit\n" +"PO-Revision-Date: 2018-09-24 03:59+0000\n" +"Last-Translator: suhartono \n" +"Language-Team: Indonesian\n" +"Language: id\n" +"X-Generator: Zanata 4.3.3\n" +"Plural-Forms: nplurals=1; plural=0\n" + +msgid "" +"**Note:** The values defined in a PodDisruptionBudget may conflict with " +"other values that have been provided if an operator chooses to leverage " +"Rolling Updates for deployments. In the case where an operator defines a " +"``maxUnavailable`` and ``maxSurge`` within an update strategy that is higher " +"than a ``minAvailable`` within a pod disruption budget, a scenario may occur " +"where pods fail to be evicted from a deployment." +msgstr "" +"**Note:** Nilai yang ditentukan dalam PodDisruptionBudget mungkin " +"bertentangan dengan nilai lain yang telah disediakan jika operator memilih " +"untuk memanfaatkan Pembaruan Bergulir untuk penyebaran. Dalam kasus di mana " +"operator mendefinisikan ``maxUnavailable`` dan ``maxSurge`` dalam " +"strategi pembaruan yang lebih tinggi dari ``minAvailable`` dalam anggaran " +"gangguan pod, skenario dapat terjadi di mana pod gagal digusur dari sebuah " +"penyebaran." + +msgid "" +":code:`neutron/templates/bin/_neutron-linuxbridge-agent-init.sh.tpl` is " +"configuring the tunnel IP, external bridge and all bridge mappings defined " +"in config. It is done in init container, and the IP for tunneling is shared " +"using file :code:`/tmp/pod-shared/ml2-local-ip.ini` with main linuxbridge " +"container."
+msgstr "" +":code:`neutron/templates/bin/_neutron-linuxbridge-agent-init.sh.tpl` " +"mengkonfigurasi IP terowongan (tunnel IP), jembatan eksternal dan semua " +"pemetaan jembatan yang didefinisikan dalam konfigurasi. Hal ini dilakukan " +"dalam init container, dan IP untuk tunneling dibagikan menggunakan file :code:`/" +"tmp/pod-shared/ml2-local-ip.ini` dengan wadah linuxbridge utama." + +msgid "" +"A detail worth mentioning is that ovs is configured to use sockets, rather " +"than the default loopback mechanism." +msgstr "" +"Detail yang perlu disebutkan adalah bahwa ovs dikonfigurasi untuk menggunakan " +"soket, daripada mekanisme loopback default." + +msgid "" +"A long-term goal, besides being image agnostic, is to also be able to " +"support any of the container runtimes that Kubernetes supports, even those " +"that might not use Docker's own packaging format. This will allow the " +"project to continue to offer maximum flexibility with regard to operator " +"choice." +msgstr "" +"Sasaran jangka panjang, selain sebagai image agnostik, juga dapat mendukung " +"runtime kontainer apa pun yang didukung Kubernetes, bahkan yang " +"mungkin tidak menggunakan format pengepakan Docker sendiri. Ini akan " +"memungkinkan proyek untuk terus menawarkan fleksibilitas maksimum berkaitan " +"dengan pilihan operator." + +msgid "" +"All ``Deployment`` chart components are outfitted by default with rolling " +"update strategies:" +msgstr "" +"Semua komponen chart ``Deployment`` sudah dilengkapi secara default dengan " +"strategi pembaruan bergulir:" + +msgid "All dependencies described in neutron-dhcp-agent are valid here." +msgstr "" +"Semua dependensi yang dijelaskan dalam neutron-dhcp-agent berlaku di sini." + +msgid "" +"All of the above configs are endpoints or path to the specific class " +"implementing the interface. You can see the endpoints to class mapping in " +"`setup.cfg `_."
+msgstr "" +"Semua konfigurasi di atas adalah endpoints atau path ke kelas khusus yang " +"mengimplementasikan antarmuka. Anda dapat melihat pemetaan endpoints ke kelas " +"di `setup.cfg `_." + +msgid "" +"Also note that other non-overridden values are inherited by hosts and labels " +"with overrides. The following shows a set of example hosts and the values " +"fed into the configmap for each:" +msgstr "" +"Juga perhatikan bahwa nilai non-override lainnya diwarisi oleh host dan " +"label dengan override. Berikut ini menunjukkan satu set contoh hosts dan " +"nilai yang dimasukkan ke dalam configmap untuk masing-masing:" + +msgid "" +"An illustrative example of an ``images:`` section taken from the heat chart:" +msgstr "" +"Contoh ilustratif dari bagian ``images:`` yang diambil dari heat chart:" + +msgid "" +"Another place where the DHCP agent is dependent of L2 agent is the " +"dependency for the L2 agent daemonset:" +msgstr "" +"Tempat lain di mana agen DHCP bergantung pada agen L2 adalah ketergantungan " +"untuk daemonset agen L2:" + +msgid "" +"As Helm stands today, several issues exist when you update images within " +"charts that might have been used by jobs that already ran to completion or " +"are still in flight. OpenStack-Helm developers will continue to work with " +"the Helm community or develop charts that will support job removal prior to " +"an upgrade, which will recreate services with updated images. An example of " +"where this behavior would be desirable is when an updated db\\_sync image " +"has updated to point from a Mitaka image to a Newton image. In this case, " +"the operator will likely want a db\\_sync job, which was already run and " +"completed during site installation, to run again with the updated image to " +"bring the schema inline with the Newton release."
+msgstr "" +"Saat Helm berdiri hari ini, ada beberapa masalah ketika Anda memperbarui " +"image dalam chart yang mungkin telah digunakan oleh pekerjaan yang sudah " +"berjalan hingga selesai atau masih dalam penerbangan (in flight). Pengembang " +"OpenStack-Helm akan terus bekerja dengan komunitas Helm atau mengembangkan " +"chart yang akan mendukung penghapusan pekerjaan sebelum peningkatan, yang " +"akan membuat ulang layanan dengan image yang diperbarui. Contoh di mana " +"perilaku ini akan diinginkan adalah ketika image db\\_sync yang diperbarui " +"telah diperbarui untuk menunjuk dari image Mitaka ke image Newton. Dalam hal " +"ini, operator kemungkinan akan menginginkan pekerjaan db\\_sync, yang sudah " +"dijalankan dan diselesaikan selama instalasi situs, untuk dijalankan kembali " +"dengan image yang diperbarui agar skema selaras dengan rilis Newton." + +msgid "" +"As an example, this line uses the ``endpoint_type_lookup_addr`` macro in the " +"``helm-toolkit`` chart (since it is used by all charts). Note that there is " +"a second convention here. All ``{{ define }}`` macros in charts should be " +"pre-fixed with the chart that is defining them. This allows developers to " +"easily identify the source of a Helm macro and also avoid namespace " +"collisions. In the example above, the macro ``endpoint_type_look_addr`` is " +"defined in the ``helm-toolkit`` chart. This macro is passing three " +"parameters (aided by the ``tuple`` method built into the go/sprig templating " +"library used by Helm):" +msgstr "" +"Sebagai contoh, baris ini menggunakan makro ``endpoint_type_lookup_addr`` " +"dalam chart ``helm-toolkit`` (karena ini digunakan oleh semua charts). " +"Perhatikan bahwa ada konvensi kedua di sini. Semua makro ``{{ define }}`` " +"dalam charts harus diberi prefiks dengan chart yang mendefinisikannya. Ini " +"memungkinkan pengembang untuk dengan mudah mengidentifikasi sumber makro " +"Helm dan juga menghindari tabrakan namespace. 
Dalam contoh di atas, makro " +"``endpoint_type_look_addr`` ditentukan dalam chart ``helm-toolkit``. Makro " +"ini meneruskan tiga parameter (dibantu oleh metode ``tuple`` yang dibangun di " +"dalam go/sprig templating library yang digunakan oleh Helm):" + +msgid "" +"As part of Neutron chart, this daemonset is running Neutron OVS agent. It is " +"dependent on having :code:`openvswitch-db` and :code:`openvswitch-vswitchd` " +"deployed and ready. Since its the default choice of the networking backend, " +"all configuration is in place in `neutron/values.yaml`. :code:`neutron-ovs-" +"agent` should not be deployed when another SDN is used in `network.backend`." +msgstr "" +"Sebagai bagian dari Neutron chart, daemonset ini menjalankan agen Neutron OVS. " +"Ini bergantung pada :code:`openvswitch-db` dan :code:`openvswitch-" +"vswitchd` yang sudah disebarkan dan siap. Karena ini adalah pilihan default dari " +"backend jaringan, semua konfigurasi sudah ada di `neutron/values.yaml`. :" +"code:`neutron-ovs-agent` tidak boleh digunakan ketika SDN lain digunakan " +"dalam `network.backend`." + +msgid "" +"By default, each endpoint is located in the same namespace as the current " +"service's helm chart. To connect to a service which is running in a " +"different Kubernetes namespace, a ``namespace`` can be provided to each " +"individual endpoint." +msgstr "" +"Secara default, setiap endpoint ditempatkan di namespace yang sama dengan " +"chart helm layanan (service's helm) saat ini. Untuk menyambung ke layanan " +"yang berjalan di namespace Kubernetes yang berbeda, ``namespace`` dapat " +"diberikan ke setiap endpoint individual." + +msgid "" +"Charts should not use hard coded values such as ``http://keystone-api:5000`` " +"because these are not compatible with operator overrides and do not support " +"spreading components out over various namespaces."
+msgstr "" +"Charts tidak boleh menggunakan nilai berkode keras (hard coded values) " +"seperti ``http://keystone-api:5000`` karena ini tidak kompatibel dengan " +"penggantian operator dan tidak mendukung penyebaran komponen di berbagai " +"namespaces." + +msgid "" +"Configuration of OVS bridges can be done via `neutron/templates/bin/_neutron-" +"openvswitch-agent-init.sh.tpl`. The script is configuring the external " +"network bridge and sets up any bridge mappings defined in :code:`network." +"auto_bridge_add`. These values should be align with :code:`conf.plugins." +"openvswitch_agent.ovs.bridge_mappings`." +msgstr "" +"Konfigurasi jembatan OVS dapat dilakukan melalui `neutron/templates/bin/" +"_neutron-openvswitch-agent-init.sh.tpl`. Skrip mengkonfigurasi jembatan " +"jaringan eksternal dan membuat pemetaan jembatan yang didefinisikan dalam :" +"code:`network.auto_bridge_add`. Nilai ini harus sejajar dengan :code:`conf.plugins." +"openvswitch_agent.ovs.bridge_mappings`." + +msgid "" +"Configure neutron-server with SDN specific core_plugin/mechanism_drivers." +msgstr "" +"Konfigurasikan neutron-server dengan SDN specific core_plugin/" +"mechanism_drivers." + +msgid "Configuring network plugin" +msgstr "Mengonfigurasi plugin jaringan" + +msgid "Contents:" +msgstr "Contents (isi):" + +msgid "Create separate chart with new SDN deployment method." +msgstr "Buat chart terpisah dengan metode penerapan SDN baru." + +msgid "" +"Currently OpenStack-Helm supports OpenVSwitch and LinuxBridge as a network " +"virtualization engines. In order to support many possible backends (SDNs), " +"modular architecture of Neutron chart was developed. OpenStack-Helm can " +"support every SDN solution that has Neutron plugin, either core_plugin or " +"mechanism_driver." +msgstr "" +"Saat ini OpenStack-Helm mendukung OpenVSwitch dan LinuxBridge sebagai mesin " +"virtualisasi jaringan. Untuk mendukung banyak kemungkinan backend (SDNs), " +"arsitektur modular Neutron chart dikembangkan. 
OpenStack-Helm dapat " +"mendukung setiap solusi SDN yang memiliki plugin Neutron, baik core_plugin " +"atau mechanism_driver." + +msgid "DHCP - auto-assign IP address and DNS info" +msgstr "DHCP - auto-assign IP address dan info DNS" + +msgid "" +"DHCP agent is running dnsmasq process which is serving the IP assignment and " +"DNS info. DHCP agent is dependent on the L2 agent wiring the interface. So " +"one should be aware that when changing the L2 agent, it also needs to be " +"changed in the DHCP agent. The configuration of the DHCP agent includes " +"option `interface_driver`, which will instruct how the tap interface created " +"for serving the request should be wired." +msgstr "" +"Agen DHCP menjalankan proses dnsmasq yang melayani penugasan IP (IP " +"assignment) dan info DNS. Agen DHCP tergantung pada agen L2 yang " +"menghubungkan antar muka. Jadi orang harus menyadari bahwa ketika mengubah " +"agen L2, itu juga perlu diubah dalam agen DHCP. Konfigurasi agen DHCP " +"termasuk opsi `interface_driver`, yang akan menginstruksikan bagaimana " +"antarmuka tap yang dibuat untuk melayani permintaan harus ditransfer." + +msgid "Developer References" +msgstr "Referensi Pengembang" + +msgid "" +"EFK (Elasticsearch, Fluent-bit & Fluentd, Kibana) based Logging Mechanism" +msgstr "" +"EFK (Elasticsearch, Fluent-bit & Fluentd, Kibana) berdasarkan Logging " +"Mechanism" + +msgid "Endpoints" +msgstr "Endpoints (titik akhir)" + +msgid "" +"Fluent-bit, Fluentd meet OpenStack-Helm's logging requirements for " +"gathering, aggregating, and delivering of logged events. Fluent-bit runs as " +"a daemonset on each node and mounts the `/var/lib/docker/containers` " +"directory. The Docker container runtime engine directs events posted to " +"stdout and stderr to this directory on the host. Fluent-bit then forward the " +"contents of that directory to Fluentd. Fluentd runs as deployment at the " +"designated nodes and expose service for Fluent-bit to forward logs. 
Fluentd " +"should then apply the Logstash format to the logs. Fluentd can also write " +"kubernetes and OpenStack metadata to the logs. Fluentd will then forward the " +"results to Elasticsearch and to optionally Kafka. Elasticsearch indexes the " +"logs in a logstash-* index by default. Kafka stores the logs in a ``logs`` " +"topic by default. Any external tool can then consume the ``logs`` topic." +msgstr "" +"Fluent-bit, Fluentd memenuhi persyaratan logging OpenStack-Helm untuk " +"gathering, aggregating, dan delivering peristiwa yang tercatat. Fluent-bit " +"berjalan sebagai daemonset pada setiap node dan me-mount direktori `/var/" +"lib/docker/containers`. Engine runtime kontainer Docker mengarahkan events " +"yang diposting ke stdout dan stderr ke direktori ini pada host. Fluent-bit " +"kemudian meneruskan isi direktori itu ke Fluentd. Fluentd berfungsi sebagai " +"penyebaran (deployment) di node yang ditunjuk dan mengekspos layanan untuk " +"Fluent-bit meneruskan log. Fluentd kemudian harus menerapkan format " +"Logstash ke log. Fluentd juga dapat menulis metadata kubernetes dan " +"OpenStack ke log. Fluentd kemudian akan meneruskan hasil ke Elasticsearch " +"dan, secara opsional, ke Kafka. Elasticsearch mengindeks log dalam logstash-* index " +"secara default. Kafka menyimpan log dalam topik ``logs`` secara default. " +"Setiap alat eksternal kemudian dapat mengkonsumsi topik ``logs``." + +msgid "" +"For instance, in the Neutron chart ``values.yaml`` the following endpoints " +"are defined:" +msgstr "" +"Sebagai contoh, dalam Neutron chart ``values.yaml``, endpoint berikut " +"didefinisikan:" + +msgid "Host overrides supercede label overrides" +msgstr "Penggantian host mengesampingkan penggantian label" + +msgid "" +"If :code:`.Values.manifests.daemonset_ovs_agent` will be set to false, " +"neutron ovs agent would not be launched. In that matter, other type of L2 or " +"L3 agent on compute node can be run."
+msgstr "" +"Jika :code:`.Values.manifests.daemonset_ovs_agent` akan disetel ke false, " +"agen neutron ovs tidak akan diluncurkan. Dalam hal ini, jenis lain dari agen " +"L2 atau L3 pada node komputasi dapat dijalankan." + +msgid "If required, add new networking agent label type." +msgstr "Jika diperlukan, tambahkan jenis label agen jaringan baru." + +msgid "" +"If the SDN implements its own version of L3 networking, neutron-l3-agent " +"should not be started." +msgstr "" +"Jika SDN mengimplementasikan versi jaringan L3nya sendiri, neutron-l3-agent " +"tidak boleh dimulai." + +msgid "" +"If the SDN of your choice is using the ML2 core plugin, then the extra " +"options in `neutron/ml2/plugins/ml2_conf.ini` should be configured:" +msgstr "" +"Jika SDN pilihan Anda menggunakan plugin ML2 core, maka opsi tambahan di " +"`neutron/ml2/plugins/ml2_conf.ini` harus dikonfigurasi:" + +msgid "Images" +msgstr "Images" + +msgid "" +"In ``values.yaml`` in each chart, the same defaults are supplied in every " +"chart, which allows the operator to override at upgrade or deployment time." +msgstr "" +"Dalam ``values.yaml`` di setiap chart, default yang sama disediakan dalam " +"setiap chart, yang memungkinkan operator untuk meng-override saat upgrade " +"atau waktu deployment." 
+ +msgid "" +"In order to add support for more SDNs, these steps need to be performed:" +msgstr "" +"Untuk menambahkan dukungan untuk lebih banyak SDN, langkah-langkah ini perlu " +"dilakukan:" + +msgid "" +"In order to meet modularity criteria of Neutron chart, section `manifests` " +"in :code:`neutron/values.yaml` contains boolean values describing which " +"Neutron's Kubernetes resources should be deployed:" +msgstr "" +"Untuk memenuhi kriteria modularitas chart Neutron, bagian `manifes` dalam :" +"code:`neutron/values.yaml` berisi nilai boolean yang menjelaskan sumber " +"Kubernetes Neutron mana yang harus digunakan:" + +msgid "" +"In order to use linuxbridge in your OpenStack-Helm deployment, you need to " +"label the compute and controller/network nodes with `linuxbridge=enabled` " +"and use this `neutron/values.yaml` override:" +msgstr "" +"Untuk menggunakan linuxbridge dalam penyebaran OpenStack-Helm Anda, Anda " +"perlu memberi label node komputasi dan controller/network dengan " +"`linuxbridge=enabled` dan menggunakan pengalih (override) `neutron/values." +"yaml` ini:" + +msgid "" +"Introducing a new SDN solution should consider how the above services are " +"provided. It maybe required to disable built-in Neutron functionality." +msgstr "" +"Memperkenalkan solusi SDN baru harus mempertimbangkan bagaimana layanan di " +"atas disediakan. Mungkin diperlukan untuk menonaktifkan fungsi Neutron " +"bawaan." + +msgid "" +"L3 agent is serving the routing capabilities for Neutron networks. It is " +"also dependent on the L2 agent wiring the tap interface for the routers." +msgstr "" +"Agen L3 melayani kemampuan routing untuk jaringan Neutron. Hal ini juga " +"tergantung pada agen L2 yang memasang antarmuka keran (tap interface) untuk " +"router." + +msgid "L3 routing - creation of routers" +msgstr "L3 routing - pembuatan router" + +msgid "Linuxbridge" +msgstr "Linuxbridge" + +msgid "" +"Linuxbridge is the second type of Neutron reference architecture L2 agent. 
It is running on nodes labeled `linuxbridge=enabled`. As mentioned before, " +"all nodes that are requiring the L2 services need to be labeled with " +"linuxbridge. This includes both the compute and controller/network nodes. It " +"is not possible to label the same node with both openvswitch and linuxbridge " +"(or any other network virtualization technology) at the same time." +msgstr "" +"Linuxbridge adalah tipe kedua agen L2 arsitektur referensi Neutron. " +"Ini berjalan pada node berlabel `linuxbridge=enabled`. Seperti disebutkan " +"sebelumnya, semua node yang membutuhkan layanan L2 perlu diberi label dengan " +"linuxbridge. Ini termasuk node komputasi dan controller/network. Tidaklah " +"mungkin untuk melabeli node yang sama dengan openvswitch maupun " +"linuxbridge (atau teknologi virtualisasi jaringan lainnya) pada saat yang " +"bersamaan." + +msgid "Logging Mechanism" +msgstr "Logging Mechanism (mekanisme logging)" + +msgid "Logging Requirements" +msgstr "Logging Requirements (persyaratan logging)" + +msgid "Metadata - Provide proxy for Nova metadata service" +msgstr "Metadata - Menyediakan proxy untuk layanan metadata Nova" + +msgid "" +"Metadata-agent is a proxy to nova-metadata service. This one provides " +"information about public IP, hostname, ssh keys, and any tenant specific " +"information. The same dependencies apply for metadata as it is for DHCP and " +"L3 agents. Other SDNs may require to force the config driver in nova, since " +"the metadata service is not exposed by it." +msgstr "" +"Metadata-agent adalah proxy untuk layanan nova-metadata. Agen ini " +"memberikan informasi tentang IP publik, nama host, kunci ssh, dan informasi " +"khusus penyewa apa pun. Ketergantungan yang sama berlaku untuk metadata " +"seperti halnya untuk DHCP dan agen L3. SDN lain mungkin perlu memaksa driver " +"konfigurasi di nova, karena layanan metadata tidak diekspos olehnya."
+ +msgid "Networking" +msgstr "Networking (jaringan)" + +msgid "Neutron architecture" +msgstr "Arsitektur Neutron" + +msgid "Neutron chart includes the following services:" +msgstr "Chart Neutron mencakup layanan berikut:" + +msgid "" +"Neutron-server service is scheduled on nodes with `openstack-control-" +"plane=enabled` label." +msgstr "" +"Layanan Neutron-server dijadwalkan pada node dengan label `openstack-control-" +"plane=enabled`." + +msgid "Node and label specific configurations" +msgstr "Node dan label konfigurasi tertentu" + +msgid "Note that only one set of overrides is applied per node, such that:" +msgstr "" +"Perhatikan bahwa hanya satu set penggantian diterapkan per node, sehingga:" + +msgid "" +"Note that some additional values have been injected into the config file, " +"this is performed via statements in the configmap template, which also calls " +"the ``helm-toolkit.utils.to_oslo_conf`` to convert the yaml to the required " +"layout:" +msgstr "" +"Perhatikan bahwa beberapa nilai tambahan telah disuntikkan ke file " +"konfigurasi, ini dilakukan melalui pernyataan dalam template configmap, yang " +"juga memanggil ``helm-toolkit.utils.to_oslo_conf`` untuk mengonversi yaml ke " +"tata letak yang diperlukan:" + +msgid "" +"Note: Rolling update values can conflict with values defined in each " +"service's PodDisruptionBudget. See `here `_ for more " +"information." +msgstr "" +"Catatan: Nilai pembaruan bergulir dapat bertentangan dengan nilai yang " +"ditentukan dalam PodDisruptionBudget layanan masing-masing. Lihat `here " +"`_ untuk informasi lebih lanjut." + +msgid "Nova config dependency" +msgstr "Nova config dependency" + +msgid "OSLO-Config Values" +msgstr "OSLO-Config Values" + +msgid "" +"OpenStack-Helm defines a centralized logging mechanism to provide insight " +"into the state of the OpenStack services and infrastructure components as " +"well as underlying Kubernetes platform. 
Among the requirements for a logging " +"platform, where log data can come from and where log data need to be " +"delivered are very variable. To support various logging scenarios, OpenStack-" +"Helm should provide a flexible mechanism to meet with certain operation " +"needs." +msgstr "" +"OpenStack-Helm mendefinisikan mekanisme logging terpusat untuk memberikan " +"wawasan tentang keadaan layanan OpenStack dan komponen infrastruktur serta " +"platform Kubernetes yang mendasari. Di antara persyaratan untuk platform " +"logging, di mana data log dapat berasal dari dan di mana data log harus " +"dikirimkan sangat bervariasi. Untuk mendukung berbagai skenario logging, " +"OpenStack-Helm harus menyediakan mekanisme yang fleksibel untuk memenuhi " +"kebutuhan operasi tertentu." + +msgid "" +"OpenStack-Helm generates oslo-config compatible formatted configuration " +"files for services dynamically from values specified in a yaml tree. This " +"allows operators to control any and all aspects of an OpenStack services " +"configuration. An example snippet for an imaginary Keystone configuration is " +"described here:" +msgstr "" +"OpenStack-Helm menghasilkan file konfigurasi yang diformat oslo-config " +"kompatibel untuk layanan secara dinamis dari nilai yang ditentukan dalam " +"pohon yaml (yaml tree). Ini memungkinkan operator untuk mengontrol setiap " +"dan semua aspek dari konfigurasi layanan OpenStack. Contoh cuplikan untuk " +"konfigurasi Keystone imajiner dijelaskan di sini:" + +msgid "" +"OpenStack-Helm leverages PodDistruptionBudgets to enforce quotas that ensure " +"that a certain number of replicas of a pod are available at any given time. " +"This is particularly important in the case when a Kubernetes node needs to " +"be drained." +msgstr "" +"OpenStack-Helm memanfaatkan PodDistruptionBudgets untuk menegakkan kuota " +"yang memastikan bahwa sejumlah replika pod tertentu tersedia pada waktu " +"tertentu. 
Ini sangat penting dalam kasus ketika node Kubernetes perlu " +"dikuras." + +msgid "" +"OpenStack-Helm provides fast and lightweight log forwarder and full featured " +"log aggregator complementing each other providing a flexible and reliable " +"solution. Especially, Fluent-bit is used as a log forwarder and Fluentd is " +"used as a main log aggregator and processor." +msgstr "" +"OpenStack-Helm menyediakan log forwarder yang cepat dan ringan dan agregator " +"log fitur lengkap yang saling melengkapi menyediakan solusi yang fleksibel " +"dan dapat diandalkan. Secara khusus, Fluent-bit digunakan sebagai log forwarder " +"dan Fluentd digunakan sebagai aggregator dan prosesor log utama." + +msgid "OpenVSwitch" +msgstr "OpenVSwitch" + +msgid "Other SDNs" +msgstr "SDN lainnya" + +msgid "Other networking services provided by Neutron are:" +msgstr "Layanan jaringan lain yang disediakan oleh Neutron adalah:" + +msgid "Pod Disruption Budgets" +msgstr "Pod Disruption Budgets" + +msgid "" +"SDNs implementing ML2 driver can add extra/plugin-specific configuration " +"options in `neutron/ml2/plugins/ml2_conf.ini`. Or define its own " +"`ml2_conf_.ini` file where configs specific to the SDN would be placed." +msgstr "" +"SDN yang mengimplementasikan driver ML2 dapat menambahkan opsi konfigurasi extra/" +"plugin-specific di `neutron/ml2/plugins/ml2_conf.ini`. Atau tentukan sendiri " +"file `ml2_conf_.ini` di mana konfigurasi khusus untuk SDN akan " +"ditempatkan." + +msgid "" +"Script in :code:`neutron/templates/bin/_neutron-openvswitch-agent-init.sh." +"tpl` is responsible for determining the tunnel interface and its IP for " +"later usage by :code:`neutron-ovs-agent`. The IP is set in init container " +"and shared between init container and main container with :code:`neutron-ovs-" +"agent` via file :code:`/tmp/pod-shared/ml2-local-ip.ini`." +msgstr "" +"Skrip dalam :code:`neutron/templates/bin/_neutron-openvswitch-agent-init.sh."
+"tpl` bertanggung jawab untuk menentukan antarmuka terowongan (tunnel " +"interface) dan IP-nya untuk digunakan nanti oleh :code:`neutron-ovs-agent`. " +"IP diatur dalam init container dan dibagi antara init container dan " +"container utama dengan :code:`neutron-ovs-agent` melalui file :code:`/tmp/" +"pod-shared/ml2-local-ip.ini`." + +msgid "" +"Specify if new SDN would like to use existing services from Neutron: L3, " +"DHCP, metadata." +msgstr "" +"Tentukan apakah SDN baru ingin menggunakan layanan yang ada dari Neutron: " +"L3, DHCP, metadata." + +msgid "" +"The Neutron reference architecture provides mechanism_drivers :code:" +"`OpenVSwitch` (OVS) and :code:`linuxbridge` (LB) with ML2 :code:" +"`core_plugin` framework." +msgstr "" +"Arsitektur referensi Neutron menyediakan mechanism_drivers :code:" +"`OpenVSwitch` (OVS) dan :code:`linuxbridge` (LB) with ML2 :code:" +"`core_plugin` framework." + +msgid "" +"The OpenStack-Helm project also implements annotations across all chart " +"configmaps so that changing resources inside containers, such as " +"configuration files, triggers a Kubernetes rolling update. This means that " +"those resources can be updated without deleting and redeploying the service " +"and can be treated like any other upgrade, such as a container image change." +msgstr "" +"Proyek OpenStack-Helm juga mengimplementasikan anotasi di semua chart " +"configmaps sehingga mengubah sumber daya di dalam kontainer, seperti file " +"konfigurasi, memicu pembaruan bergulir Kubernetes. Ini berarti bahwa sumber " +"daya tersebut dapat diperbarui tanpa menghapus dan menerapkan ulang layanan " +"dan dapat diperlakukan seperti peningkatan lainnya, seperti perubahan " +"container image." + +msgid "" +"The OpenStack-Helm project assumes all upgrades will be done through Helm. " +"This includes handling several different resource types. First, changes to " +"the Helm chart templates themselves are handled. 
Second, all of the " +"resources layered on top of the container image, such as ``ConfigMaps`` " +"which includes both scripts and configuration files, are updated during an " +"upgrade. Finally, any image references will result in rolling updates of " +"containers, replacing them with the updating image." +msgstr "" +"Proyek OpenStack-Helm mengasumsikan semua upgrade akan dilakukan melalui " +"Helm. Ini termasuk menangani beberapa jenis sumber daya yang berbeda. " +"Pertama, perubahan pada template chart Helm itu sendiri ditangani. Kedua, " +"semua sumber daya yang berlapis di atas container image, seperti " +"``ConfigMaps`` yang mencakup file skrip dan konfigurasi, diperbarui selama " +"upgrade. Akhirnya, setiap referensi image akan menghasilkan pembaruan " +"bergulir kontainer, menggantikannya dengan image yang diperbarui." + +msgid "" +"The OpenStack-Helm project today uses a mix of Docker images from " +"Stackanetes and Kolla, but will likely standardize on a default set of " +"images for all charts without any reliance on image-specific utilities." +msgstr "" +"Proyek OpenStack-Helm saat ini menggunakan campuran image Docker dari " +"Stackanetes dan Kolla, tetapi kemungkinan akan distandardisasi pada " +"sekumpulan image default untuk semua chart tanpa bergantung pada image-" +"specific utilities." + +msgid "" +"The ``hash`` function defined in the ``helm-toolkit`` chart ensures that any " +"change to any file referenced by configmap-bin.yaml or configmap-etc.yaml " +"results in a new hash, which will then trigger a rolling update." +msgstr "" +"Fungsi ``hash`` yang ditentukan dalam chart ``helm-toolkit`` memastikan " +"bahwa perubahan apa pun ke file apa pun yang direferensikan oleh configmap-" +"bin.yaml atau configmap-etc.yaml menghasilkan hash baru, yang kemudian akan " +"memicu pembaruan bergulir."
+ +msgid "The above configuration options are handled by `neutron/values.yaml`:" +msgstr "Opsi konfigurasi di atas ditangani oleh `neutron/values.yaml`:" + +msgid "" +"The farther down the list the label appears, the greater precedence it has. " +"e.g., \"another-label\" overrides will apply to a node containing both " +"labels." +msgstr "" +"Semakin jauh ke bawah daftar sebuah label muncul, semakin besar " +"prioritasnya. Misalnya, penggantian \"another-label\" akan berlaku untuk node " +"yang berisi kedua label." + +msgid "" +"The following standards are in use today, in addition to any components " +"defined by the service itself:" +msgstr "" +"Standar berikut sedang digunakan saat ini, selain komponen yang ditentukan " +"oleh layanan itu sendiri:" + +msgid "" +"The macros that help translate these into the actual URLs necessary are " +"defined in the ``helm-toolkit`` chart. For instance, the cinder chart " +"defines a ``glance_api_servers`` definition in the ``cinder.conf`` template:" +msgstr "" +"Makro yang membantu menerjemahkan ini ke URL yang sebenarnya diperlukan " +"didefinisikan dalam chart ``helm-toolkit``. Sebagai contoh, chart cinder " +"mendefinisikan definisi ``glance_api_servers`` dalam template ``cinder." +"conf``:" + +msgid "" +"The ovs set of daemonsets are running on the node labeled " +"`openvswitch=enabled`. This includes the compute and controller/network " +"nodes. For more flexibility, OpenVSwitch as a tool was split out of Neutron " +"chart, and put in separate chart dedicated OpenVSwitch. Neutron OVS agent " +"remains in Neutron chart. Splitting out the OpenVSwitch creates " +"possibilities to use it with different SDNs, adjusting the configuration " +"accordingly." +msgstr "" +"Set daemonset ovs berjalan di node berlabel `openvswitch=enabled`. Ini " +"termasuk node komputasi dan controller/network. 
Untuk fleksibilitas lebih, " +"OpenVSwitch sebagai alat dibagi dari chart Neutron, dan dimasukkan ke dalam " +"chart terpisah yang didedikasikan OpenVSwitch. Agen OVS Neutron tetap dalam " +"chart Neutron. Memisahkan OpenVSwitch menciptakan kemungkinan untuk " +"menggunakannya dengan SDN berbeda, menyesuaikan konfigurasi yang sesuai." + +msgid "" +"The project's core philosophy regarding images is that the toolsets required " +"to enable the OpenStack services should be applied by Kubernetes itself. " +"This requires OpenStack-Helm to develop common and simple scripts with " +"minimal dependencies that can be overlaid on any image that meets the " +"OpenStack core library requirements. The advantage of this is that the " +"project can be image agnostic, allowing operators to use Stackanetes, Kolla, " +"LOCI, or any image flavor and format they choose and they will all function " +"the same." +msgstr "" +"Filosofi inti proyek mengenai images adalah bahwa alat yang diperlukan untuk " +"mengaktifkan layanan OpenStack harus diterapkan oleh Kubernetes itu sendiri. " +"Hal ini membutuhkan OpenStack-Helm untuk mengembangkan skrip umum dan " +"sederhana dengan dependensi minimal yang dapat dihamparkan (overlaid) pada " +"images apa pun yang memenuhi persyaratan perpustakaan inti OpenStack. " +"Keuntungan dari ini adalah bahwa proyek dapat menjadi agnostik image, yang " +"memungkinkan operator untuk menggunakan Stackanetes, Kolla, LOCI, atau " +"berbagai flavor dan format image yang mereka pilih dan semuanya akan " +"berfungsi sama." + +msgid "" +"The project's goal is to provide a consistent mechanism for endpoints. " +"OpenStack is a highly interconnected application, with various components " +"requiring connectivity details to numerous services, including other " +"OpenStack components and infrastructure elements such as databases, queues, " +"and memcached infrastructure. 
The project's goal is to ensure that it can " +"provide a consistent mechanism for defining these \"endpoints\" across all " +"charts and provide the macros necessary to convert those definitions into " +"usable endpoints. The charts should consistently default to building " +"endpoints that assume the operator is leveraging all charts to build their " +"OpenStack cloud. Endpoints should be configurable if an operator would like " +"a chart to work with their existing infrastructure or run elements in " +"different namespaces." +msgstr "" +"Tujuan proyek adalah menyediakan mekanisme yang konsisten untuk endpoints. " +"OpenStack adalah aplikasi yang sangat saling berhubungan, dengan berbagai " +"komponen yang membutuhkan rincian konektivitas ke berbagai layanan, termasuk " +"komponen OpenStack dan elemen infrastruktur lainnya seperti database, " +"antrian (queues), dan infrastruktur memcache. Tujuan proyek adalah untuk " +"memastikan bahwa ia dapat memberikan mekanisme yang konsisten untuk " +"mendefinisikan \"endpoints\" ini di semua charts dan menyediakan makro yang " +"diperlukan untuk mengubah definisi tersebut menjadi endpoints yang dapat " +"digunakan. Charts tersebut harus secara konsisten menjadi standar untuk " +"membangun endpoints yang mengasumsikan operator memanfaatkan semua charts " +"untuk membangun cloud OpenStack mereka. Endpoints harus dapat dikonfigurasi " +"jika operator ingin bagan bekerja dengan infrastruktur yang ada atau " +"menjalankan elemen di namespaces yang berbeda." + +msgid "" +"The resulting logs can then be queried directly through Elasticsearch, or " +"they can be viewed via Kibana. Kibana offers a dashboard that can create " +"custom views on logged events, and Kibana integrates well with Elasticsearch " +"by default." +msgstr "" +"Log yang dihasilkan kemudian dapat ditanyakan secara langsung melalui " +"Elasticsearch, atau mereka dapat dilihat melalui Kibana. 
Kibana menawarkan " +"dasbor yang dapat membuat tampilan khusus atas event yang dicatat, dan " +"Kibana terintegrasi dengan baik dengan Elasticsearch secara default." + +msgid "" +"There are situations where we need to define configuration differently for " +"different nodes in the environment. For example, we may require that some " +"nodes have a different vcpu_pin_set or other hardware specific deltas in " +"nova.conf." +msgstr "" +"Ada situasi di mana kita perlu mendefinisikan konfigurasi secara berbeda " +"untuk berbagai node di lingkungan. Sebagai contoh, kami mungkin mengharuskan " +"beberapa node memiliki vcpu_pin_set yang berbeda atau delta spesifik " +"perangkat keras lainnya di nova.conf." + +msgid "" +"There is also a need for DHCP agent to pass ovs agent config file (in :code:" +"`neutron/templates/bin/_neutron-dhcp-agent.sh.tpl`):" +msgstr "" +"Agen DHCP juga perlu meneruskan file konfigurasi agen ovs " +"(dalam :code:`neutron/templates/bin/_neutron-dhcp-agent.sh.tpl`):" + +msgid "" +"These quotas are configurable by modifying the ``minAvailable`` field within " +"each PodDistruptionBudget manifest, which is conveniently mapped to a " +"templated variable inside the ``values.yaml`` file. The ``min_available`` " +"within each service's ``values.yaml`` file can be represented by either a " +"whole number, such as ``1``, or a percentage, such as ``80%``. For example, " +"when deploying 5 replicas of a pod (such as keystone-api), using " +"``min_available: 3`` would enforce policy to ensure at least 3 replicas were " +"running, whereas using ``min_available: 80%`` would ensure that 4 replicas " +"of that pod are running." +msgstr "" +"Kuota ini dapat dikonfigurasi dengan memodifikasi field ``minAvailable`` " +"dalam setiap manifes PodDistruptionBudget, yang dengan mudah dipetakan ke " +"variabel templated di dalam file ``values.yaml``. 
Nilai ``min_available`` dalam " +"file ``values.yaml`` masing-masing layanan dapat direpresentasikan dengan " +"bilangan bulat, seperti ``1``, atau persentase, seperti ``80%``. " +"Misalnya, ketika menerapkan 5 replika pod (seperti keystone-api), " +"menggunakan ``min_available: 3`` akan menegakkan kebijakan untuk memastikan " +"setidaknya 3 replika berjalan, sedangkan menggunakan ``min_available: 80%`` " +"akan memastikan bahwa 4 replika pod tersebut sedang berjalan." + +msgid "" +"These values define all the endpoints that the Neutron chart may need in " +"order to build full URL compatible endpoints to various services. Long-term, " +"these will also include database, memcached, and rabbitmq elements in one " +"place. Essentially, all external connectivity can be be defined centrally." +msgstr "" +"Nilai ini mendefinisikan semua endpoints yang mungkin diperlukan Neutron " +"chart untuk membangun endpoint kompatibel URL lengkap ke berbagai layanan. " +"Jangka panjang, ini juga akan mencakup elemen database, memcached, dan " +"rabbitmq di satu tempat. Pada dasarnya, semua konektivitas eksternal dapat " +"didefinisikan secara terpusat." + +msgid "" +"This daemonset includes the linuxbridge Neutron agent with bridge-utils and " +"ebtables utilities installed. This is all that is needed, since linuxbridge " +"uses native kernel libraries." +msgstr "" +"Daemonset ini termasuk linuxbridge Neutron agent dengan utilitas bridge-" +"utils dan ebtables yang diinstal. Ini semua yang diperlukan, karena " +"linuxbridge menggunakan pustaka kernel native." + +msgid "This is accomplished with the following annotation:" +msgstr "Ini dilakukan dengan anotasi berikut:" + +msgid "" +"This option will allow to configure the Neutron services in proper way, by " +"checking what is the actual backed set in :code:`neutron/values.yaml`."
+msgstr "" +"Pilihan ini akan memungkinkan untuk mengkonfigurasi layanan Neutron dengan " +"cara yang tepat, dengan memeriksa backend apa yang sebenarnya diatur " +"dalam :code:`neutron/values.yaml`." + +msgid "" +"This requirement is OVS specific, the `ovsdb_connection` string is defined " +"in `openvswitch_agent.ini` file, specifying how DHCP agent can connect to " +"ovs. When using other SDNs, running the DHCP agent may not be required. When " +"the SDN solution is addressing the IP assignments in another way, neutron's " +"DHCP agent should be disabled." +msgstr "" +"Persyaratan ini khusus OVS, string `ovsdb_connection` didefinisikan dalam " +"file `openvswitch_agent.ini`, menentukan bagaimana agen DHCP dapat terhubung " +"ke ovs. Saat menggunakan SDN lain, menjalankan agen DHCP mungkin tidak " +"diperlukan. Ketika solusi SDN menangani IP assignments dengan cara lain, " +"agen DHCP neutron harus dinonaktifkan." + +msgid "" +"This runs the OVS tool and database. OpenVSwitch chart is not Neutron " +"specific, it may be used with other technologies that are leveraging the OVS " +"technology, such as OVN or ODL." +msgstr "" +"Ini menjalankan OVS tool dan database. Chart OpenVSwitch tidak spesifik " +"Neutron, ia dapat digunakan dengan teknologi lain yang memanfaatkan " +"teknologi OVS, seperti OVN atau ODL." + +msgid "" +"This will be consumed by the templated ``configmap-etc.yaml`` manifest to " +"produce the following config file:" +msgstr "" +"Ini akan digunakan oleh manifes templated ``configmap-etc.yaml`` untuk " +"menghasilkan file konfigurasi berikut:" + +msgid "" +"To be able to configure multiple networking plugins inside of OpenStack-" +"Helm, a new configuration option is added:" +msgstr "" +"Untuk dapat mengkonfigurasi beberapa plugin jaringan di dalam OpenStack-" +"Helm, opsi konfigurasi baru ditambahkan:" + +msgid "" +"To do this, we can specify overrides in the values fed to the chart. 
Ex:" +msgstr "" +"Untuk melakukan ini, kita dapat menentukan penggantian (overrides) pada " +"nilai yang diumpankan ke chart. Contoh:" + +msgid "" +"To enable new SDN solution, there should be separate chart created, which " +"would handle the deployment of service, setting up the database and any " +"related networking functionality that SDN is providing." +msgstr "" +"Untuk mengaktifkan solusi SDN baru, harus ada bagan terpisah yang dibuat, " +"yang akan menangani penyebaran layanan, pengaturan database dan fungsi " +"jaringan terkait yang disediakan SDN." + +msgid "" +"To that end, all charts provide an ``images:`` section that allows operators " +"to override images. Also, all default image references should be fully " +"spelled out, even those hosted by Docker or Quay. Further, no default image " +"reference should use ``:latest`` but rather should be pinned to a specific " +"version to ensure consistent behavior for deployments over time." +msgstr "" +"Untuk itu, semua charts menyediakan bagian ``images:`` yang memungkinkan " +"operator untuk mengganti image. Juga, semua referensi image default harus " +"sepenuhnya tereja (spelled out), bahkan yang dihosting oleh Docker atau " +"Quay. Lebih lanjut, tidak ada referensi image default yang harus menggunakan " +"``:latest`` tetapi sebaiknya disematkan (pinned) ke versi tertentu untuk " +"memastikan perilaku yang konsisten untuk penerapan dari waktu ke waktu." + +msgid "" +"To use other Neutron reference architecture types of SDN, these options " +"should be configured in :code:`neutron.conf`:" +msgstr "" +"Untuk menggunakan jenis arsitektur referensi Neutron lainnya dari SDN, opsi " +"ini harus dikonfigurasi dalam :code:`neutron.conf`:" + +msgid "" +"Today, the ``images:`` section has several common conventions. Most " +"OpenStack services require a database initialization function, a database " +"synchronization function, and a series of steps for Keystone registration " +"and integration. 
Each component may also have a specific image that composes " +"an OpenStack service. The images may or may not differ, but regardless, " +"should all be defined in ``images``." +msgstr "" +"Hari ini, bagian ``images:`` memiliki beberapa konvensi umum. Sebagian besar " +"layanan OpenStack memerlukan fungsi inisialisasi basis data, fungsi " +"sinkronisasi basis data, dan serangkaian langkah untuk pendaftaran dan " +"integrasi Keystone. Setiap komponen juga dapat memiliki image khusus yang " +"membentuk layanan OpenStack. Images mungkin atau mungkin tidak berbeda, " +"tetapi bagaimanapun, semua harus didefinisikan dalam ``images``." + +msgid "Typical networking API request is an operation of create/update/delete:" +msgstr "" +"Permintaan API jaringan tipikal adalah operasi dari create/update/delete:" + +msgid "Upgrades and Reconfiguration" +msgstr "Upgrades dan Konfigurasi Ulang" + +msgid "" +"Whenever we change the L2 agent, it should be reflected in `nova/values." +"yaml` in dependency resolution for nova-compute." +msgstr "" +"Setiap kali kita mengubah agen L2, itu harus tercermin dalam `nova/values." +"yaml` dalam resolusi ketergantungan untuk nova-compute." 
+ +msgid "" +"``host1.fqdn`` with labels ``compute-type: dpdk, sriov`` and ``another-" +"label: another-value``:" +msgstr "" +"``host1.fqdn`` dengan label ``compute-type: dpdk, sriov`` dan ``another-" +"label: another-value``:" + +msgid "" +"``host2.fqdn`` with labels ``compute-type: dpdk, sriov`` and ``another-" +"label: another-value``:" +msgstr "" +"``host2.fqdn`` dengan label ``compute-type: dpdk, sriov`` dan ``another-" +"label: another-value``:" + +msgid "" +"``host3.fqdn`` with labels ``compute-type: dpdk, sriov`` and ``another-" +"label: another-value``:" +msgstr "" +"``host3.fqdn`` dengan label ``compute-type: dpdk, sriov`` dan ``another-" +"label: another-value``:" + +msgid "``host4.fqdn`` with labels ``compute-type: dpdk, sriov``:" +msgstr "``host4.fqdn`` dengan label ``compute-type: dpdk, sriov``:" + +msgid "``host5.fqdn`` with no labels:" +msgstr "``host5.fqdn`` tanpa label:" + +msgid "" +"api: This is the port to map to for the service. Some components, such as " +"glance, provide an ``api`` port and a ``registry`` port, for example." +msgstr "" +"api: Ini adalah port untuk dipetakan ke layanan. Beberapa komponen, seperti " +"glance, menyediakan port ``api`` dan port ``registry``, misalnya." + +msgid "" +"db\\_drop: The image that will perform database deletion operations for the " +"OpenStack service." +msgstr "" +"db\\_drop: Image yang akan melakukan operasi penghapusan basis data untuk " +"layanan OpenStack." + +msgid "" +"db\\_init: The image that will perform database creation operations for the " +"OpenStack service." +msgstr "" +"db\\_init: Image yang akan melakukan operasi pembuatan basis data untuk " +"layanan OpenStack." + +msgid "" +"db\\_sync: The image that will perform database sync (schema initialization " +"and migration) for the OpenStack service." +msgstr "" +"db\\_sync: Image yang akan melakukan sinkronisasi database (skema " +"inisialisasi dan migrasi) untuk layanan OpenStack." 
+ +msgid "" +"dep\\_check: The image that will perform dependency checking in an init-" +"container." +msgstr "" +"dep\\_check: Image yang akan melakukan pemeriksaan dependensi dalam init-" +"container." + +msgid "" +"image: This is the OpenStack service that the endpoint is being built for. " +"This will be mapped to ``glance`` which is the image service for OpenStack." +msgstr "" +"image: Ini adalah layanan OpenStack yang menjadi tujuan pembangunan " +"endpoint. Ini akan dipetakan ke ``glance`` yang merupakan layanan image " +"untuk OpenStack." + +msgid "" +"internal: This is the OpenStack endpoint type we are looking for - valid " +"values would be ``internal``, ``admin``, and ``public``" +msgstr "" +"internal: Ini adalah tipe endpoint OpenStack yang kami cari - nilai yang " +"valid adalah ``internal``, ``admin``, dan ``public``" + +msgid "" +"ks\\_endpoints: The image that will perform keystone endpoint registration " +"for the service." +msgstr "" +"ks\\_endpoints: Image yang akan melakukan pendaftaran endpoint keystone " +"untuk layanan." + +msgid "" +"ks\\_service: The image that will perform keystone service registration for " +"the service." +msgstr "" +"ks\\_service: Image yang akan melakukan pendaftaran layanan keystone untuk " +"layanan." + +msgid "" +"ks\\_user: The image that will perform keystone user creation for the " +"service." +msgstr "" +"ks\\_user: Image yang akan melakukan pembuatan pengguna keystone untuk " +"layanan ini." + +msgid "network" +msgstr "network" + +msgid "neutron-dhcp-agent" +msgstr "neutron-dhcp-agent" + +msgid "" +"neutron-dhcp-agent service is scheduled to run on nodes with the label " +"`openstack-control-plane=enabled`." +msgstr "" +"layanan neutron-dhcp-agent dijadwalkan untuk berjalan di node dengan label " +"`openstack-control-plane=enabled`." + +msgid "neutron-l3-agent" +msgstr "neutron-l3-agent" + +msgid "" +"neutron-l3-agent service is scheduled to run on nodes with the label " +"`openstack-control-plane=enabled`." 
+msgstr "" +"layanan neutron-l3-agent dijadwalkan untuk berjalan di node dengan label " +"`openstack-control-plane=enabled`." + +msgid "neutron-lb-agent" +msgstr "neutron-lb-agent" + +msgid "neutron-metadata-agent" +msgstr "neutron-metadata-agent" + +msgid "" +"neutron-metadata-agent service is scheduled to run on nodes with the label " +"`openstack-control-plane=enabled`." +msgstr "" +"layanan neutron-metadata-agent dijadwalkan untuk berjalan di node dengan " +"label `openstack-control-plane=enabled`." + +msgid "neutron-ovs-agent" +msgstr "neutron-ovs-agent" + +msgid "neutron-server" +msgstr "neutron-server" + +msgid "" +"neutron-server is serving the networking REST API for operator and other " +"OpenStack services usage. The internals of Neutron are highly flexible, " +"providing plugin mechanisms for all networking services exposed. The " +"consistent API is exposed to the user, but the internal implementation is up " +"to the chosen SDN." +msgstr "" +"neutron-server melayani API REST jaringan untuk operator dan penggunaan " +"layanan OpenStack lainnya. Internal Neutron sangat fleksibel, menyediakan " +"mekanisme plugin untuk semua layanan jaringan yang terbuka. API yang " +"konsisten terpapar kepada pengguna, tetapi implementasi internal terserah " +"pada SDN yang dipilih." + +msgid "openvswitch-db and openvswitch-vswitchd" +msgstr "openvswitch-db dan openvswitch-vswitchd" + +msgid "port" +msgstr "port" + +msgid "" +"pull\\_policy: The image pull policy, one of \"Always\", \"IfNotPresent\", " +"and \"Never\" which will be used by all containers in the chart." +msgstr "" +"pull\\_policy: Kebijakan pengambilan image, salah satu \"Always\", " +"\"IfNotPresent\", dan \"Never\" yang akan digunakan oleh semua kontainer " +"dalam chart." 
+ +msgid "subnet" +msgstr "subnet" diff --git a/doc/source/locale/id/LC_MESSAGES/doc-install.po b/doc/source/locale/id/LC_MESSAGES/doc-install.po new file mode 100644 index 0000000000..8c94d87730 --- /dev/null +++ b/doc/source/locale/id/LC_MESSAGES/doc-install.po @@ -0,0 +1,1005 @@ +# suhartono , 2018. #zanata +msgid "" +msgstr "" +"Project-Id-Version: openstack-helm\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2018-09-29 05:49+0000\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=UTF-8\n" +"Content-Transfer-Encoding: 8bit\n" +"PO-Revision-Date: 2018-09-27 06:40+0000\n" +"Last-Translator: suhartono \n" +"Language-Team: Indonesian\n" +"Language: id\n" +"X-Generator: Zanata 4.3.3\n" +"Plural-Forms: nplurals=1; plural=0\n" + +msgid "**Check Chart Status**" +msgstr "**Check Chart Status**" + +msgid "" +"**First**, edit the ``values.yaml`` for Neutron, Glance, Horizon, Keystone, " +"and Nova." +msgstr "" +"**First**, edit ``values.yaml`` untuk Neutron, Glance, Horizon, Keystone, " +"dan Nova." + +msgid "**Review the Ingress configuration.**" +msgstr "**Review the Ingress configuration.**" + +msgid "" +"**Second** option would be as ``--set`` flags when calling ``helm install``" +msgstr "" +"Pilihan **Second** akan menjadi ``--set`` flags saat memanggil ``helm " +"install``" + +msgid "16GB of RAM" +msgstr "RAM 16GB" + +msgid "4 Cores" +msgstr "4 Cores" + +msgid "48GB HDD" +msgstr "48GB HDD" + +msgid "8 Cores" +msgstr "8 Cores" + +msgid "8GB of RAM" +msgstr "RAM 8GB" + +msgid "Activate the OpenStack namespace to be able to use Ceph" +msgstr "Aktifkan namespace OpenStack untuk dapat menggunakan Ceph" + +msgid "Activate the openstack namespace to be able to use Ceph" +msgstr "Aktifkan namespace openstack untuk dapat menggunakan Ceph" + +msgid "" +"Add the address of the Kubernetes API, ``172.17.0.1``, and ``.svc.cluster." +"local`` to your ``no_proxy`` and ``NO_PROXY`` environment variables." 
+msgstr "" +"Tambahkan alamat API Kubernetes, ``172.17.0.1``, dan ``.svc.cluster.local`` " +"ke variabel lingkungan ``no_proxy`` dan ``NO_PROXY`` Anda." + +msgid "" +"Add to the Install steps these flags - also adding a shell environment " +"variable to save on repeat code." +msgstr "" +"Tambahkan flags ini ke langkah-langkah Install - juga tambahkan variabel " +"lingkungan shell untuk menghindari pengulangan kode." + +msgid "" +"Additional configuration variables can be found `here `_. In particular, ``kubernetes_cluster_pod_subnet`` can " +"be used to override the pod subnet set up by Calico (the default container " +"SDN), if you have a preexisting network that conflicts with the default pod " +"subnet of 192.168.0.0/16." +msgstr "" +"Variabel konfigurasi tambahan dapat ditemukan `here `_. Khususnya, ``kubernetes_cluster_pod_subnet`` dapat " +"digunakan untuk mengganti subnet pod yang diatur oleh Calico (SDN kontainer " +"default), jika Anda memiliki jaringan yang sudah ada sebelumnya yang " +"bertentangan (conflict) dengan subnet pod default 192.168.0.0/16." + +msgid "" +"Additional information on Kubernetes Ceph-based integration can be found in " +"the documentation for the `CephFS `_ and `RBD `_ storage provisioners, as well as for the alternative `NFS `_ provisioner." +msgstr "" +"Informasi tambahan tentang integrasi Kubernetes berbasis Ceph dapat " +"ditemukan dalam dokumentasi untuk provisioner penyimpanan `CephFS `_ dan " +"`RBD `_, serta untuk provisioner alternatif `NFS `_." + +msgid "" +"After making the configuration changes, run a ``make`` and then install as " +"you would from AIO or MultiNode instructions." +msgstr "" +"Setelah membuat perubahan konfigurasi, jalankan ``make`` dan kemudian instal " +"seperti yang Anda lakukan dari instruksi AIO atau MultiNode." + +msgid "" +"All commands below should be run as a normal user, not as root. 
Appropriate " +"versions of Docker, Kubernetes, and Helm will be installed by the playbooks " +"used below, so there's no need to install them ahead of time." +msgstr "" +"Semua perintah di bawah ini harus dijalankan sebagai pengguna biasa, bukan " +"sebagai root. Versi Docker, Kubernetes, dan Helm yang sesuai akan dipasang " +"oleh playbook yang digunakan di bawah ini, jadi tidak perlu menginstalnya " +"terlebih dahulu." + +msgid "" +"Alternatively, this step can be performed by running the script directly:" +msgstr "" +"Alternatifnya, langkah ini dapat dilakukan dengan menjalankan skrip secara " +"langsung:" + +msgid "" +"An Ingress is a collection of rules that allow inbound connections to reach " +"the cluster services." +msgstr "" +"Ingress adalah kumpulan aturan yang memungkinkan koneksi masuk untuk " +"menjangkau layanan kluster." + +msgid "" +"Below are some instructions and suggestions to help you get started with a " +"Kubeadm All-in-One environment on Ubuntu 16.04. Other supported versions of " +"Linux can also be used, with the appropriate changes to package installation." +msgstr "" +"Di bawah ini adalah beberapa petunjuk dan saran untuk membantu Anda memulai " +"dengan lingkungan Kubeadm All-in-One di Ubuntu 16.04. Versi Linux lainnya " +"yang didukung juga dapat digunakan, dengan perubahan yang sesuai untuk " +"instalasi paket." + +msgid "" +"By default the Calico CNI will use ``192.168.0.0/16`` and Kubernetes " +"services will use ``10.96.0.0/16`` as the CIDR for services. Check that " +"these CIDRs are not in use on the development node before proceeding, or " +"adjust as required." +msgstr "" +"Secara default, Calico CNI akan menggunakan layanan ``192.168.0.0/16`` dan " +"Kubernetes akan menggunakan ``10.96.0.0/16`` sebagai CIDR untuk layanan. " +"Periksa apakah CIDR ini tidak digunakan pada node pengembangan sebelum " +"melanjutkan, atau sesuaikan sesuai kebutuhan." 
+ +msgid "" +"Cinder deployment is not tested in the OSH development environment community " +"gates" +msgstr "" +"Cinder deployment tidak diuji di gate komunitas lingkungan pengembangan OSH" + +msgid "Cleaning the Deployment" +msgstr "Membersihkan Deployment" + +msgid "Clone the OpenStack-Helm Repos" +msgstr "Mengkloning OpenStack-Helm Repos" + +msgid "Code examples below." +msgstr "Contoh kode di bawah ini." + +msgid "Commmon Deployment Requirements" +msgstr "Persyaratan Penerapan Umum" + +msgid "Configure OpenStack" +msgstr "Konfigurasikan OpenStack" + +msgid "" +"Configuring OpenStack for a particular production use-case is beyond the " +"scope of this guide. Please refer to the OpenStack `Configuration `_ documentation for your selected " +"version of OpenStack to determine what additional values overrides should be " +"provided to the OpenStack-Helm charts to ensure appropriate networking, " +"security, etc. is in place." +msgstr "" +"Mengonfigurasi OpenStack untuk kasus penggunaan produksi tertentu berada di " +"luar ruang lingkup panduan ini. Silakan merujuk ke OpenStack `Configuration " +"`_ dokumentasi untuk versi " +"OpenStack pilihan Anda untuk menentukan apa penggantian nilai tambahan harus " +"diberikan ke chart OpenStack-Helm untuk memastikan jaringan yang sesuai, " +"keamanan, dll. sudah ada." + +msgid "Contents:" +msgstr "Isi:" + +msgid "" +"Copy the key: ``sudo cp ~/.ssh/id_rsa /etc/openstack-helm/deploy-key.pem``" +msgstr "" +"Salin kunci: ``sudo cp ~/.ssh/id_rsa /etc/openstack-helm/deploy-key.pem``" + +msgid "Create an environment file" +msgstr "Buat file lingkungan" + +msgid "Create an inventory file" +msgstr "Buat file inventaris" + +msgid "" +"Create an ssh-key on the master node, and add the public key to each node " +"that you intend to join the cluster." +msgstr "" +"Buat ssh-key pada node master, dan tambahkan kunci publik ke setiap node " +"yang Anda inginkan untuk bergabung dengan cluster." 
+ +msgid "Deploy Barbican" +msgstr "Terapkan Barbican" + +msgid "Deploy Ceph" +msgstr "Terapkan Ceph" + +msgid "Deploy Cinder" +msgstr "Terapkan Cinder" + +msgid "Deploy Compute Kit (Nova and Neutron)" +msgstr "Menyebarkan Compute Kit (Nova dan Neutron)" + +msgid "Deploy Glance" +msgstr "Terapkan Glance" + +msgid "Deploy Heat" +msgstr "Terapkan Heat" + +msgid "Deploy Horizon" +msgstr "Terapkan Horizon" + +msgid "Deploy Keystone" +msgstr "Terapkan Keystone" + +msgid "Deploy Kubernetes & Helm" +msgstr "Menyebarkan Kubernetes & Helm" + +msgid "Deploy Libvirt" +msgstr "Terapkan Libvirt" + +msgid "Deploy MariaDB" +msgstr "Terapkan MariaDB" + +msgid "Deploy Memcached" +msgstr "Terapkan Memcached" + +msgid "Deploy NFS Provisioner" +msgstr "Menerapkan NFS Provisioner" + +msgid "Deploy OpenStack-Helm" +msgstr "Gunakan OpenStack-Helm" + +msgid "Deploy OpenvSwitch" +msgstr "Terapkan OpenvSwitch" + +msgid "Deploy RabbitMQ" +msgstr "Terapkan RabbitMQ" + +msgid "Deploy Rados Gateway for object store" +msgstr "Menerapkan Rados Gateway untuk menyimpan objek" + +msgid "Deploy the ingress controller" +msgstr "Pasang ingress controller" + +msgid "Deployment With Ceph" +msgstr "Deployment Dengan Ceph" + +msgid "Deployment With NFS" +msgstr "Deployment Dengan NFS" + +msgid "Development" +msgstr "Pengembangan" + +msgid "Environment tear-down" +msgstr "Environment tear-down (lingkungan meruntuhkan)" + +msgid "" +"Essentially the use of Ingress for OpenStack-Helm is an Nginx proxy service. " +"Ingress (Nginx) is accessible by your cluster public IP - e.g. the IP " +"associated with ``kubectl get pods -o wide --all-namespaces | grep ingress-" +"api`` Ingress/Nginx will be listening for server name requests of \"keystone" +"\" or \"keystone.openstack\" and will route those requests to the proper " +"internal K8s Services. These public listeners in Ingress must match the " +"external DNS that you will set up to access your OpenStack deployment. 
Note " +"each rule also has a Service that directs Ingress Controllers allow access " +"to the endpoints from within the cluster." +msgstr "" +"Pada dasarnya penggunaan Ingress untuk OpenStack-Helm adalah layanan proxy " +"Nginx. Ingress (Nginx) dapat diakses oleh IP publik kluster Anda - mis. IP " +"yang terkait dengan ``kubectl get pods -o wide --all-namespaces | grep " +"ingress-api`` Ingress/Nginx akan mendengarkan permintaan nama server " +"\"keystone\" atau \"keystone.openstack\" dan akan mengarahkan permintaan " +"tersebut ke K8s Services internal yang tepat. Pendengar publik di Ingress " +"harus sesuai dengan DNS eksternal yang akan Anda siapkan untuk mengakses " +"penyebaran OpenStack Anda. Perhatikan bahwa setiap aturan (rule) juga " +"memiliki Service yang memungkinkan Ingress Controller memberikan akses ke " +"endpoints dari dalam kluster." + +msgid "Examples" +msgstr "Contoh" + +msgid "Exercise the Cloud" +msgstr "Latihlah Cloud" + +msgid "External DNS and FQDN" +msgstr "External DNS dan FQDN" + +msgid "External DNS to FQDN/Ingress" +msgstr "External DNS ke FQDN/Ingress" + +msgid "" +"For ``identity`` and ``dashboard`` at ``host_fdqn_override.public`` replace " +"``null`` with the value as ``keystone.os.foo.org`` and ``horizon.os.foo.org``" +msgstr "" +"Untuk ``identity`` dan ``dashboard`` pada ``host_fdqn_override.public`` " +"ganti ``null`` dengan nilai seperti ``keystone.os.foo.org`` dan " +"``horizon.os.foo.org``" + +msgid "" +"For a deployment without cinder and horizon the system requirements are:" +msgstr "Untuk penyebaran tanpa cinder dan horizon, persyaratan sistem adalah:" + +msgid "" +"For a lab or proof-of-concept environment, the OpenStack-Helm gate scripts " +"can be used to quickly deploy a multinode Kubernetes cluster using KubeADM " +"and Ansible. Please refer to the deployment guide `here <./kubernetes-gate." +"html>`__." 
+msgstr "" +"Untuk lingkungan lab atau proof-of-concept, skrip gerbang OpenStack-Helm " +"dapat digunakan untuk dengan cepat menyebarkan kluster Kubernetes multinode " +"menggunakan KubeADM dan Ansible. Silakan lihat panduan penerapan `here <./" +"kubernetes-gate.html>`__." + +msgid "" +"For other deployment options, select appropriate ``Deployment with ...`` " +"option from `Index <../developer/index.html>`__ page." +msgstr "" +"Untuk opsi penerapan lainnya, pilih opsi ``Deployment with ...`` dari " +"halaman `Index <../developer/index.html>`__ ." + +msgid "Gate-Based Kubernetes" +msgstr "Gate-Based Kubernetes" + +msgid "Get the Nginx configuration from the Ingress Pod:" +msgstr "Dapatkan konfigurasi Nginx dari Ingress Pod:" + +msgid "Get the ``helm status`` of your chart." +msgstr "Dapatkan ``helm status`` dari chart Anda." + +msgid "Helm Chart Installation" +msgstr "Pemasangan Helm Chart" + +msgid "" +"Horizon deployment is not tested in the OSH development environment " +"community gates" +msgstr "" +"Penyebaran Horizon tidak diuji di gate komunitas lingkungan pengembangan OSH" + +msgid "Host Configuration" +msgstr "Konfigurasi Host" + +msgid "" +"If doing an `AIO install `__, all the ``--set`` flags" +msgstr "" +"Jika melakukan suatu `AIO install `__, semua flags ``--set``" + +msgid "" +"Implementing the FQDN overrides **must** be done at install time. If you run " +"these as helm upgrades, Ingress will notice the updates though none of the " +"endpoint build-out jobs will run again, unless they are cleaned up manually " +"or using a tool like Armada." +msgstr "" +"Penggantian (overrides) FQDN **harus** diterapkan pada waktu instalasi. " +"Jika Anda menjalankan ini sebagai peningkatan helm, Ingress akan melihat " +"pembaruan meskipun tidak ada tugas build-out endpoint yang akan berjalan " +"lagi, kecuali mereka dibersihkan secara manual atau menggunakan alat seperti " +"Armada." 
+ +msgid "" +"In order to access your OpenStack deployment on Kubernetes we can use the " +"Ingress Controller or NodePorts to provide a pathway in. A background on " +"Ingress, OpenStack-Helm fully qualified domain name (FQDN) overrides, " +"installation, examples, and troubleshooting will be discussed here." +msgstr "" +"Untuk mengakses penyebaran OpenStack Anda di Kubernetes kita dapat " +"menggunakan Ingress Controller atau NodePorts untuk menyediakan jalur masuk " +"(pathway in). Latar belakang tentang Ingress, penggantian fully qualified " +"domain name (FQDN) OpenStack-Helm, pemasangan, contoh, dan pemecahan " +"masalah akan dibahas di sini." + +msgid "" +"In order to deploy OpenStack-Helm behind corporate proxy servers, add the " +"following entries to ``openstack-helm-infra/tools/gate/devel/local-vars." +"yaml``." +msgstr "" +"Untuk menyebarkan OpenStack-Helm di belakang server proxy perusahaan, " +"tambahkan entri berikut ke ``openstack-helm-infra/tools/gate/devel/local-" +"vars.yaml``." + +msgid "" +"In order to drive towards a production-ready OpenStack solution, our goal is " +"to provide containerized, yet stable `persistent volumes `_ that Kubernetes can use to " +"schedule applications that require state, such as MariaDB (Galera). Although " +"we assume that the project should provide a \"batteries included\" approach " +"towards persistent storage, we want to allow operators to define their own " +"solution as well. Examples of this work will be documented in another " +"section, however evidence of this is found throughout the project. If you " +"find any issues or gaps, please create a `story `_ to track what can be done to improve our " +"documentation." +msgstr "" +"Untuk menuju solusi OpenStack yang siap produksi, tujuan kami adalah " +"menyediakan paket kemas (containerized), namun stabil `persistent volumes " +"`_ yang " +"dapat Kubernetes gunakan untuk menjadwalkan aplikasi yang membutuhkan " +"status, seperti MariaDB (Galera). 
Meskipun kami menganggap bahwa proyek "
+"harus memberikan pendekatan \"batteries included\" terhadap penyimpanan "
+"persisten, kami ingin mengizinkan operator untuk menentukan solusi mereka "
+"sendiri juga. Contoh pekerjaan ini akan didokumentasikan di bagian lain, "
+"namun buktinya ditemukan di seluruh proyek. Jika Anda menemukan masalah "
+"atau celah, silakan buat `story `_ untuk melacak apa yang dapat dilakukan untuk meningkatkan "
+"dokumentasi kami."
+
+msgid "Ingress"
+msgstr "Ingress"
+
+msgid "Install OpenStack-Helm"
+msgstr "Memasang OpenStack-Helm"
+
+msgid "Installation"
+msgstr "Instalasi"
+
+msgid ""
+"It can be configured to give services externally-reachable URLs, load "
+"balance traffic, terminate SSL, offer name based virtual hosting, and more."
+msgstr ""
+"Ini dapat dikonfigurasi untuk memberi layanan URL yang dapat dijangkau "
+"secara eksternal, menyeimbangkan beban lalu lintas (load balance traffic), "
+"melakukan terminasi SSL, menawarkan virtual hosting berbasis nama, dan "
+"banyak lagi."
+
+msgid "Kubernetes Preparation"
+msgstr "Persiapan Kubernetes"
+
+msgid "Kubernetes and Common Setup"
+msgstr "Kubernetes dan Pengaturan Umum"
+
+msgid "Latest Version Installs"
+msgstr "Pemasangan Versi Terbaru"
+
+msgid ""
+"Look for *server* configuration with a *server_name* matching your desired "
+"FQDN"
+msgstr ""
+"Carilah konfigurasi *server* dengan *server_name* yang cocok dengan FQDN "
+"yang Anda inginkan"
+
+msgid ""
+"Managing and configuring a Kubernetes cluster is beyond the scope of "
+"OpenStack-Helm and this guide."
+msgstr ""
+"Mengelola dan mengkonfigurasi kluster Kubernetes berada di luar lingkup "
+"OpenStack-Helm dan panduan ini."
+
+msgid ""
+"Many of the default container images that are referenced across OpenStack-"
+"Helm charts are not intended for production use; for example, while LOCI and "
+"Kolla can be used to produce production-grade images, their public reference "
+"images are not prod-grade. 
In addition, some of the default images use "
+"``latest`` or ``master`` tags, which are moving targets and can lead to "
+"unpredictable behavior. For production-like deployments, we recommend "
+"building custom images, or at minimum caching a set of known images, and "
+"incorporating them into OpenStack-Helm via values overrides."
+msgstr ""
+"Banyak image kontainer default yang direferensikan di chart OpenStack-Helm "
+"tidak dimaksudkan untuk penggunaan produksi; misalnya, meskipun LOCI dan "
+"Kolla dapat digunakan untuk menghasilkan image kelas produksi, image "
+"referensi publik mereka bukan kelas produksi (not prod-grade). Selain itu, "
+"beberapa image default menggunakan tag ``latest`` atau ``master``, yang "
+"merupakan target bergerak dan dapat menyebabkan perilaku yang tidak dapat "
+"diprediksi. Untuk penyebaran seperti produksi, kami menyarankan Anda untuk "
+"membuat image khusus, atau minimal menyimpan cache satu set image yang "
+"dikenal, dan menggabungkannya ke OpenStack-Helm melalui penggantian nilai."
+
+msgid "Multinode"
+msgstr "Multinode"
+
+msgid ""
+"Note if you need to make a DNS change, you will have to do a uninstall "
+"(``helm delete ``) and install again."
+msgstr ""
+"Perhatikan bahwa jika Anda perlu membuat perubahan DNS, Anda harus melakukan "
+"uninstall (``helm delete ``) dan memasang lagi."
+
+msgid ""
+"Note that this command will only enable you to auth successfully using the "
+"``python-openstackclient`` CLI. To use legacy clients like the ``python-"
+"novaclient`` from the CLI, reference the auth values in ``/etc/openstack/"
+"clouds.yaml`` and run::"
+msgstr ""
+"Perhatikan bahwa perintah ini hanya akan memungkinkan Anda melakukan "
+"autentikasi (auth) dengan berhasil menggunakan CLI ``python-openstackclient``. 
Untuk menggunakan klien " +"legacy seperti ``python-novaclient`` dari CLI, referensi nilai auth di ``/" +"etc/openstack/clouds.yaml`` dan jalankan ::" + +msgid "" +"On the host or master node, install the latest versions of Git, CA Certs & " +"Make if necessary" +msgstr "" +"Pada host atau node master, instal versi terbaru Git, CA Certs & Make jika " +"perlu" + +msgid "On the master node create an environment file for the cluster:" +msgstr "Pada node master buat file lingkungan untuk kluster:" + +msgid "On the master node create an inventory file for the cluster:" +msgstr "Pada node master buat file inventaris untuk kluster:" + +msgid "On the master node run the playbooks:" +msgstr "Pada node master jalankan playbook:" + +msgid "On the worker nodes:" +msgstr "Di worker nodes:" + +msgid "" +"Once OpenStack-Helm has been deployed, the cloud can be exercised either " +"with the OpenStack client, or the same heat templates that are used in the " +"validation gates." +msgstr "" +"Setelah OpenStack-Helm dikerahkan, cloud dapat dijalankan baik dengan klien " +"OpenStack, atau heat template yang sama yang digunakan di gate validasi." + +msgid "" +"Once installed, access the API's or Dashboard at `http://horizon.os.foo.org`" +msgstr "Setelah dipasang, akses API atau Dasbor di `http://horizon.os.foo.org`" + +msgid "" +"Once the host has been configured the repos containing the OpenStack-Helm " +"charts should be cloned onto each node in the cluster:" +msgstr "" +"Setelah host dikonfigurasi, repositori yang berisi chart OpenStack-Helm " +"harus dikloning ke setiap node dalam kluster:" + +msgid "" +"Once the host has been configured the repos containing the OpenStack-Helm " +"charts should be cloned:" +msgstr "" +"Setelah host dikonfigurasi, repositori yang berisi chart OpenStack-Helm " +"harus dikloning:" + +msgid "" +"OpenStack-Helm uses the hosts networking namespace for many pods including, " +"Ceph, Neutron and Nova components. 
For this, to function, as expected pods "
+"need to be able to resolve DNS requests correctly. Ubuntu Desktop and some "
+"other distributions make use of ``mdns4_minimal`` which does not operate as "
+"Kubernetes expects with its default TLD of ``.local``. To operate at "
+"expected either change the ``hosts`` line in the ``/etc/nsswitch.conf``, or "
+"confirm that it matches:"
+msgstr ""
+"OpenStack-Helm menggunakan namespace jaringan host untuk banyak pod, "
+"termasuk komponen Ceph, Neutron, dan Nova. Agar ini berfungsi seperti yang "
+"diharapkan, pod harus dapat menyelesaikan permintaan DNS dengan benar. "
+"Ubuntu Desktop dan beberapa distribusi lain memanfaatkan ``mdns4_minimal`` "
+"yang tidak beroperasi sebagaimana diharapkan Kubernetes dengan TLD default-"
+"nya ``.local``. Agar beroperasi seperti yang diharapkan, ubah baris "
+"``hosts`` di ``/etc/nsswitch.conf``, atau konfirmasikan bahwa baris itu "
+"cocok:"
+
+msgid ""
+"OpenStack-Helm utilizes the `Kubernetes Ingress Controller `__"
+msgstr ""
+"OpenStack-Helm memanfaatkan `Kubernetes Ingress Controller `__"
+
+msgid "OpenStack-Helm-Infra KubeADM deployment"
+msgstr "OpenStack-Helm-Infra KubeADM deployment"
+
+msgid ""
+"Other versions and considerations (such as other CNI SDN providers), config "
+"map data, and value overrides will be included in other documentation as we "
+"explore these options further."
+msgstr ""
+"Versi dan pertimbangan lain (seperti penyedia CNI SDN lainnya), data peta "
+"konfigurasi, dan penggantian nilai akan dimasukkan dalam dokumentasi lain "
+"saat kami mengeksplorasi opsi ini lebih lanjut."
+
+msgid "Overview"
+msgstr "Ikhtisar"
+
+msgid "Passwordless Sudo"
+msgstr "Passwordless Sudo"
+
+msgid ""
+"Please see the supported application versions outlined in the `source "
+"variable file `_."
+msgstr ""
+"Silakan lihat versi aplikasi yang didukung yang diuraikan dalam `source "
+"variable file `_."
+
+msgid ""
+"Prepare ahead of time your FQDN and DNS layouts. 
There are a handful of "
+"OpenStack endpoints you will want exposed for API and Dashboard access."
+msgstr ""
+"Persiapkan sebelumnya tata letak FQDN dan DNS Anda. Ada beberapa endpoint "
+"OpenStack yang ingin Anda buka untuk akses API dan Dashboard."
+
+msgid "Proxy Configuration"
+msgstr "Konfigurasi Proxy"
+
+msgid "Removing Helm Charts"
+msgstr "Menghapus Helm Chart"
+
+msgid "Requirements"
+msgstr "Persyaratan"
+
+msgid "Requirements and Host Configuration"
+msgstr "Persyaratan dan Konfigurasi Host"
+
+msgid "Run the playbooks"
+msgstr "Jalankan playbook"
+
+msgid "SSH-Key preparation"
+msgstr "Persiapan SSH-Key"
+
+msgid ""
+"Set correct ownership: ``sudo chown ubuntu /etc/openstack-helm/deploy-key."
+"pem``"
+msgstr ""
+"Setel kepemilikan yang benar: ``sudo chown ubuntu /etc/openstack-helm/deploy-"
+"key.pem``"
+
+msgid "Setup Clients on the host and assemble the charts"
+msgstr "Siapkan Klien pada host dan susun chart"
+
+msgid "Setup the gateway to the public network"
+msgstr "Siapkan gateway ke jaringan publik"
+
+msgid "System Requirements"
+msgstr "Persyaratan sistem"
+
+msgid ""
+"Test this by ssh'ing to a node and then executing a command with 'sudo'. "
+"Neither operation should require a password."
+msgstr ""
+"Uji ini dengan melakukan ssh ke sebuah node dan kemudian menjalankan "
+"perintah dengan 'sudo'. Kedua operasi tersebut seharusnya tidak memerlukan "
+"kata sandi."
+
+msgid ""
+"The OpenStack clients and Kubernetes RBAC rules, along with assembly of the "
+"charts can be performed by running the following commands:"
+msgstr ""
+"Klien OpenStack dan aturan Kubernetes RBAC, bersama dengan perakitan chart, "
+"dapat disiapkan dengan menjalankan perintah berikut:"
+
+msgid ""
+"The `./tools/deployment/multinode/kube-node-subnet.sh` script requires "
+"docker to run."
+msgstr ""
+"Script `./tools/deployment/multinode/kube-node-subnet.sh` membutuhkan docker "
+"untuk berjalan."
+ +msgid "" +"The ``.svc.cluster.local`` address is required to allow the OpenStack client " +"to communicate without being routed through proxy servers. The IP address " +"``172.17.0.1`` is the advertised IP address for the Kubernetes API server. " +"Replace the addresses if your configuration does not match the one defined " +"above." +msgstr "" +"Alamat ``.svc.cluster.local`` diperlukan untuk memungkinkan klien OpenStack " +"berkomunikasi tanpa diarahkan melalui server proxy. Alamat IP ``172.17.0.1`` " +"adalah alamat IP yang diiklankan (advertised) untuk server API Kubernetes. " +"Ganti alamat jika konfigurasi Anda tidak sesuai dengan yang didefinisikan di " +"atas." + +msgid "The default FQDN's for OpenStack-Helm are" +msgstr "Default FQDN untuk OpenStack-Helm adalah" + +msgid "" +"The example above uses the default values used by ``openstack-helm-infra``." +msgstr "" +"Contoh di atas menggunakan nilai default yang digunakan oleh ``openstack-" +"helm-infra``." + +msgid "" +"The following commands all assume that they are run from the ``/opt/" +"openstack-helm`` directory." +msgstr "" +"Perintah berikut semua menganggap bahwa mereka dijalankan dari direktori ``/" +"opt/openstack-helm``." + +msgid "" +"The following commands all assume that they are run from the ``openstack-" +"helm`` directory and the repos have been cloned as above." +msgstr "" +"Perintah berikut semua berasumsi bahwa mereka dijalankan dari direktori " +"``openstack-helm`` dan repo telah dikloning seperti di atas." + +msgid "" +"The installation procedures below, will take an administrator from a new " +"``kubeadm`` installation to OpenStack-Helm deployment." +msgstr "" +"Prosedur instalasi di bawah ini, akan mengambil administrator dari instalasi " +"``kubeadm`` baru ke penyebaran OpenStack-Helm." 
+
+msgid "The recommended minimum system requirements for a full deployment are:"
+msgstr ""
+"Persyaratan sistem minimum yang disarankan untuk penyebaran lengkap adalah:"
+
+msgid ""
+"The script below configures Ceph to use filesystem directory-based storage. "
+"To configure a custom block device-based backend, please refer to the ``ceph-"
+"osd`` `values.yaml `_."
+msgstr ""
+"Skrip di bawah ini mengonfigurasi Ceph untuk menggunakan penyimpanan "
+"berbasis direktori filesystem. Untuk mengkonfigurasi backend berbasis "
+"perangkat blok kustom, silakan merujuk ke ``ceph-osd`` `values.yaml `_."
+
+msgid ""
+"The upstream Ceph image repository does not currently pin tags to specific "
+"Ceph point releases. This can lead to unpredictable results in long-lived "
+"deployments. In production scenarios, we strongly recommend overriding the "
+"Ceph images to use either custom built images or controlled, cached images."
+msgstr ""
+"Repositori image Ceph hulu (upstream) saat ini tidak menyematkan (pin) tag "
+"ke rilis point Ceph tertentu. Hal ini dapat menyebabkan hasil yang tak "
+"terduga dalam penyebaran jangka panjang. Dalam skenario produksi, kami "
+"sangat menyarankan untuk mengganti image Ceph agar menggunakan baik image "
+"yang dibuat khusus maupun image yang dikontrol dan di-cache."
+
+msgid ""
+"These commands will restore the environment back to a clean Kubernetes "
+"deployment, that can either be manually removed or over-written by "
+"restarting the deployment process. It is recommended to restart the host "
+"before doing so to ensure any residual state, eg. Network interfaces are "
+"removed."
+msgstr ""
+"Perintah ini akan memulihkan lingkungan kembali ke penyebaran Kubernetes "
+"yang bersih, yang dapat dihapus secara manual atau ditimpa (over-written) "
+"dengan memulai kembali proses penerapan. Dianjurkan untuk me-restart host "
+"sebelum melakukannya untuk memastikan setiap status yang tersisa, misalnya "
+"network interface, dihapus."
+
+msgid ""
+"This command will deploy a single node KubeADM administered cluster. This "
+"will use the parameters in ``${OSH_INFRA_PATH}/playbooks/vars.yaml`` to "
+"control the deployment, which can be over-ridden by adding entries to ``"
+"${OSH_INFRA_PATH}/tools/gate/devel/local-vars.yaml``."
+msgstr ""
+"Perintah ini akan menyebarkan kluster satu node yang dikelola KubeADM. Ini "
+"akan menggunakan parameter dalam ``${OSH_INFRA_PATH}/playbooks/vars.yaml`` "
+"untuk mengontrol penyebaran, yang dapat ditimpa (over-ridden) dengan "
+"menambahkan entri ke ``${OSH_INFRA_PATH}/tools/gate/devel/local-vars."
+"yaml``."
+
+msgid ""
+"This guide assumes that users wishing to deploy behind a proxy have already "
+"defined the conventional proxy environment variables ``http_proxy``, "
+"``https_proxy``, and ``no_proxy``."
+msgstr ""
+"Panduan ini mengasumsikan bahwa pengguna yang ingin menyebarkan di belakang "
+"proxy telah mendefinisikan variabel lingkungan proksi konvensional "
+"``http_proxy``, ``https_proxy``, dan ``no_proxy``."
+
+msgid "This guide covers the minimum number of requirements to get started."
+msgstr "Panduan ini mencakup jumlah minimum persyaratan untuk memulai."
+
+msgid ""
+"This installation, by default will use Google DNS servers, 8.8.8.8 or "
+"8.8.4.4 and updates ``resolv.conf``. These DNS nameserver entries can be "
+"changed by updating file ``openstack-helm-infra/tools/images/kubeadm-aio/"
+"assets/opt/playbooks/vars.yaml`` under section ``external_dns_nameservers``."
+msgstr ""
+"Instalasi ini, secara default akan menggunakan server DNS Google, 8.8.8.8 "
+"atau 8.8.4.4, dan memperbarui ``resolv.conf``. Entri nameserver DNS ini "
+"dapat diubah dengan memperbarui file ``openstack-helm-infra/tools/images/"
+"kubeadm-aio/assets/opt/playbooks/vars.yaml`` pada bagian "
+"``external_dns_nameservers``."
+
+msgid ""
+"This installation, by default will use Google DNS servers, 8.8.8.8 or "
+"8.8.4.4 and updates resolv.conf. 
These DNS nameserver entries can be changed " +"by updating file ``/opt/openstack-helm-infra/tools/images/kubeadm-aio/assets/" +"opt/playbooks/vars.yaml`` under section ``external_dns_nameservers``. This " +"change must be done on each node in your cluster." +msgstr "" +"Instalasi ini, secara default akan menggunakan server DNS Google, 8.8.8.8 " +"atau 8.8.4.4 dan memperbarui resolv.conf. Entri server nama DNS ini dapat " +"diubah dengan memperbarui file ``/opt/openstack-helm-infra/tools/images/" +"kubeadm-aio/assets/opt/playbooks/vars.yaml`` di bawah bagian " +"``external_dns_nameservers``. Perubahan ini harus dilakukan pada setiap node " +"di kluster Anda." + +msgid "" +"This will delete all Kubernetes resources generated when the chart was " +"instantiated. However for OpenStack charts, by default, this will not delete " +"the database and database users that were created when the chart was " +"installed. All OpenStack projects can be configured such that upon deletion, " +"their database will also be removed. To delete the database when the chart " +"is deleted the database drop job must be enabled before installing the " +"chart. There are two ways to enable the job, set the job_db_drop value to " +"true in the chart's ``values.yaml`` file, or override the value using the " +"helm install command as follows:" +msgstr "" +"Ini akan menghapus semua sumber daya Kubernetes yang dihasilkan saat chart " +"dibuat. Namun untuk chart OpenStack, secara default, ini tidak akan " +"menghapus database dan pengguna database yang dibuat ketika chart dipasang. " +"Semua proyek OpenStack dapat dikonfigurasi sedemikian rupa sehingga pada " +"saat penghapusan, database mereka juga akan dihapus. Untuk menghapus " +"database ketika chart dihapus, pekerjaan drop database harus diaktifkan " +"sebelum menginstal chart. 
Ada dua cara untuk mengaktifkan pekerjaan ini: "
+"mengatur nilai job_db_drop menjadi true dalam file ``values.yaml`` pada "
+"chart, atau mengganti nilai menggunakan perintah penginstalan helm seperti "
+"berikut:"
+
+msgid ""
+"Throughout this guide the assumption is that the user is: ``ubuntu``. "
+"Because this user has to execute root level commands remotely to other "
+"nodes, it is advised to add the following lines to ``/etc/sudoers`` for each "
+"node:"
+msgstr ""
+"Sepanjang panduan ini, asumsinya adalah bahwa pengguna adalah: ``ubuntu``. "
+"Karena pengguna ini harus mengeksekusi perintah level root dari jarak jauh "
+"ke node lain, disarankan untuk menambahkan baris berikut ke ``/etc/sudoers`` "
+"untuk setiap node:"
+
+msgid ""
+"To copy the ssh key to each node, this can be accomplished with the ``ssh-"
+"copy-id`` command, for example: *ssh-copy-id ubuntu@192.168.122.178*"
+msgstr ""
+"Untuk menyalin kunci ssh ke setiap node, ini dapat diselesaikan dengan "
+"perintah ``ssh-copy-id``, misalnya: *ssh-copy-id ubuntu@192.168.122.178*"
+
+msgid "To delete an installed helm chart, use the following command:"
+msgstr "Untuk menghapus helm chart yang terpasang, gunakan perintah berikut:"
+
+msgid "To generate the key you can use ``ssh-keygen -t rsa``"
+msgstr "Untuk menghasilkan kunci, Anda dapat menggunakan ``ssh-keygen -t rsa``"
+
+msgid ""
+"To run further commands from the CLI manually, execute the following to set "
+"up authentication credentials::"
+msgstr ""
+"Untuk menjalankan perintah lebih lanjut dari CLI secara manual, jalankan "
+"perintah berikut untuk menyiapkan kredensial otentikasi::"
+
+msgid ""
+"To tear-down, the development environment charts should be removed first "
+"from the 'openstack' namespace and then the 'ceph' namespace using the "
+"commands from the `Removing Helm Charts` section. 
Additionally charts should "
+"be removed from the 'nfs' and 'libvirt' namespaces if deploying with NFS "
+"backing or bare metal development support. You can run the following "
+"commands to loop through and delete the charts, then stop the kubelet "
+"systemd unit and remove all the containers before removing the directories "
+"used on the host by pods."
+msgstr ""
+"Untuk meruntuhkan (tear-down), chart lingkungan pengembangan harus dihapus "
+"terlebih dahulu dari namespace 'openstack' dan kemudian namespace 'ceph' "
+"menggunakan perintah dari bagian `Removing Helm Charts`. Selain itu, chart "
+"harus dihapus dari namespace 'nfs' dan 'libvirt' jika menyebarkan dengan "
+"backing NFS atau dukungan pengembangan bare metal. Anda dapat menjalankan "
+"perintah berikut untuk mengulang dan menghapus chart, kemudian menghentikan "
+"unit systemd kubelet dan menghapus semua kontainer sebelum menghapus "
+"direktori yang digunakan pada host oleh pod."
+
+msgid "Troubleshooting"
+msgstr "Penyelesaian masalah"
+
+msgid ""
+"Two similar options exist to set the FQDN overrides for External DNS mapping."
+msgstr ""
+"Ada dua opsi serupa untuk mengatur penggantian (override) FQDN untuk "
+"pemetaan DNS Eksternal."
+
+msgid ""
+"Until the Ubuntu kernel shipped with 16.04 supports CephFS subvolume mounts "
+"by default the `HWE Kernel <../../troubleshooting/ubuntu-hwe-kernel.rst>`__ "
+"is required to use CephFS."
+msgstr ""
+"Sampai kernel Ubuntu yang dikirim dengan 16.04 mendukung CephFS subvolume "
+"mount secara default, `HWE Kernel <../../troubleshooting/ubuntu-hwe-kernel."
+"rst>`__ diperlukan untuk menggunakan CephFS."
+
+msgid ""
+"Until the Ubuntu kernel shipped with 16.04 supports CephFS subvolume mounts "
+"by default the `HWE Kernel <../troubleshooting/ubuntu-hwe-kernel.html>`__ is "
+"required to use CephFS."
+msgstr ""
+"Sampai kernel Ubuntu yang dikirim dengan 16.04 mendukung CephFS subvolume "
+"mount secara default, `HWE Kernel <../troubleshooting/ubuntu-hwe-kernel."
+"html>`__ diperlukan untuk menggunakan CephFS." + +msgid "" +"Update your lab/environment DNS server with your appropriate host values " +"creating A Records for the edge node IP's and various FQDN's. Alternatively " +"you can test these settings locally by editing your ``/etc/hosts``. Below is " +"an example with a dummy domain ``os.foo.org`` and dummy Ingress IP " +"``1.2.3.4``." +msgstr "" +"Perbarui server DNS lab/environment Anda dengan nilai host yang sesuai Anda " +"membuat A Records untuk node edge IP dan berbagai FQDN. Atau Anda dapat " +"menguji pengaturan ini secara lokal dengan mengedit ``/etc/hosts`` Anda. Di " +"bawah ini adalah contoh dengan dummy domain ``os.foo.org`` dan dummy Ingress " +"IP ``1.2.3.4``." + +msgid "Using Horizon as an example, find the ``endpoints`` config." +msgstr "" +"Dengan menggunakan Horizon sebagai contoh, cari konfigurasi ``endpoints``." + +msgid "" +"Using the Helm packages previously pushed to the local Helm repository, run " +"the following commands to instruct tiller to create an instance of the given " +"chart. During installation, the helm client will print useful information " +"about resources created, the state of the Helm releases, and whether any " +"additional configuration steps are necessary." +msgstr "" +"Dengan menggunakan paket Helm yang sebelumnya didorong ke repositori Helm " +"setempat, jalankan perintah berikut untuk menginstruksikan tiller untuk " +"membuat instance dari chart yang diberikan. Selama instalasi, klien helm " +"akan mencetak informasi yang berguna tentang sumber daya yang dibuat, " +"keadaan rilis Helm, dan berbagai langkah konfigurasi tambahan diperlukan." + +msgid "Verify the *v1beta1/Ingress* resource has a Host with your FQDN value" +msgstr "" +"Verifikasi sumber daya *v1beta1/Ingress* memiliki Host dengan nilai FQDN Anda" + +msgid "" +"We want to change the ***public*** configurations to match our DNS layouts " +"above. 
In each Chart ``values.yaml`` is a ``endpoints`` configuration that "
+"has ``host_fqdn_override``'s for each API that the Chart either produces or "
+"is dependent on. `Read more about how Endpoints are developed `__. Note while "
+"Glance Registry is listening on a Ingress http endpoint, you will not need "
+"to expose the registry for external services."
+msgstr ""
+"Kami ingin mengubah konfigurasi ***public*** agar sesuai dengan tata letak "
+"DNS kami di atas. Di setiap ``values.yaml`` Chart terdapat konfigurasi "
+"``endpoints`` yang memiliki ``host_fqdn_override`` untuk setiap API yang "
+"dihasilkan oleh Chart atau yang menjadi dependensinya. `Read more about how "
+"Endpoints are developed `__. Perhatikan bahwa meskipun Glance Registry "
+"mendengarkan pada endpoint http Ingress, Anda tidak perlu mengekspos "
+"registri untuk layanan eksternal."
+
+msgid ""
+"You can use any Kubernetes deployment tool to bring up a working Kubernetes "
+"cluster for use with OpenStack-Helm. For production deployments, please "
+"choose (and tune appropriately) a highly-resilient Kubernetes distribution, "
+"e.g.:"
+msgstr ""
+"Anda dapat menggunakan alat penyebaran Kubernetes apa pun untuk memunculkan "
+"kluster Kubernetes yang berfungsi untuk digunakan dengan OpenStack-Helm. "
+"Untuk penyebaran produksi, pilih (dan selaraskan dengan tepat) distribusi "
+"Kubernetes yang sangat tangguh (highly-resilient), misalnya:"
+
+msgid ""
+"You can use any Kubernetes deployment tool to bring up a working Kubernetes "
+"cluster for use with OpenStack-Helm. This guide describes how to simply "
+"stand up a multinode Kubernetes cluster via the OpenStack-Helm gate scripts, "
+"which use KubeADM and Ansible. Although this cluster won't be production-"
+"grade, it will serve as a quick starting point in a lab or proof-of-concept "
+"environment."
+msgstr ""
+"Anda dapat menggunakan alat penyebaran Kubernetes apa pun untuk memunculkan "
+"klaster Kubernetes yang berfungsi untuk digunakan dengan OpenStack-Helm. 
Panduan ini "
+"menjelaskan cara sederhana untuk membangun kluster Kubernetes multinode "
+"melalui skrip gate OpenStack-Helm, yang menggunakan KubeADM dan Ansible. "
+"Meskipun klaster ini tidak akan menjadi kelas produksi (production-grade), "
+"ini akan berfungsi sebagai titik awal yang cepat di laboratorium atau "
+"lingkungan proof-of-concept."
+
+msgid ""
+"You may now deploy kubernetes, and helm onto your machine, first move into "
+"the ``openstack-helm`` directory and then run the following:"
+msgstr ""
+"Anda sekarang dapat menyebarkan kubernetes dan helm ke komputer Anda; "
+"pertama-tama masuk ke direktori ``openstack-helm`` dan kemudian jalankan "
+"yang berikut:"
+
+msgid ""
+"`Airship `_, a declarative open cloud infrastructure "
+"platform"
+msgstr ""
+"`Airship `_, platform infrastruktur cloud terbuka "
+"deklaratif"
+
+msgid ""
+"`KubeADM `_, the foundation of a number of Kubernetes installation solutions"
+msgstr ""
+"`KubeADM `_, fondasi sejumlah solusi instalasi Kubernetes"
+
+msgid ""
+"node_one, node_two and node_three below are all worker nodes, children of "
+"the master node that the commands below are executed on."
+msgstr ""
+"node_one, node_two, dan node_three di bawah ini semuanya adalah node pekerja "
+"(worker node), anak (children) dari node master tempat perintah di bawah "
+"ini dieksekusi."
diff --git a/doc/source/locale/id/LC_MESSAGES/doc-testing.po b/doc/source/locale/id/LC_MESSAGES/doc-testing.po
new file mode 100644
index 0000000000..f65a1776d4
--- /dev/null
+++ b/doc/source/locale/id/LC_MESSAGES/doc-testing.po
@@ -0,0 +1,732 @@
+# suhartono , 2018. 
#zanata
+msgid ""
+msgstr ""
+"Project-Id-Version: openstack-helm\n"
+"Report-Msgid-Bugs-To: \n"
+"POT-Creation-Date: 2018-09-29 05:49+0000\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"PO-Revision-Date: 2018-09-25 05:17+0000\n"
+"Last-Translator: suhartono \n"
+"Language-Team: Indonesian\n"
+"Language: id\n"
+"X-Generator: Zanata 4.3.3\n"
+"Plural-Forms: nplurals=1; plural=0\n"
+
+msgid "3 Node (VM based) env."
+msgstr "3 Node (VM based) env."
+
+msgid ""
+"5. Replace the failed disk with a new one. If you repair (not replace) the "
+"failed disk, you may need to run the following:"
+msgstr ""
+"5. Ganti disk yang gagal dengan yang baru. Jika Anda memperbaiki (bukan "
+"mengganti) disk yang gagal, Anda mungkin perlu menjalankan yang berikut:"
+
+msgid ""
+"A Ceph Monitor running on voyager3 (whose Monitor database is destroyed) "
+"becomes out of quorum, and the mon-pod's status stays in ``Running`` -> "
+"``Error`` -> ``CrashLoopBackOff`` while keeps restarting."
+msgstr ""
+"Monitor Ceph yang berjalan di voyager3 (yang database Monitor-nya "
+"dihancurkan) menjadi keluar dari kuorum, dan status mon-pod tetap dalam "
+"``Running`` -> ``Error`` -> ``CrashLoopBackOff`` sambil terus melakukan "
+"restart."
+
+msgid "Adding Tests"
+msgstr "Menambahkan Tes"
+
+msgid ""
+"Additional information on Helm tests for OpenStack-Helm and how to execute "
+"these tests locally via the scripts used in the gate can be found in the "
+"gates_ directory."
+msgstr ""
+"Informasi tambahan tentang tes Helm untuk OpenStack-Helm dan cara melakukan "
+"tes ini secara lokal melalui skrip yang digunakan di gate dapat ditemukan di "
+"direktori gates_."
+
+msgid ""
+"After 10+ miniutes, Ceph starts rebalancing with one node lost (i.e., 6 osds "
+"down) and the status stablizes with 18 osds."
+msgstr "" +"Setelah 10+ menit, Ceph mulai menyeimbangkan kembali dengan satu node yang " +"hilang (yaitu, 6 osds turun) dan statusnya stabil dengan 18 osds." + +msgid "After reboot (node voyager3), the node status changes to ``NotReady``." +msgstr "" +"Setelah reboot (node voyager3), status node berubah menjadi ``NotReady``." + +msgid "" +"After the host is down (node voyager3), the node status changes to " +"``NotReady``." +msgstr "" +"Setelah host mati (node voyager3), status node berubah menjadi ``NotReady``." + +msgid "" +"All tests should be added to the gates during development, and are required " +"for any new service charts prior to merging. All Helm tests should be " +"included as part of the deployment script. An example of this can be seen " +"in this script_." +msgstr "" +"Semua tes harus ditambahkan ke gate selama pengembangan, dan diperlukan " +"untuk chart layanan baru sebelum penggabungan. Semua tes Helm harus " +"dimasukkan sebagai bagian dari skrip pemasangan. Contoh ini dapat dilihat " +"dalam skrip ini_." + +msgid "" +"Also, the pod status of ceph-mon and ceph-osd changes from ``NodeLost`` back " +"to ``Running``." +msgstr "" +"Juga, status pod ceph-mon dan ceph-osd berubah dari ``NodeLost`` kembali ke " +"``Running``." + +msgid "Any Helm tests associated with a chart can be run by executing:" +msgstr "" +"Tes Helm apa pun yang terkait dengan chart dapat dijalankan dengan " +"mengeksekusi:" + +msgid "" +"Any templates for Helm tests submitted should follow the philosophies " +"applied in the other templates. These include: use of overrides where " +"appropriate, use of endpoint lookups and other common functionality in helm-" +"toolkit, and mounting any required scripting templates via the configmap-bin " +"template for the service chart. If Rally tests are not appropriate or " +"adequate for a service chart, any additional tests should be documented " +"appropriately and adhere to the same expectations." 
+msgstr "" +"Setiap template untuk tes Helm yang diajukan harus mengikuti filosofi yang " +"diterapkan dalam template lain. Ini termasuk: penggunaan menimpa di mana " +"yang sesuai, penggunaan pencarian endpoint dan fungsi umum lainnya dalam " +"helm-toolkit, dan pemasangan semua scripting template yang diperlukan " +"melalui template configmap-bin untuk chart layanan. Jika pengujian Rally " +"tidak sesuai atau memadai untuk chart layanan, pengujian tambahan apa pun " +"harus didokumentasikan dengan tepat dan mematuhi harapan yang sama." + +msgid "Capture Ceph pods statuses." +msgstr "Capture Ceph pods statuses." + +msgid "Capture Openstack pods statuses." +msgstr "Capture Openstack pods statuses." + +msgid "Capture final Ceph pod statuses:" +msgstr "Capture final Ceph pod statuses:" + +msgid "Capture final Openstack pod statuses:" +msgstr "Capture final Openstack pod statuses:" + +msgid "Case: 1 out of 3 Monitor Processes is Down" +msgstr "Kasus: 1 dari 3 Proses Monitor Sedang Turun" + +msgid "Case: 2 out of 3 Monitor Processes are Down" +msgstr "Kasus: 2 dari 3 Proses Monitor Sedang Turun" + +msgid "Case: 3 out of 3 Monitor Processes are Down" +msgstr "Kasus: 3 dari 3 Proses Monitor Sedang Turun" + +msgid "Case: A OSD pod is deleted" +msgstr "Kasus: Pod OSD dihapus" + +msgid "Case: A disk fails" +msgstr "Kasus: Disk gagal" + +msgid "Case: A host machine where ceph-mon is running is down" +msgstr "Kasus: Mesin host di mana ceph-mon sedang bekerja sedang mati" + +msgid "Case: Monitor database is destroyed" +msgstr "Kasus: Database monitor dimusnahkan" + +msgid "Case: OSD processes are killed" +msgstr "Kasus: Proses OSD dimatikan" + +msgid "Case: One host machine where ceph-mon is running is rebooted" +msgstr "Kasus: Satu mesin host di mana ceph-mon sedang dijalankan di-reboot" + +msgid "Caveats:" +msgstr "Caveats:" + +msgid "Ceph Cephfs provisioner docker images." +msgstr "Ceph Cephfs provisioner docker images." 
+ +msgid "Ceph Luminous point release images for Ceph components" +msgstr "Ceph Luminous point melepaskan image untuk komponen Ceph" + +msgid "Ceph RBD provisioner docker images." +msgstr "Ceph RBD provisioner docker images." + +msgid "Ceph Resiliency" +msgstr "Ceph Resiliency" + +msgid "Ceph Upgrade" +msgstr "Ceph Upgrade" + +msgid "" +"Ceph can be upgreade without downtime for Openstack components in a multinoe " +"env." +msgstr "" +"Ceph dapat ditingkatkan tanpa downtime untuk komponen OpenStack dalam env " +"multinode." + +msgid "" +"Ceph status shows that ceph-mon running on ``voyager3`` becomes out of " +"quorum. Also, 6 osds running on ``voyager3`` are down (i.e., 18 out of 24 " +"osds are up). Some placement groups become degraded and undersized." +msgstr "" +"Status Ceph menunjukkan bahwa ceph-mon yang berjalan pada ``voyager3`` " +"menjadi tidak dapat digunakan. Juga, 6 osds yang berjalan pada ``voyager3`` " +"sedang down (yaitu, 18 dari 24 osds naik). Beberapa kelompok penempatan " +"menjadi terdegradasi dan berukuran kecil." + +msgid "" +"Ceph status shows that ceph-mon running on ``voyager3`` becomes out of " +"quorum. Also, six osds running on ``voyager3`` are down; i.e., 18 osds are " +"up out of 24 osds." +msgstr "" +"Status Ceph menunjukkan bahwa ceph-mon yang berjalan pada ``voyager3`` " +"menjadi tidak dapat digunakan. Juga, enam osds yang berjalan di ``voyager3`` " +"turun; yaitu, 18 osds naik dari 24 osds." + +msgid "Ceph version: 12.2.3" +msgstr "Ceph versi: 12.2.3" + +msgid "Check Ceph Pods" +msgstr "Periksa Ceph Pods" + +msgid "Check version of each Ceph components." +msgstr "Periksa versi setiap komponen Ceph." + +msgid "Check which images Provisionors and Mon-Check PODs are using" +msgstr "Periksa image mana yang digunakan Provisionors dan Mon-Check PODs" + +msgid "Cluster size: 4 host machines" +msgstr "Ukuran cluster: 4 mesin host" + +msgid "Conclusion:" +msgstr "Kesimpulan:" + +msgid "Confirm Ceph component's version." 
+msgstr "Konfirmasi versi komponen Ceph." + +msgid "Continue with OSH multinode guide to install other Openstack charts." +msgstr "" +"Lanjutkan dengan panduan multinode OSH untuk menginstal chart Openstack " +"lainnya." + +msgid "Deploy and Validate Ceph" +msgstr "Menyebarkan dan Memvalidasi Ceph" + +msgid "Disk Failure" +msgstr "Kegagalan Disk" + +msgid "Docker Images:" +msgstr "Docker Images:" + +msgid "" +"Every OpenStack-Helm chart should include any required Helm tests necessary " +"to provide a sanity check for the OpenStack service. Information on using " +"the Helm testing framework can be found in the Helm repository_. Currently, " +"the Rally testing framework is used to provide these checks for the core " +"services. The Keystone Helm test template can be used as a reference, and " +"can be found here_." +msgstr "" +"Setiap OpenStack-Helm chart harus menyertakan tes Helm yang diperlukan untuk " +"memberikan pemeriksaan (sanity check) kewarasan untuk layanan OpenStack. " +"Informasi tentang menggunakan kerangka pengujian Helm dapat ditemukan di " +"repositori Helm. Saat ini, kerangka pengujian Rally digunakan untuk " +"menyediakan pemeriksaan ini untuk layanan inti. Kerangka uji Keystone Helm " +"dapat digunakan sebagai referensi, dan dapat ditemukan di sini_." + +msgid "Find that Ceph is healthy with a lost OSD (i.e., a total of 23 OSDs):" +msgstr "Temukan bahwa Ceph sehat dengan OSD yang hilang (yaitu, total 23 OSD):" + +msgid "Follow all steps from OSH multinode guide with below changes." +msgstr "" +"Ikuti semua langkah dari panduan multinode OSH dengan perubahan di bawah ini." 
+ +msgid "" +"Followed OSH multinode guide steps to setup nodes and install k8 cluster" +msgstr "" +"Mengikuti langkah-langkah panduan multinode OSH untuk mengatur node dan " +"menginstal k8 cluster" + +msgid "Followed OSH multinode guide steps upto Ceph install" +msgstr "Mengikuti panduan multinode OSH langkah-langkah upto Ceph menginstal" + +msgid "Following is a partial part from script to show changes." +msgstr "" +"Berikut ini adalah bagian parsial dari skrip untuk menunjukkan perubahan." + +msgid "" +"From the Kubernetes cluster, remove the failed OSD pod, which is running on " +"``voyager4``:" +msgstr "" +"Dari kluster Kubernetes, hapus pod OSD yang gagal, yang berjalan di " +"``voyager4``:" + +msgid "Hardware Failure" +msgstr "Kegagalan perangkat keras" + +msgid "Helm Tests" +msgstr "Tes Helm" + +msgid "Host Failure" +msgstr "Host Failure" + +msgid "" +"In the mean time, we monitor the status of Ceph and noted that it takes " +"about 30 seconds for the 6 OSDs to recover from ``down`` to ``up``. The " +"reason is that Kubernetes automatically restarts OSD pods whenever they are " +"killed." +msgstr "" +"Sementara itu, kami memantau status Ceph dan mencatat bahwa dibutuhkan " +"sekitar 30 detik untuk 6 OSD untuk memulihkan dari ``down`` ke ``up``. " +"Alasannya adalah Kubernetes secara otomatis merestart pod OSD setiap kali " +"mereka dimatikan." + +msgid "" +"In the mean time, we monitored the status of Ceph and noted that it takes " +"about 24 seconds for the killed Monitor process to recover from ``down`` to " +"``up``. The reason is that Kubernetes automatically restarts pods whenever " +"they are killed." +msgstr "" +"Sementara itu, kami memantau status Ceph dan mencatat bahwa dibutuhkan " +"sekitar 24 detik untuk proses Monitor yang mati untuk memulihkan dari " +"``down`` ke ``up``. Alasannya adalah Kubernetes secara otomatis me-restart " +"pod setiap kali mereka dimatikan." + +msgid "Install Ceph charts (12.2.4) by updating Docker images in overrides." 
+msgstr "" +"Instal Ceph charts (12.2.4) dengan memperbarui Docker images di overrides." + +msgid "Install Ceph charts (version 12.2.4)" +msgstr "Pasang chart Ceph (versi 12.2.4)" + +msgid "Install OSH components as per OSH multinode guide." +msgstr "Instal komponen OSH sesuai panduan multinode OSH." + +msgid "Install Openstack charts" +msgstr "Pasang chart Openstack" + +msgid "" +"It takes longer (about 1 minute) for the killed Monitor processes to recover " +"from ``down`` to ``up``." +msgstr "" +"Diperlukan waktu lebih lama (sekitar 1 menit) untuk proses Monitor yang mati " +"untuk memulihkan dari ``down`` ke ``up``." + +msgid "Kubernetes version: 1.10.5" +msgstr "Kubernetes versi: 1.10.5" + +msgid "Kubernetes version: 1.9.3" +msgstr "Kubernetes version: 1.9.3" + +msgid "Mission" +msgstr "Misi" + +msgid "Monitor Failure" +msgstr "Memantau Kegagalan" + +msgid "" +"Note: To find the daemonset associated with a failed OSD, check out the " +"followings:" +msgstr "" +"Catatan: Untuk menemukan daemon yang terkait dengan OSD yang gagal, periksa " +"yang berikut:" + +msgid "Number of disks: 24 (= 6 disks per host * 4 hosts)" +msgstr "Jumlah disk: 24 (= 6 disk per host * 4 host)" + +msgid "OSD Failure" +msgstr "Kegagalan OSD" + +msgid "OSD count is set to 3 based on env setup." +msgstr "Penghitungan OSD diatur ke 3 berdasarkan pada env setup." + +msgid "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459" +msgstr "OpenStack-Helm commit: 25e50a34c66d5db7604746f4d2e12acbdd6c1459" + +msgid "OpenStack-Helm commit: 28734352741bae228a4ea4f40bcacc33764221eb" +msgstr "OpenStack-Helm commit: 28734352741bae228a4ea4f40bcacc33764221eb" + +msgid "" +"Our focus lies on resiliency for various failure scenarios but not on " +"performance or stress testing." +msgstr "" +"Fokus kami terletak pada ketahanan untuk berbagai skenario kegagalan tetapi " +"tidak pada kinerja atau stress testing." 
+ +msgid "Plan:" +msgstr "Rencana:" + +msgid "Recovery:" +msgstr "Pemulihan:" + +msgid "" +"Remove the entire ceph-mon directory on voyager3, and then Ceph will " +"automatically recreate the database by using the other ceph-mons' database." +msgstr "" +"Hapus seluruh direktori ceph-mon di voyager3, dan kemudian Ceph akan secara " +"otomatis membuat ulang database dengan menggunakan database ceph-mons " +"lainnya." + +msgid "" +"Remove the failed OSD (OSD ID = 2 in this example) from the Ceph cluster:" +msgstr "Hapus OSD yang gagal (OSD ID = 2 dalam contoh ini) dari kluster Ceph:" + +msgid "Resiliency Tests for OpenStack-Helm/Ceph" +msgstr "Tes Ketahanan untuk OpenStack-Helm/Ceph" + +msgid "Running Tests" +msgstr "Menjalankan Tes" + +msgid "Setup:" +msgstr "Mempersiapkan:" + +msgid "" +"Showing partial output from kubectl describe command to show which image is " +"Docker container is using" +msgstr "" +"Menampilkan sebagian output dari kubectl menggambarkan perintah untuk " +"menunjukkan image mana yang digunakan oleh container Docker" + +msgid "Software Failure" +msgstr "Kegagalan Perangkat Lunak" + +msgid "Solution:" +msgstr "Solusi:" + +msgid "Start a new OSD pod on ``voyager4``:" +msgstr "Mulai pod LED baru pada ``voyager 4``:" + +msgid "Steps:" +msgstr "Langkah:" + +msgid "Symptom:" +msgstr "Gejala:" + +msgid "Test Environment" +msgstr "Uji Lingkungan" + +msgid "Test Scenario:" +msgstr "Test Scenario:" + +msgid "Testing" +msgstr "Pengujian" + +msgid "Testing Expectations" +msgstr "Menguji Ekspektasi" + +msgid "" +"The goal of our resiliency tests for `OpenStack-Helm/Ceph `_ is to show symptoms of " +"software/hardware failure and provide the solutions." +msgstr "" +"Tujuan dari uji ketahanan kami untuk `OpenStack-Helm/Ceph `_ adalah untuk menunjukkan " +"gejala kegagalan perangkat lunak/perangkat keras dan memberikan solusi." 
+ +msgid "" +"The logs of the failed mon-pod shows the ceph-mon process cannot run as ``/" +"var/lib/ceph/mon/ceph-voyager3/store.db`` does not exist." +msgstr "" +"Log dari mon-pod gagal menunjukkan proses ceph-mon tidak dapat berjalan " +"karena ``/var/lib/ceph/mon/ceph-voyager3/store.db`` tidak ada." + +msgid "" +"The node status of ``voyager3`` changes to ``Ready`` after the node is up " +"again. Also, Ceph pods are restarted automatically. Ceph status shows that " +"the monitor running on ``voyager3`` is now in quorum." +msgstr "" +"Status node ``voyager3`` berubah menjadi ``Ready`` setelah node naik lagi. " +"Juga, Ceph pod di-restart secara otomatis. Status Ceph menunjukkan bahwa " +"monitor yang dijalankan pada ``voyager3`` sekarang dalam kuorum." + +msgid "" +"The node status of ``voyager3`` changes to ``Ready`` after the node is up " +"again. Also, Ceph pods are restarted automatically. The Ceph status shows " +"that the monitor running on ``voyager3`` is now in quorum and 6 osds gets " +"back up (i.e., a total of 24 osds are up)." +msgstr "" +"Status node ``voyager3`` berubah menjadi ``Ready`` setelah node naik lagi. " +"Juga, Ceph pod di-restart secara otomatis. Status Ceph menunjukkan bahwa " +"monitor yang berjalan pada ``voyager3`` sekarang berada di kuorum dan 6 osds " +"akan kembali (yaitu, total 24 osds naik)." + +msgid "" +"The output of the Helm tests can be seen by looking at the logs of the pod " +"created by the Helm tests. These logs can be viewed with:" +msgstr "" +"Output dari tes Helm dapat dilihat dengan melihat log dari pod yang dibuat " +"oleh tes Helm. Log ini dapat dilihat dengan:" + +msgid "The pod status of ceph-mon and ceph-osd shows as ``NodeLost``." +msgstr "Status pod ceph-mon dan ceph-osd ditampilkan sebagai ``NodeLost``." 
+ +msgid "" +"The status of the pods (where the three Monitor processes are killed) " +"changed as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> " +"``Running`` and this recovery process takes about 1 minute." +msgstr "" +"Status pod (di mana ketiga proses Monitor dimatikan) diubah sebagai berikut: " +"``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` dan proses " +"pemulihan ini memakan waktu sekitar 1 menit." + +msgid "" +"The status of the pods (where the two Monitor processes are killed) changed " +"as follows: ``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` " +"and this recovery process takes about 1 minute." +msgstr "" +"Status pod (di mana kedua proses Monitor mati) diubah sebagai berikut: " +"``Running`` -> ``Error`` -> ``CrashLoopBackOff`` -> ``Running`` dan proses " +"pemulihan ini memakan waktu sekitar 1 menit." + +msgid "" +"This guide documents steps showing Ceph version upgrade. The main goal of " +"this document is to demostrate Ceph chart update without downtime for OSH " +"components." +msgstr "" +"Panduan ini mendokumentasikan langkah-langkah yang menunjukkan upgrade versi " +"Ceph. Tujuan utama dari dokumen ini adalah untuk mendemonstrasikan pembaruan " +"Ceph chart tanpa downtime untuk komponen OSH." + +msgid "" +"This is for the case when a host machine (where ceph-mon is running) is down." +msgstr "" +"Ini untuk kasus ketika mesin host (di mana ceph-mon sedang berjalan) sedang " +"mati." + +msgid "This is to test a scenario when 1 out of 3 Monitor processes is down." +msgstr "Ini untuk menguji skenario ketika 1 dari 3 proses Monitor mati." + +msgid "" +"This is to test a scenario when 2 out of 3 Monitor processes are down. To " +"bring down 2 Monitor processes (out of 3), we identify two Monitor processes " +"and kill them from the 2 monitor hosts (not a pod)." +msgstr "" +"Ini untuk menguji skenario ketika 2 dari 3 proses Monitor sedang down. 
Untuk " +"menurunkan 2 proses Monitor (dari 3), kami mengidentifikasi dua proses " +"Monitor dan mematikannya dari 2 monitor host (bukan pod)." + +msgid "" +"This is to test a scenario when 3 out of 3 Monitor processes are down. To " +"bring down 3 Monitor processes (out of 3), we identify all 3 Monitor " +"processes and kill them from the 3 monitor hosts (not pods)." +msgstr "" +"Ini untuk menguji skenario ketika 3 dari 3 proses Monitor sedang down. Untuk " +"menurunkan 3 proses Monitor (dari 3), kami mengidentifikasi semua 3 proses " +"Monitor dan mematikannya dari 3 monitor host (bukan pod)." + +msgid "" +"This is to test a scenario when a disk failure happens. We monitor the ceph " +"status and notice one OSD (osd.2) on voyager4 which has ``/dev/sdh`` as a " +"backend is down." +msgstr "" +"Ini untuk menguji skenario ketika terjadi kegagalan disk. Kami memonitor " +"status ceph dan melihat satu OSD (osd.2) di voyager4 yang memiliki ``/dev/" +"sdh`` sebagai backend sedang down (mati)." + +msgid "" +"This is to test a scenario when an OSD pod is deleted by ``kubectl delete " +"$OSD_POD_NAME``. Meanwhile, we monitor the status of Ceph and noted that it " +"takes about 90 seconds for the OSD running in deleted pod to recover from " +"``down`` to ``up``." +msgstr "" +"Ini untuk menguji skenario ketika pod OSD dihapus oleh ``kubectl delete " +"$OSD_POD_NAME``. Sementara itu, kami memantau status Ceph dan mencatat bahwa " +"dibutuhkan sekitar 90 detik untuk OSD yang berjalan di pod yang dihapus " +"untuk memulihkan dari ``down`` ke ``up``." + +msgid "This is to test a scenario when some of the OSDs are down." +msgstr "Ini untuk menguji skenario ketika beberapa OSD turun." + +msgid "" +"To bring down 1 Monitor process (out of 3), we identify a Monitor process " +"and kill it from the monitor host (not a pod)." +msgstr "" +"Untuk menurunkan 1 proses Monitor (dari 3), kami mengidentifikasi proses " +"Monitor dan mematikannya dari host monitor (bukan pod)." 
+ +msgid "" +"To bring down 6 OSDs (out of 24), we identify the OSD processes and kill " +"them from a storage host (not a pod)." +msgstr "" +"Untuk menurunkan 6 OSD (dari 24), kami mengidentifikasi proses OSD dan " +"mematikannya dari host penyimpanan (bukan pod)." + +msgid "To replace the failed OSD, excecute the following procedure:" +msgstr "Untuk mengganti OSD yang gagal, jalankan prosedur berikut:" + +msgid "Update Ceph Client chart with new overrides:" +msgstr "Perbarui Ceph Client chart dengan override baru:" + +msgid "Update Ceph Mon chart with new overrides" +msgstr "Perbarui Ceph Mon chart dengan override baru" + +msgid "Update Ceph OSD chart with new overrides:" +msgstr "Perbarui Ceph OSD chart dengan override baru:" + +msgid "Update Ceph Provisioners chart with new overrides:" +msgstr "Perbarui Ceph Provisioners chart dengan override baru:" + +msgid "" +"Update ceph install script ``./tools/deployment/multinode/030-ceph.sh`` to " +"add ``images:`` section in overrides as shown below." +msgstr "" +"Perbarui ceph install script ``./tools/deployment/multinode/030-ceph.sh`` " +"untuk menambahkan bagian ``images:`` di override seperti yang ditunjukkan di " +"bawah ini." + +msgid "" +"Update, image section in new overrides ``ceph-update.yaml`` as shown below" +msgstr "" +"Pembaruan, bagian image di overrides baru ``ceph-update.yaml`` seperti yang " +"ditunjukkan di bawah ini" + +msgid "Upgrade Ceph charts to update version" +msgstr "Tingkatkan Ceph charts untuk memperbarui versi" + +msgid "" +"Upgrade Ceph charts to version 12.2.5 by updating docker images in overrides." +msgstr "" +"Tingkatkan Ceph chart ke versi 12.2.5 dengan memperbarui image docker di " +"overrides." + +msgid "" +"Upgrade Ceph component version from ``12.2.4`` to ``12.2.5`` without " +"downtime to OSH components." +msgstr "" +"Upgrade versi komponen Ceph dari ``12.2.4`` ke ``12.2.5`` tanpa waktu henti " +"ke komponen OSH." 
+ +msgid "" +"Use Ceph override file ``ceph.yaml`` that was generated previously and " +"update images section as below" +msgstr "" +"Gunakan Ceph override file ``ceph.yaml`` yang telah dibuat sebelumnya dan " +"perbarui bagian image seperti di bawah ini" + +msgid "" +"Validate the Ceph status (i.e., one OSD is added, so the total number of " +"OSDs becomes 24):" +msgstr "" +"Validasi status Ceph (yaitu satu OSD ditambahkan, sehingga jumlah total OSD " +"menjadi 24):" + +msgid "" +"We also monitored the pod status through ``kubectl get pods -n ceph`` during " +"this process. The deleted OSD pod status changed as follows: ``Terminating`` " +"-> ``Init:1/3`` -> ``Init:2/3`` -> ``Init:3/3`` -> ``Running``, and this " +"process taks about 90 seconds. The reason is that Kubernetes automatically " +"restarts OSD pods whenever they are deleted." +msgstr "" +"Kami juga memantau status pod melalui ``kubectl get pods -n ceph`` selama " +"proses ini. Status pod OSD yang dihapus diubah sebagai berikut: " +"``Terminating`` -> ``Init:1/3`` -> ``Init:2/3`` -> ``Init:3/3`` -> " +"``Running``, dan proses ini memakan waktu sekitar 90 detik. Alasannya adalah " +"Kubernetes secara otomatis merestart pod OSD setiap kali dihapus." + +msgid "" +"We also monitored the status of the Monitor pod through ``kubectl get pods -" +"n ceph``, and the status of the pod (where a Monitor process is killed) " +"changed as follows: ``Running`` -> ``Error`` -> ``Running`` and this " +"recovery process takes about 24 seconds." +msgstr "" +"Kami juga memantau status pod Monitor melalui ``kubectl get pods -n ceph``, " +"dan status pod (di mana proses Monitor mati) berubah sebagai berikut: " +"``Running`` -> ``Error`` -> ``Running`` dan proses pemulihan ini membutuhkan " +"waktu sekitar 24 detik." + +msgid "" +"We have 3 Monitors in this Ceph cluster, one on each of the 3 Monitor hosts." +msgstr "" +"Kami memiliki 3 Monitor di cluster Ceph ini, satu di masing-masing dari 3 " +"host Monitor." 
+ +msgid "" +"We intentionlly destroy a Monitor database by removing ``/var/lib/openstack-" +"helm/ceph/mon/mon/ceph-voyager3/store.db``." +msgstr "" +"Kami bermaksud menghancurkan database Monitor dengan menghapus ``/var/lib/" +"openstack-helm/ceph/mon/mon/ceph-voyager3/store.db``." + +msgid "" +"We monitored the status of Ceph Monitor pods and noted that the symptoms are " +"similar to when 1 or 2 Monitor processes are killed:" +msgstr "" +"Kami memantau status pod Ceph Monitor dan mencatat bahwa gejalanya mirip " +"dengan ketika 1 atau 2 proses Monitor dimatikan:" + +msgid "" +"We monitored the status of Ceph when the Monitor processes are killed and " +"noted that the symptoms are similar to when 1 Monitor process is killed:" +msgstr "" +"Kami memantau status Ceph ketika proses Monitor dimatikan dan mencatat bahwa " +"gejala mirip dengan ketika 1 Proses monitor dimatikan:" + +msgid "`Disk failure <./disk-failure.html>`_" +msgstr "`Disk failure <./disk-failure.html>`_" + +msgid "`Host failure <./host-failure.html>`_" +msgstr "`Host failure <./host-failure.html>`_" + +msgid "`Monitor failure <./monitor-failure.html>`_" +msgstr "`Monitor failure <./monitor-failure.html>`_" + +msgid "`OSD failure <./osd-failure.html>`_" +msgstr "`OSD failure <./osd-failure.html>`_" + +msgid "" +"``Results:`` All provisioner pods got terminated at once (same time). Other " +"ceph pods are running. No interruption to OSH pods." +msgstr "" +"``Results:`` Semua pod penyedia dihentikan sekaligus (saat yang sama). Ceph " +"pod lainnya sedang berjalan. Tidak ada gangguan pada pod OSH." + +msgid "" +"``Results:`` Mon pods got updated one by one (rolling updates). Each Mon pod " +"got respawn and was in 1/1 running state before next Mon pod got updated. " +"Each Mon pod got restarted. Other ceph pods were not affected with this " +"update. No interruption to OSH pods." +msgstr "" +"``Results:`` Mon pod mendapat pembaruan satu per satu (pembaruan bergulir). 
" +"Setiap Mon pod mendapat respawn dan berada dalam 1/1 keadaan sebelum Mon pod " +"berikutnya diperbarui. Setiap Mon pod mulai dihidupkan ulang. Ceph pod " +"lainnya tidak terpengaruh dengan pembaruan ini. Tidak ada gangguan pada pod " +"OSH." + +msgid "" +"``Results:`` Rolling updates (one pod at a time). Other ceph pods are " +"running. No interruption to OSH pods." +msgstr "" +"``Results:`` Bergulir pembaruan (satu pod dalam satu waktu). Ceph pod " +"lainnya sedang berjalan. Tidak ada gangguan pada pod OSH." + +msgid "" +"``ceph_bootstrap``, ``ceph-config_helper`` and ``ceph_rbs_pool`` images are " +"used for jobs. ``ceph_mon_check`` has one script that is stable so no need " +"to upgrade." +msgstr "" +"Image ``ceph_bootstrap``, ``ceph-config_helper`` and ``ceph_rbs_pool`` " +"digunakan untuk pekerjaan. ``ceph_mon_check`` memiliki satu skrip yang " +"stabil sehingga tidak perlu melakukan upgrade." + +msgid "``cp /tmp/ceph.yaml ceph-update.yaml``" +msgstr "``cp /tmp/ceph.yaml ceph-update.yaml``" + +msgid "``helm upgrade ceph-client ./ceph-client --values=ceph-update.yaml``" +msgstr "``helm upgrade ceph-client ./ceph-client --values=ceph-update.yaml``" + +msgid "``helm upgrade ceph-mon ./ceph-mon --values=ceph-update.yaml``" +msgstr "``helm upgrade ceph-mon ./ceph-mon --values=ceph-update.yaml``" + +msgid "``helm upgrade ceph-osd ./ceph-osd --values=ceph-update.yaml``" +msgstr "``helm upgrade ceph-osd ./ceph-osd --values=ceph-update.yaml``" + +msgid "" +"``helm upgrade ceph-provisioners ./ceph-provisioners --values=ceph-update." +"yaml``" +msgstr "" +"``helm upgrade ceph-provisioners ./ceph-provisioners --values=ceph-update." 
+"yaml``" + +msgid "``series of console outputs:``" +msgstr "``series of console outputs:``" diff --git a/doc/source/locale/ko_KR/LC_MESSAGES/doc-devref.po b/doc/source/locale/ko_KR/LC_MESSAGES/doc-devref.po new file mode 100644 index 0000000000..a2bd7847ff --- /dev/null +++ b/doc/source/locale/ko_KR/LC_MESSAGES/doc-devref.po @@ -0,0 +1,1000 @@ +# Soonyeul Park , 2018. #zanata +msgid "" +msgstr "" +"Project-Id-Version: openstack-helm\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2018-09-29 05:49+0000\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=UTF-8\n" +"Content-Transfer-Encoding: 8bit\n" +"PO-Revision-Date: 2018-09-26 05:52+0000\n" +"Last-Translator: Soonyeul Park \n" +"Language-Team: Korean (South Korea)\n" +"Language: ko_KR\n" +"X-Generator: Zanata 4.3.3\n" +"Plural-Forms: nplurals=1; plural=0\n" + +msgid "" +"**Note:** The values defined in a PodDisruptionBudget may conflict with " +"other values that have been provided if an operator chooses to leverage " +"Rolling Updates for deployments. In the case where an operator defines a " +"``maxUnavailable`` and ``maxSurge`` within an update strategy that is higher " +"than a ``minAvailable`` within a pod disruption budget, a scenario may occur " +"where pods fail to be evicted from a deployment." +msgstr "" +"**참고:** PodDisruptionBudget 에 정의된 값들은 운영자가 배포를 위한 롤링 업데" +"이트를 활용하도록 선택한 경우 제공되는 다른 값들과 충돌할 수 있습니다. 운영자" +"가 pod 중단 비용에서의 ``minAvailable`` 보다 큰 ``maxUnavailable`` 과 " +"``maxSurge`` 를 업데이트 전략으로 정의한 경우, 배포에서 pod가 제거되지 않는 " +"시나리오가 발생할 수 있습니다." + +msgid "" +":code:`neutron/templates/bin/_neutron-linuxbridge-agent-init.sh.tpl` is " +"configuring the tunnel IP, external bridge and all bridge mappings defined " +"in config. It is done in init container, and the IP for tunneling is shared " +"using file :code:`/tmp/pod-shared/ml2-local-ip.ini` with main linuxbridge " +"container." 
+msgstr "" +":code:`neutron/templates/bin/_neutron-linuxbridge-agent-init.sh.tpl`은 터널 " +"IP, 외부 브리지, 그리고 config에 정의된 모든 브리지 매핑을 구성합니다.그것은 " +"init 컨테이너에서 완료되고, 터널링을 위한 IP는 :code:`/tmp/pod-shared/ml2-" +"local-ip.ini` 파일을 사용하여 linuxbridge 컨테이너와 공유됩니다." + +msgid "" +"A detail worth mentioning is that ovs is configured to use sockets, rather " +"than the default loopback mechanism." +msgstr "" +"언급할 가치가 있는 세부사항은 ovs가 기본 루프백 메커니즘 대신에 소켓을 사용하" +"여 구성된다는 것입니다. " + +msgid "" +"A long-term goal, besides being image agnostic, is to also be able to " +"support any of the container runtimes that Kubernetes supports, even those " +"that might not use Docker's own packaging format. This will allow the " +"project to continue to offer maximum flexibility with regard to operator " +"choice." +msgstr "" +"이미지에서 자유로워지는 것 외에도 장기 목표는 Kubernetes가 지원하는 모든 컨테" +"이너 런타임도 지원할 수 있으며, Docker의 자체 패키징 포맷을 사용하지 않는 컨" +"테이너 런타임도 지원할 수 있습니다. 이는 프로젝트에 운영자의 선택과 관련하여 " +"최대한의 유연성을 허용할 것입니다." + +msgid "" +"All ``Deployment`` chart components are outfitted by default with rolling " +"update strategies:" +msgstr "" +"모든 ``Deployment`` 차트 구성요소는 기본적으로 롤링 업데이트 전략으로 갖춰집" +"니다:" + +msgid "All dependencies described in neutron-dhcp-agent are valid here." +msgstr "neutron-dhcp-agent에 서술된 모든 의존성은 여기에서 유효합니다." + +msgid "" +"All of the above configs are endpoints or path to the specific class " +"implementing the interface. You can see the endpoints to class mapping in " +"`setup.cfg `_." +msgstr "" +"위의 모든 구성은 인터페이스를 구현하는 특정 클래스에 대한 endpoints나 경로입" +"니다. `setup.cfg `_에서 클래스 매" +"핑에 대한 endpoints를 볼 수 있습니다." + +msgid "" +"Also note that other non-overridden values are inherited by hosts and labels " +"with overrides. The following shows a set of example hosts and the values " +"fed into the configmap for each:" +msgstr "" +"또한 다른 오버라이드되지 않은 값들은 오버라이드된 호스트와 라벨에 의해 상속됨" +"을 참고하십시오. 
다음은 호스트 예시 세트와 각 configmap 에 입력된 값들을 보여" +"줍니다:" + +msgid "" +"An illustrative example of an ``images:`` section taken from the heat chart:" +msgstr "heat 차트에서 가져온 ``images:`` 섹션의 실제 사례:" + +msgid "" +"Another place where the DHCP agent is dependent of L2 agent is the " +"dependency for the L2 agent daemonset:" +msgstr "" +"DHCP 에이전트가 L2 에이전트에 의존되는 또 다른 위치는 L2 에이전트 데몬세트에 " +"대한 의존성입니다." + +msgid "" +"As Helm stands today, several issues exist when you update images within " +"charts that might have been used by jobs that already ran to completion or " +"are still in flight. OpenStack-Helm developers will continue to work with " +"the Helm community or develop charts that will support job removal prior to " +"an upgrade, which will recreate services with updated images. An example of " +"where this behavior would be desirable is when an updated db\\_sync image " +"has updated to point from a Mitaka image to a Newton image. In this case, " +"the operator will likely want a db\\_sync job, which was already run and " +"completed during site installation, to run again with the updated image to " +"bring the schema inline with the Newton release." +msgstr "" +"Helm이 현재 존재하기 때문에, 이미 완료되었거나 진행 중인 작업에서 사용되었을" +"지 모르는 차트에서 이미지 업데이트를 수행할 때 몇 가지 문제가 발생합니다. " +"OpenStack-Helm 개발자는 Helm 커뮤니티와 계속 협력하거나 업그레이드 전에 작업 " +"제거를 지원하는 차트를 개발하여, 업데이트된 이미지로 서비스를 다시 만들 것입" +"니다. 이 동작이 바람직한 예는 업데이트된 db_sync 이미지가 Mitaka 이미지에서 " +"Neuton 이미지를 가리키도록 업데이트 된 경우입니다. 이 경우, 운영자는 사이트 " +"설치 중에 이미 실행되고 완료된 db_sync 작업으로 Newton 릴리즈의 스키마를 가져" +"와 업데이트된 이미지로 다시 실행하기를 원할 수 있습니다." + +msgid "" +"As an example, this line uses the ``endpoint_type_lookup_addr`` macro in the " +"``helm-toolkit`` chart (since it is used by all charts). Note that there is " +"a second convention here. All ``{{ define }}`` macros in charts should be " +"pre-fixed with the chart that is defining them. This allows developers to " +"easily identify the source of a Helm macro and also avoid namespace " +"collisions. 
In the example above, the macro ``endpoint_type_look_addr`` is " +"defined in the ``helm-toolkit`` chart. This macro is passing three " +"parameters (aided by the ``tuple`` method built into the go/sprig templating " +"library used by Helm):" +msgstr "" +"한 예로, 이 명령줄은 ``endpoint_type_lookup_addr`` 매크로를 ``helm-toolkit`` " +"차트에 사용합니다(모든 차트에서 사용되기 때문에). 여기서 두 번째 규칙이 있음" +"을 참고하십시오. 차트의 모든 ``{{ define }}`` 매크로는 그들을 정의하는 차트" +"로 미리 수정되어야 합니다. 이는 개발자들이 Helm 매크로의 소스를 쉽게 식별하" +"고 네임스페이스 충돌을 피할 수 있습니다. 위의 예시에서, " +"``endpoint_type_look_addr`` 매크로는 ``helm-toolkit`` 차트에 정의되어 있습니" +"다. 이 매크로는 3개의 매개 변수를 전달합니다(Helm이 사용하는 go/sprig " +"templating 라이브러리에 내장 된 ``tuple`` 메소드에 의해 지원됩니다):" + +msgid "" +"As part of Neutron chart, this daemonset is running Neutron OVS agent. It is " +"dependent on having :code:`openvswitch-db` and :code:`openvswitch-vswitchd` " +"deployed and ready. Since its the default choice of the networking backend, " +"all configuration is in place in `neutron/values.yaml`. :code:`neutron-ovs-" +"agent` should not be deployed when another SDN is used in `network.backend`." +msgstr "" +"Neutron 차트의 일부로서, 이 데몬세트는 Neutron OVS 에이전트를 실행 중입니다. " +"그것은 :code:`openvswitch-db`와 :code:`openvswitch-vswitchd`를 배포하고 준비" +"하는데 의존합니다. 네트워킹 백엔드 기본 선택이기 때문에, 모든 구성은 " +"`neutron/values.yaml`에 위치합니다. 다른 SDN이 `network.backend`에서 사용될 " +"때 :code:`neutron-ovs-agent`를 배포해서는 안됩니다." + +msgid "" +"By default, each endpoint is located in the same namespace as the current " +"service's helm chart. To connect to a service which is running in a " +"different Kubernetes namespace, a ``namespace`` can be provided to each " +"individual endpoint." +msgstr "" +"기본적으로, 각 endpoint는 현재 서비스의 helm 차트와 같은 네임스페이스에 위치" +"합니다. 다른 Kubernetes 네임스페이스에서 실행중인 서비스에 접속하기 위해, " +"``namespace`` 는 각 endpoint에 제공될 수 있습니다." 
+ +msgid "" +"Charts should not use hard coded values such as ``http://keystone-api:5000`` " +"because these are not compatible with operator overrides and do not support " +"spreading components out over various namespaces." +msgstr "" +"차트는 ``http://keystone-api:5000`` 과 같은 하드 코딩된 값을 사용하면 안됩니" +"다. 왜냐하면 이 값들은 운영자의 오버라이드와 호환되지 않으며 다양한 네임스페" +"이스로 확장되는 구성요소들을 지원하지 않기 때문입니다." + +msgid "" +"Configuration of OVS bridges can be done via `neutron/templates/bin/_neutron-" +"openvswitch-agent-init.sh.tpl`. The script is configuring the external " +"network bridge and sets up any bridge mappings defined in :code:`network." +"auto_bridge_add`. These values should be align with :code:`conf.plugins." +"openvswitch_agent.ovs.bridge_mappings`." +msgstr "" +"OVS 브리지의 구성은 `neutron/templates/bin/_neutron-openvswitch-agent-init." +"sh.tpl`을 통해 완료될 수 있습니다. 이 스크립트는 외부 네트워크 브리지를 구성" +"하고 :code:`network.auto_bridge_add`에 정의된 브리지 매핑을 설정합니다. 이 값" +"들은 :code:`conf.plugins.openvswitch_agent.ovs.bridge_mappings`에 맞추어야 합" +"니다." + +msgid "" +"Configure neutron-server with SDN specific core_plugin/mechanism_drivers." +msgstr "" +"SDN 특정 core_plugin/mechanism_drivers를 사용하여 neutron-server를 구성하십시" +"오." + +msgid "Configuring network plugin" +msgstr "네트워크 플러그인 구성" + +msgid "Contents:" +msgstr "목차:" + +msgid "Create separate chart with new SDN deployment method." +msgstr "새로운 SDN 배포 방법으로 별도의 차트를 생성하십시오." + +msgid "" +"Currently OpenStack-Helm supports OpenVSwitch and LinuxBridge as a network " +"virtualization engines. In order to support many possible backends (SDNs), " +"modular architecture of Neutron chart was developed. OpenStack-Helm can " +"support every SDN solution that has Neutron plugin, either core_plugin or " +"mechanism_driver." +msgstr "" +"현재 OpenStack-Helm은 OpenVSwitch 및 LinuxBridge를 네트워크 가상화 엔진으로 " +"지원합니다. 가능한 많은 backend(SDNs)들을 지원하기 위해, Neutron 차트의 모듈" +"식 구조가 개발되었습니다. OpenStack-Helm은 core_plugin 또는 mechanism_driver" +"인 Neutron 플러그인이 있는 모든 SDN 솔루션을 지원할 수 있습니다." 
+ +msgid "DHCP - auto-assign IP address and DNS info" +msgstr "DHCP - IP 주소와 DNS 정보 자동 할당" + +msgid "" +"DHCP agent is running dnsmasq process which is serving the IP assignment and " +"DNS info. DHCP agent is dependent on the L2 agent wiring the interface. So " +"one should be aware that when changing the L2 agent, it also needs to be " +"changed in the DHCP agent. The configuration of the DHCP agent includes " +"option `interface_driver`, which will instruct how the tap interface created " +"for serving the request should be wired." +msgstr "" +"DHCP 에이전트가 IP 할당 및 DNS 정보를 제공하는 dnsmasq 프로세스를 실행 중입니" +"다. DHCP 에이전트는 인터페이스를 연결하는 L2 에이전트에 의존합니다. 그래서 " +"L2 에이전트를 변경할 때 DHCP 에이전트에서 알고 있어야 하고, 또한 DHCP 에이전" +"트에서 변경될 필요가 있습니다. DHCP 에이전트 구성은 요청을 제공하기 위해 생성" +"된 탭 인터페이스가 어떻게 연결되어야 할지 지시할 interface_driver 옵션을 포함" +"합니다." + +msgid "Developer References" +msgstr "개발자 참고자료" + +msgid "" +"EFK (Elasticsearch, Fluent-bit & Fluentd, Kibana) based Logging Mechanism" +msgstr "EFK (Elasticsearch, Fluent-bit & Fluentd, Kibana) 기반 로깅 매커니즘" + +msgid "Endpoints" +msgstr "Endpoints" + +msgid "" +"Fluent-bit, Fluentd meet OpenStack-Helm's logging requirements for " +"gathering, aggregating, and delivering of logged events. Fluent-bit runs as " +"a daemonset on each node and mounts the `/var/lib/docker/containers` " +"directory. The Docker container runtime engine directs events posted to " +"stdout and stderr to this directory on the host. Fluent-bit then forward the " +"contents of that directory to Fluentd. Fluentd runs as deployment at the " +"designated nodes and expose service for Fluent-bit to forward logs. Fluentd " +"should then apply the Logstash format to the logs. Fluentd can also write " +"kubernetes and OpenStack metadata to the logs. Fluentd will then forward the " +"results to Elasticsearch and to optionally Kafka. Elasticsearch indexes the " +"logs in a logstash-* index by default. Kafka stores the logs in a ``logs`` " +"topic by default. 
Any external tool can then consume the ``logs`` topic." +msgstr "" +"Fluent-bit, Fluentd는 기록된 이벤트의 수집, 집계, 그리고 전달을 위한 로깅 요" +"구사항을 충족합니다. Fluent-bit는 각 노드에서 daemonset으로 실행하고 `/var/" +"lib/docker/containers` 디렉터리를 마운트합니다. Docker 컨테이너 런타임 엔진" +"은 stdout과 stderr에 게시된 이벤트를 호스트의 상기 디렉터리로 보냅니다. 그런 " +"다음 Fluent-bit는 해당 디렉터리의 내용을 Fluentd로 전달합니다. Fluentd는 지정" +"된 노드에서 배포로 동작하며 Fluent-bit가 로그를 전달할 수 있도록 서비스를 노" +"출합니다. 그런 다음 Fluentd는 Logstash 포맷을 로그에 적용해야 합니다. " +"Fluentd는 또한 kubernetes와 OpenStack의 메타데이터를 로그에 작성합니다. " +"Fluentd는 그 결과를 Elasticsearch와 선택적으로 Kafka에 전달합니다. " +"Elasticsearch는 기본적으로 로그를 logstash-* 인덱스로 색인합니다. Kafka는 기" +"본적으로 로그를 ``logs`` 주제로 보관합니다. 그러면 외부 도구가 ``logs`` 주제" +"를 사용할 수 있습니다." + +msgid "" +"For instance, in the Neutron chart ``values.yaml`` the following endpoints " +"are defined:" +msgstr "" +"예를 들어, Neutron 차트 ``values.yaml`` 에서 다음과 같은 endpoints가 정의됩니" +"다:" + +msgid "Host overrides supercede label overrides" +msgstr "호스트 오버라이드는 라벨 오버라이드보다 우선합니다." + +msgid "" +"If :code:`.Values.manifests.daemonset_ovs_agent` will be set to false, " +"neutron ovs agent would not be launched. In that matter, other type of L2 or " +"L3 agent on compute node can be run." +msgstr "" +":code:`.Values.manifests.daemonset_ovs_agent` 가 false로 설정된다면, neutron " +"ovs 에이전트는 시작되지 않을 것입니다. 그러면, 컴퓨트 노드에서 다른 타입의 " +"L2나 L3 에이전트가 실행될 수 있습니다." + +msgid "If required, add new networking agent label type." +msgstr "필요하다면, 새로운 네트워킹 에이전트 라벨 타입을 추가하십시오." + +msgid "" +"If the SDN implements its own version of L3 networking, neutron-l3-agent " +"should not be started." +msgstr "" +"SDN이 자체적인 L3 네트워킹 버전을 구현한다면, neutron-l3-agent는 시작되어서" +"는 안됩니다." 
+ +msgid "" +"If the SDN of your choice is using the ML2 core plugin, then the extra " +"options in `neutron/ml2/plugins/ml2_conf.ini` should be configured:" +msgstr "" +"선택한 SDN이 ML2 코어 플러그인을 사용하고 있다면, `neutron/ml2/plugins/" +"ml2_conf.ini` 의 추가 옵션이 구성되어야 합니다:" + +msgid "Images" +msgstr "이미지" + +msgid "" +"In ``values.yaml`` in each chart, the same defaults are supplied in every " +"chart, which allows the operator to override at upgrade or deployment time." +msgstr "" +"각 차트의 ``values.yaml`` 에는, 업그레이드나 배포 시기에 운영자의 오버라이드" +"가 허용되는 동일한 기본값이 모든 차트에 제공됩니다." + +msgid "" +"In order to add support for more SDNs, these steps need to be performed:" +msgstr "더 많은 SDN들을 추가하기 위해서, 이 과정들을 수행할 필요가 있습니다:" + +msgid "" +"In order to meet modularity criteria of Neutron chart, section `manifests` " +"in :code:`neutron/values.yaml` contains boolean values describing which " +"Neutron's Kubernetes resources should be deployed:" +msgstr "" +"Neutron 차트의 모듈성 기준을 충족하기 위해서, :code:`neutron/values.yaml`의 " +"섹션 매니페스트는 어떤 Neutron의 Kubernetes 리소스가 배포되어야 하는지 서술하" +"는 boolean 값들을 포함하고 있습니다:" + +msgid "" +"In order to use linuxbridge in your OpenStack-Helm deployment, you need to " +"label the compute and controller/network nodes with `linuxbridge=enabled` " +"and use this `neutron/values.yaml` override:" +msgstr "" +"OpenStack-Helm 배치에서 linuxbridge를 사용하기 위해, compute와 controller/" +"network 노드를 `linuxbridge=enabled`로 라벨할 필요가 있고 이것을 `neutron/" +"values.yaml`에 오버라이드하여 사용합니다." + +msgid "" +"Introducing a new SDN solution should consider how the above services are " +"provided. It maybe required to disable built-in Neutron functionality." +msgstr "" +"새로운 SDN 솔루션 도입은 위의 서비스가 어떻게 제공되는지를 고려해야합니다. " +"Neutron 내장 함수를 비활성화가 필요할 수도 있습니다." + +msgid "" +"L3 agent is serving the routing capabilities for Neutron networks. It is " +"also dependent on the L2 agent wiring the tap interface for the routers." +msgstr "" +"L3 에이전트는 Neutron 네트워크를 위한 라우팅 능력을 제공하고 있습니다. 또한 " +"라우터를 위한 탭 인터페이스를 연결하는 L2 에이전트에 의존합니다." 
+ +msgid "L3 routing - creation of routers" +msgstr "L3 라우팅 - 라우터 생성" + +msgid "Linuxbridge" +msgstr "Linuxbridge" + +msgid "" +"Linuxbridge is the second type of Neutron reference architecture L2 agent. " +"It is running on nodes labeled `linuxbridge=enabled`. As mentioned before, " +"all nodes that are requiring the L2 services need to be labeled with " +"linuxbridge. This includes both the compute and controller/network nodes. It " +"is not possible to label the same node with both openvswitch and linuxbridge " +"(or any other network virtualization technology) at the same time." +msgstr "" +"Linuxbridge는 Neutron 참조 구조 L2 에이전트의 두 번째 타입입니다. 그것은 " +"`linuxbridge=enabled`로 라벨된 노드에서 실행 중입니다. 앞서 언급했듯이, L2 서" +"비스를 요구하는 모든 노드는 linuxbridge로 라벨될 필요가 있습니다. 이는 " +"compute와 controller/network 노드 모두를 포함합니다. 같은 노드에 openvswitch" +"와 linuxbridge (또는 다른 네트워크 가상화 기술)를 동시에 라벨하는 것은 불가능" +"합니다." + +msgid "Logging Mechanism" +msgstr "로깅 매커니즘" + +msgid "Logging Requirements" +msgstr "로깅 요구사항" + +msgid "Metadata - Provide proxy for Nova metadata service" +msgstr "메타데이터 - Nova 메타데이터 서비스를 위한 프록시 제공" + +msgid "" +"Metadata-agent is a proxy to nova-metadata service. This one provides " +"information about public IP, hostname, ssh keys, and any tenant specific " +"information. The same dependencies apply for metadata as it is for DHCP and " +"L3 agents. Other SDNs may require to force the config driver in nova, since " +"the metadata service is not exposed by it." +msgstr "" +"Metadata-agent는 nova-metadata 서비스에 대한 프록시입니다. 이것은 공용 IP, 호" +"스트명, ssh 키, 그리고 tenant 특정 정보에 대한 정보를 제공합니다. DHCP와 L3 " +"에이전트와 마찬가지로 메타데이터에도 동일한 의존성을 적용합니다. 다른 SDN은 " +"메타데이터 서비스가 노출되어 있지 않으므로 nova에서 드라이버 구성을 강제할 것" +"을 요구할지도 모릅니다." 
+ +msgid "Networking" +msgstr "네트워킹" + +msgid "Neutron architecture" +msgstr "Neutron 구조" + +msgid "Neutron chart includes the following services:" +msgstr "Neutron 차트는 다음의 서비스들을 포함합니다:" + +msgid "" +"Neutron-server service is scheduled on nodes with `openstack-control-" +"plane=enabled` label." +msgstr "" +"Neutron-server 서비스는 `openstack-control-plane=enabled` 라벨로 노드에 스케" +"줄됩니다." + +msgid "Node and label specific configurations" +msgstr "노드와 라벨 특정 구성" + +msgid "Note that only one set of overrides is applied per node, such that:" +msgstr "" +"노드마다 다음과 같이 단 하나의 오버라이드 세트만 적용되는 점을 참고하십시오:" + +msgid "" +"Note that some additional values have been injected into the config file, " +"this is performed via statements in the configmap template, which also calls " +"the ``helm-toolkit.utils.to_oslo_conf`` to convert the yaml to the required " +"layout:" +msgstr "" +"구성 파일에 몇 가지 추가 값들이 주입되었음을 참고하십시오, 이는 configmap 템" +"플릿의 문장을 통해 수행되며, 또한 yaml을 필요한 레이아웃으로 변환하기 위해 " +"``helm-toolkit.utils.to_oslo_conf`` 를 호출합니다:" + +msgid "" +"Note: Rolling update values can conflict with values defined in each " +"service's PodDisruptionBudget. See `here `_ for more " +"information." +msgstr "" +"참고: 롤링 업데이트 값은 각 서비스의 PodDisruptionBudget에 정의된 값들과 충돌" +"할 수 있습니다. 자세한 정보는 `여기 `_ 를 참조하십시오." + +msgid "Nova config dependency" +msgstr "Nova config 의존성" + +msgid "OSLO-Config Values" +msgstr "OSLO-Config 값" + +msgid "" +"OpenStack-Helm defines a centralized logging mechanism to provide insight " +"into the state of the OpenStack services and infrastructure components as " +"well as underlying Kubernetes platform. Among the requirements for a logging " +"platform, where log data can come from and where log data need to be " +"delivered are very variable. To support various logging scenarios, OpenStack-" +"Helm should provide a flexible mechanism to meet with certain operation " +"needs." 
+msgstr "" +"OpenStack-Helm은 OpenStack 서비스와 인프라 구성요소의 상태 뿐만 아니라 기본 " +"Kubernetes 플랫폼에 대한 통찰력을 제공하는 중앙 집중식 로깅 메커니즘을 정의합" +"니다. 로깅 플랫폼에 대한 요구사항 중 로그 데이터를 가져올 곳과 로그 데이터를 " +"전달되어야 할 곳은 매우 다양합니다. 다양한 로깅 시나리오를 지원하기 위해서, " +"OpenStack-Helm은 특정 작업 요구를 충족할 수 있는 유연한 메커니즘을 제공해야 " +"합니다." + +msgid "" +"OpenStack-Helm generates oslo-config compatible formatted configuration " +"files for services dynamically from values specified in a yaml tree. This " +"allows operators to control any and all aspects of an OpenStack services " +"configuration. An example snippet for an imaginary Keystone configuration is " +"described here:" +msgstr "" +"OpenStack-Helm은 yaml 트리에 지정된 값에서 동적으로 서비스를 위한 oslo-" +"config 호환 형식의 구성 파일을 생성합니다. 이는 운영자가 OpenStack 서비스 구" +"성 전반을 제어할 수 있도록 합니다. 가상의 키스톤 구성에 대한 짧은 예시가 여" +"기 있습니다:" + +msgid "" +"OpenStack-Helm leverages PodDistruptionBudgets to enforce quotas that ensure " +"that a certain number of replicas of a pod are available at any given time. " +"This is particularly important in the case when a Kubernetes node needs to " +"be drained." +msgstr "" +"OpenStack-Helm은 주어진 시간에 특정 수의 pod 복제본의 가용성을 보장하도록 할" +"당량을 강제하기 위해 PodDistruptionBudget을 활용합니다. 이는 Kubernetes 노드" +"가 빠져야 될 필요가 있는 경우에 특히 중요합니다." + +msgid "" +"OpenStack-Helm provides fast and lightweight log forwarder and full featured " +"log aggregator complementing each other providing a flexible and reliable " +"solution. Especially, Fluent-bit is used as a log forwarder and Fluentd is " +"used as a main log aggregator and processor." +msgstr "" +"OpenStack-Helm은 신속하고 가벼운 로그 전달자와 완전한 기능을 갖춘 로그 수집기" +"를 제공하여 서로 보완하여 유연하고 신뢰할 수 있는 솔루션을 제공합니다. 특히, " +"Fluent-bit는 로그 전달자로 사용되고 Fluentd는 주 로그 수집기와 프로세서로 사" +"용됩니다." 
+ +msgid "OpenVSwitch" +msgstr "OpenVSwitch" + +msgid "Other SDNs" +msgstr "다른 SDN들" + +msgid "Other networking services provided by Neutron are:" +msgstr "Neutron에 의해 제공되는 다른 네트워킹 서비스:" + +msgid "Pod Disruption Budgets" +msgstr "Pod 중단 비용" + +msgid "" +"SDNs implementing ML2 driver can add extra/plugin-specific configuration " +"options in `neutron/ml2/plugins/ml2_conf.ini`. Or define its own " +"`ml2_conf_.ini` file where configs specific to the SDN would be placed." +msgstr "" +"ML2 드라이버를 구현하는 SDN은 extra/plugin-specific 구성 옵션을 `neutron/ml2/" +"plugins/ml2_conf.ini`에 추가할 수 있습니다. 또는 SDN의 고유 구성이 배치될 자" +"체 `ml2_conf_.ini` 파일을 정의하십시오." + +msgid "" +"Script in :code:`neutron/templates/bin/_neutron-openvswitch-agent-init.sh." +"tpl` is responsible for determining the tunnel interface and its IP for " +"later usage by :code:`neutron-ovs-agent`. The IP is set in init container " +"and shared between init container and main container with :code:`neutron-ovs-" +"agent` via file :code:`/tmp/pod-shared/ml2-local-ip.ini`." +msgstr "" +":code:`neutron/templates/bin/_neutron-openvswitch-agent-init.sh.tpl`의 스크립" +"트는 :code:`neutron-ovs-agent`에 의한 나중의 사용을 위한 터널 인터페이스와 IP" +"를 결정할 책임이 있습니다. IP는 init 컨테이너에 설정되고 init 컨테이너와 " +"main 컨테이너 사이에서 :code:`/tmp/pod-shared/ml2-local-ip.ini`을 통한 :code:" +"`neutron-ovs-agent`로 공유됩니다." + +msgid "" +"Specify if new SDN would like to use existing services from Neutron: L3, " +"DHCP, metadata." +msgstr "" +"새로운 SDN이 Neutron의 기존 서비스들을 사용할지 명시하십시오: L3, DHCP, 메타" +"데이터." + +msgid "" +"The Neutron reference architecture provides mechanism_drivers :code:" +"`OpenVSwitch` (OVS) and :code:`linuxbridge` (LB) with ML2 :code:" +"`core_plugin` framework." +msgstr "" +"Neutron 참조 구조는 ML2 :code:`core_plugin` 프레임워크와 함께 :code:" +"`OpenVSwitch` (OVS) 와 :code:`linuxbridge` (LB) mechanism_drivers를 제공합니" +"다." 
+ +msgid "" +"The OpenStack-Helm project also implements annotations across all chart " +"configmaps so that changing resources inside containers, such as " +"configuration files, triggers a Kubernetes rolling update. This means that " +"those resources can be updated without deleting and redeploying the service " +"and can be treated like any other upgrade, such as a container image change." +msgstr "" +"OpenStack-Helm 프로젝트는 또한 모든 차트 configmap에 annotations를 구현하여 " +"구성 파일과 같은 컨테이너 내부의 변경 리소스가 Kubernetes 롤링 업데이트를 촉" +"발합니다. 이는 서비스를 삭제하거나 재배포하지 않고 해당 리소스들에 대한 업데" +"이트를 할 수 있으며 컨테이너 이미비 변경과 같은 다른 업그레이드와 같이 처리 " +"할 수 있음을 의미합니다." + +msgid "" +"The OpenStack-Helm project assumes all upgrades will be done through Helm. " +"This includes handling several different resource types. First, changes to " +"the Helm chart templates themselves are handled. Second, all of the " +"resources layered on top of the container image, such as ``ConfigMaps`` " +"which includes both scripts and configuration files, are updated during an " +"upgrade. Finally, any image references will result in rolling updates of " +"containers, replacing them with the updating image." +msgstr "" +"OpenStack-Helm 프로젝트는 모든 업그레이드가 Helm을 통해 수행된다고 가정합니" +"다. 이는 몇몇 다른 리소스 타입의 처리를 포함합니다. 첫째, Helm 차트 템플릿 자" +"체의 변경이 처리됩니다. 둘째, ``ConfigMaps`` 와 같이 컨테이너 이미지 위에 계" +"층화된 모든 리소스가 업그레이드 중에 업데이트 됩니다. 마지막으로, 모든 이미" +"지 참조는 컨테이너의 롤링 업데이트를 가져오고, 업데이트한 이미지로 대체합니" +"다." + +msgid "" +"The OpenStack-Helm project today uses a mix of Docker images from " +"Stackanetes and Kolla, but will likely standardize on a default set of " +"images for all charts without any reliance on image-specific utilities." +msgstr "" +"OpenStack-Helm 프로젝트는 현재 Stackanetes와 Kolla의 Docker 이미지를 혼합하" +"여 사용하지만, 이미지 특정 유틸리티에 의존하지 않고 모든 차트를 위한 기본 이" +"미지 세트를 표준화할 것입니다." 
+ +msgid "" +"The ``hash`` function defined in the ``helm-toolkit`` chart ensures that any " +"change to any file referenced by configmap-bin.yaml or configmap-etc.yaml " +"results in a new hash, which will then trigger a rolling update." +msgstr "" +"``helm-toolkit`` 차트에서 정의된 ``hash`` 함수는 configmap-bin.yaml 또는 " +"configmap-etc.yaml에서 참조되는 모든 파일에 대한 변화가 롤링 업데이트를 촉발" +"할 새로운 hash를 생성하는 것을 보장합니다." + +msgid "The above configuration options are handled by `neutron/values.yaml`:" +msgstr "위의 구성 옵션은 'neutrol/values.yaml` 에 의해 조절됩니다:" + +msgid "" +"The farther down the list the label appears, the greater precedence it has. " +"e.g., \"another-label\" overrides will apply to a node containing both " +"labels." +msgstr "" +"라벨이 표시된 목록이 낮을수록, 우선순위가 높아집니다. 예를 들어, \\\"another-" +"label\\\" 의 오버라이드는 두 라벨 모두를 포함한 노드에 적용될 것입니다. " + +msgid "" +"The following standards are in use today, in addition to any components " +"defined by the service itself:" +msgstr "" +"서비스 자체로 정의된 구성요소 외에도, 다음의 표준들이 현재 사용되고 있습니다." + +msgid "" +"The macros that help translate these into the actual URLs necessary are " +"defined in the ``helm-toolkit`` chart. For instance, the cinder chart " +"defines a ``glance_api_servers`` definition in the ``cinder.conf`` template:" +msgstr "" +"이들을 실제 필요한 URL로 변환하는 데 도움이 되는 매크로는 ``helm-toolkit`` 차" +"트에 정의되어 있습니다. 예를 들어, cinder 차트는 ``glance_api_servers`` 정의" +"를 ``cinder.conf`` 템플릿에 정의합니다:" + +msgid "" +"The ovs set of daemonsets are running on the node labeled " +"`openvswitch=enabled`. This includes the compute and controller/network " +"nodes. For more flexibility, OpenVSwitch as a tool was split out of Neutron " +"chart, and put in separate chart dedicated OpenVSwitch. Neutron OVS agent " +"remains in Neutron chart. Splitting out the OpenVSwitch creates " +"possibilities to use it with different SDNs, adjusting the configuration " +"accordingly." +msgstr "" +"데몬세트의 ovs 세트는 `openvswitch=enabled`로 라벨된 노드에서 실행 중입니다. " +"이는 compute와 controller/network 노드를 포함합니다. 
유연성을 높이기 위해서, " +"도구로서의 OpenVSwitch는 Neutron 차트에서 분리되어 OpenVSwitch 전용의 별도 " +"차트에 넣어졌습니다. Neutron OVS 에이전트는 Neutron 차트에 남아있습니다. " +"OpenVSwitch를 분리하면 구성을 적절히 조정하여 다른 SDN과 함께 사용할 수 있는 " +"가능성이 생깁니다." + +msgid "" +"The project's core philosophy regarding images is that the toolsets required " +"to enable the OpenStack services should be applied by Kubernetes itself. " +"This requires OpenStack-Helm to develop common and simple scripts with " +"minimal dependencies that can be overlaid on any image that meets the " +"OpenStack core library requirements. The advantage of this is that the " +"project can be image agnostic, allowing operators to use Stackanetes, Kolla, " +"LOCI, or any image flavor and format they choose and they will all function " +"the same." +msgstr "" +"이미지에 관한 프로젝트의 핵심 철학은 OpenStack 서비스를 활성화하는 데 필요한 " +"도구모음이 Kubernetes 자체에서 적용되어야 한다는 것입니다. 이는 OpenStack-" +"Helm에 OpenStack 핵심 라이브러리의 요구사항을 충족하는 이미지들에 중첩될 수 " +"있는 최소한의 의존성으로 일반적이고 단순한 스크립트를 개발할 것을 요구합니" +"다. 이것의 장점은 운영자들이 Stackanetes, Kolla, LOCI, 또는 그들이 선택하고 " +"동일하게 기능할 이미지 flavor와 포맷을 사용하는 것을 허락하여 프로젝트가 이미" +"지에서 자유로워지는 것입니다." + +msgid "" +"The project's goal is to provide a consistent mechanism for endpoints. " +"OpenStack is a highly interconnected application, with various components " +"requiring connectivity details to numerous services, including other " +"OpenStack components and infrastructure elements such as databases, queues, " +"and memcached infrastructure. The project's goal is to ensure that it can " +"provide a consistent mechanism for defining these \"endpoints\" across all " +"charts and provide the macros necessary to convert those definitions into " +"usable endpoints. The charts should consistently default to building " +"endpoints that assume the operator is leveraging all charts to build their " +"OpenStack cloud. 
Endpoints should be configurable if an operator would like " +"a chart to work with their existing infrastructure or run elements in " +"different namespaces." +msgstr "" +"프로젝트의 목표는 endpoints에 대한 일관된 메커니즘을 제공하는 것입니다. " +"OpenStack은 다른 OpenStack 구성요소 와 데이터베이스, 큐, 그리고 memcached 인" +"프라와 같은 인프라 요소를 포함하여 다양한 서비스에 대한 연결 세부 사항을 필요" +"로 하는 다양한 구성요소와 고도로 상호 연결된 응용 프로그램입니다. 프로젝트의 " +"목표는 모든 차트에서 이러한 \"endpoints\"를 정의하는 일관된 메커니즘을 제공하" +"고 이러한 정의를 유용한 endpoints로 변환하는 데 필요한 매크로를 제공하는 것입" +"니다. 차트는 운영자가 그들의 OpenStack 클라우드를 구축하기 위하여 모든 차트" +"를 활용하고 있다고 가정하고 endpoints 구축을 일관되게 기본값으로 설정해야 합" +"니다. 운영자가 차트를 그들의 기존 인프라와 함께 사용하기를 원하거나 다른 네임" +"스페이스의 요소를 실행하려면 Endpoints를 구성할 수 있어야 합니다." + +msgid "" +"The resulting logs can then be queried directly through Elasticsearch, or " +"they can be viewed via Kibana. Kibana offers a dashboard that can create " +"custom views on logged events, and Kibana integrates well with Elasticsearch " +"by default." +msgstr "" +"로그의 결과는 Elasticsearch를 통해 직접 쿼리하거나 Kibana를 통해 볼 수 있습니" +"다. Kibana는 기록된 이벤트에 대한 사용자 정의 뷰를 만들 수 있는 대시보드를 제" +"공하며 Kibana는 기본적으로 Elasticsearch와 잘 통합됩니다." + +msgid "" +"There are situations where we need to define configuration differently for " +"different nodes in the environment. For example, we may require that some " +"nodes have a different vcpu_pin_set or other hardware specific deltas in " +"nova.conf." +msgstr "" +"환경의 각각의 노드들에 대해 구성을 다르게 정의해야 하는 상황이 있습니다. 예" +"를 들어, 어떤 노드가 nova.conf 에서 다른 vcpu_pin_set 또는 다른 하드웨어 특정 " +"delta를 갖도록 요구할 수 있습니다." + +msgid "" +"There is also a need for DHCP agent to pass ovs agent config file (in :code:" +"`neutron/templates/bin/_neutron-dhcp-agent.sh.tpl`):" +msgstr "" +"또한 DHCP 에이전트가 ovs 에이전트 구성 파일을 전달할 필요가 있습니다(:code:" +"`neutron/templates/bin/_neutron-dhcp-agent.sh.tpl` 내):" + +msgid "" +"These quotas are configurable by modifying the ``minAvailable`` field within " +"each PodDistruptionBudget manifest, which is conveniently mapped to a " +"templated variable inside the ``values.yaml`` file. 
The ``min_available`` " +"within each service's ``values.yaml`` file can be represented by either a " +"whole number, such as ``1``, or a percentage, such as ``80%``. For example, " +"when deploying 5 replicas of a pod (such as keystone-api), using " +"``min_available: 3`` would enforce policy to ensure at least 3 replicas were " +"running, whereas using ``min_available: 80%`` would ensure that 4 replicas " +"of that pod are running." +msgstr "" +"이러한 할당량은 ``values.yaml`` 파일 내부의 템플릿 변수에 편리하게 매핑된 각 " +"PodDistruptionBudget 매니페스트 내의 ``minAvailable`` 필드를 수정하여 구성할 " +"수 있습니다. 각 서비스의 ``values.yaml`` 파일 내 ``min_available`` 는 ``1`` " +"과 같은 정수나 ``80%`` 와 같은 퍼센트로 표현될 수 있습니다. 예를 들어, 5개의 " +"pod 복제본(keystone-api)을 배포할 때, ``min_available: 3``을 사용하면 적어도 " +"3개의 복제본의 실행을 보장하도록 정책을 강제하는 반면, ``min_available: " +"80%`` 을 사용하면 4개의 복제본의 실행을 보장할 수 있습니다." + +msgid "" +"These values define all the endpoints that the Neutron chart may need in " +"order to build full URL compatible endpoints to various services. Long-term, " +"these will also include database, memcached, and rabbitmq elements in one " +"place. Essentially, all external connectivity can be be defined centrally." +msgstr "" +"이 값들은 Neutron 차트가 다양한 서비스에 URL과 완전 호환되는 endpoints를 구축" +"하는데 필요할 수 있는 모든 endpoints를 정의합니다. 장기적으로, 이들은 또한 데" +"이터베이스, memcached, 그리고 rabbitmq 요소들을 한 곳에 포함할 것입니다. 근본" +"적으로, 모든 외부 연결을 중앙에서 정의할 수 있습니다." + +msgid "" +"This daemonset includes the linuxbridge Neutron agent with bridge-utils and " +"ebtables utilities installed. This is all that is needed, since linuxbridge " +"uses native kernel libraries." +msgstr "" +"이 daemonset은 bridge-utils와 ebtables 유틸리티가 설치된 linuxbridge Neutron " +"에이전트를 포함합니다. linuxbridge가 네이티브 커널 라이브러리를 사용하기 때문" +"에 이 모든 것이 필요합니다." 
+ +msgid "This is accomplished with the following annotation:" +msgstr "이는 다음 annotation으로 수행됩니다:" + +msgid "" +"This option will allow to configure the Neutron services in proper way, by " +"checking what is the actual backed set in :code:`neutron/values.yaml`." +msgstr "" +"이 옵션은 :code:`neutron/values.yaml`의 실제 backend set이 무엇인지 확인하" +"여, Neuron 서비스를 적절한 방법으로 구성할 수 있도록 허락할 것입니다." + +msgid "" +"This requirement is OVS specific, the `ovsdb_connection` string is defined " +"in `openvswitch_agent.ini` file, specifying how DHCP agent can connect to " +"ovs. When using other SDNs, running the DHCP agent may not be required. When " +"the SDN solution is addressing the IP assignments in another way, neutron's " +"DHCP agent should be disabled." +msgstr "" +"이 요구사항은 OVS 특정이며, `ovsdb_connection` 문자열은 DHCP 에이전트가 어떻" +"게 ovs에 연결할 수 있는지 명세하는 `openvswitch_agent.ini` 파일에 정의됩니" +"다. 다른 SDN을 사용할 때, DHCP 에이전트 실행은 요구되지 않을 것입니다. SDN 솔" +"루션이 다른 방식으로 IP 할당을 지정할 때, neutron의 DHCP 에이전트를 비활성화" +"해야 합니다." + +msgid "" +"This runs the OVS tool and database. OpenVSwitch chart is not Neutron " +"specific, it may be used with other technologies that are leveraging the OVS " +"technology, such as OVN or ODL." +msgstr "" +"이것은 OVS 툴과 데이터베이스를 실행합니다. OpenVSwitch 차트는 특정 Neutron이 " +"아니라, OVN이나 ODL과 같은 OVS 기술을 활용하는 다른 기술들에 사용될 것입니다." + +msgid "" +"This will be consumed by the templated ``configmap-etc.yaml`` manifest to " +"produce the following config file:" +msgstr "" +"이것은 템플릿화 된 ``configmap-etc.yaml`` 매니페스트에 의해 다음의 구성 파일" +"을 생산하기 위해 소비될 것입니다:" + +msgid "" +"To be able to configure multiple networking plugins inside of OpenStack-" +"Helm, a new configuration option is added:" +msgstr "" +"OpenStack-Helm 내 다중 네트워킹 플러그인을 구성하는 것을 가능하게 하기 위해" +"서, 새로운 구성 옵션이 추가되었습니다:" + +msgid "" +"To do this, we can specify overrides in the values fed to the chart. Ex:" +msgstr "이를 위해, 차트에 입력된 값의 오버라이드를 명시할 수 있습니다. 
예시:" + +msgid "" +"To enable new SDN solution, there should be separate chart created, which " +"would handle the deployment of service, setting up the database and any " +"related networking functionality that SDN is providing." +msgstr "" +"새로운 SDN 솔루션을 가능하게 하기 위해 서비스 배포, 데이터베이스 설정, 그리" +"고 SDN이 제공하고 있는 관련된 모든 네트워크 기능을 처리하는 별도의 차트가 생" +"성되어 있어야 합니다." + +msgid "" +"To that end, all charts provide an ``images:`` section that allows operators " +"to override images. Also, all default image references should be fully " +"spelled out, even those hosted by Docker or Quay. Further, no default image " +"reference should use ``:latest`` but rather should be pinned to a specific " +"version to ensure consistent behavior for deployments over time." +msgstr "" +"이를 위해, 모든 차트는 운영자들이 이미지를 오버라이드할 수 있는 ``images`` 섹" +"션을 제공합니다. 또한, 모든 기본 이미지 참조는 Docker 또는 Quay에 의해 호스팅" +"되었더라도 완전히 작성되어야 합니다. 나아가, 어떤 기본 이미지 참조도 ``:" +"latest` 를 사용하면 안되지만, 대신에 배포 후에 일관된 동작을 보장하는 특정 버" +"전으로 고정되어야 합니다." + +msgid "" +"To use other Neutron reference architecture types of SDN, these options " +"should be configured in :code:`neutron.conf`:" +msgstr "" +"다른 Neutron 참조 구조 타입의 SDN을 사용하려면, 이러한 옵션을 neutron.conf에" +"서 구성해야 합니다:" + +msgid "" +"Today, the ``images:`` section has several common conventions. Most " +"OpenStack services require a database initialization function, a database " +"synchronization function, and a series of steps for Keystone registration " +"and integration. Each component may also have a specific image that composes " +"an OpenStack service. The images may or may not differ, but regardless, " +"should all be defined in ``images``." +msgstr "" +"현재, ``images:`` 섹션에는 몇가지 규칙이 있습니다. 대부분의 OpenStack 서비스" +"는 데이터베이스 초기화 기능, 데이터베이스 동기화 기능, 그리고 Keystone 등록" +"과 통합을 위한 단계를 요구합니다. 각 구성요소에는 OpenStack 서비스를 구성하" +"는 특정 이미지가 있을 수도 있습니다. 이미지는 다를 수 있지만, 이에 관계없이 " +"모두 ``images``에 정의되어야 합니다." 
+ +msgid "Typical networking API request is an operation of create/update/delete:" +msgstr "일반적인 네트워킹 API 요청은 생성/갱신/삭제 작업입니다:" + +msgid "Upgrades and Reconfiguration" +msgstr "업그레이드 및 재구성" + +msgid "" +"Whenever we change the L2 agent, it should be reflected in `nova/values." +"yaml` in dependency resolution for nova-compute." +msgstr "" +"L2 에이전트를 변경할 때마다, nova-compute를 위한 의존성 결정을 `nova/values." +"yaml` 에 반영해야 합니다." + +msgid "" +"``host1.fqdn`` with labels ``compute-type: dpdk, sriov`` and ``another-" +"label: another-value``:" +msgstr "" +"``compute-type: dpdk, sriov`` 와 ``another-label: another-value`` 라벨의 " +"``host1.fqdn`` :" + +msgid "" +"``host2.fqdn`` with labels ``compute-type: dpdk, sriov`` and ``another-" +"label: another-value``:" +msgstr "" +"``compute-type: dpdk, sriov`` 와 ``another-label: another-value`` 라벨의 " +"``host2.fqdn`` :" + +msgid "" +"``host3.fqdn`` with labels ``compute-type: dpdk, sriov`` and ``another-" +"label: another-value``:" +msgstr "" +"``compute-type: dpdk, sriov`` 와 ``another-label: another-value`` 라벨의 " +"``host3.fqdn`` :" + +msgid "``host4.fqdn`` with labels ``compute-type: dpdk, sriov``:" +msgstr "``compute-type: dpdk, sriov`` 라벨의 ``host4.fqdn`` :" + +msgid "``host5.fqdn`` with no labels:" +msgstr "라벨이 없는 ``host5.fqdn`` :" + +msgid "" +"api: This is the port to map to for the service. Some components, such as " +"glance, provide an ``api`` port and a ``registry`` port, for example." +msgstr "" +"api : 이것은 서비스에 매핑할 포트입니다. 예를 들어 glance와 같은 어떤 구성요" +"소들은 ``api`` 포트와 ``registry`` 포트를 제공합니다." + +msgid "" +"db\\_drop: The image that will perform database deletion operations for the " +"OpenStack service." +msgstr "db\\_drop: OpenStack 서비스를 위한 데이터베이스 삭제를 수행할 이미지" + +msgid "" +"db\\_init: The image that will perform database creation operations for the " +"OpenStack service." 
+msgstr "" +"db\\_init: OpenStack 서비스를 위한 데이터베이스 생성 작업을 수행할 이미지" + +msgid "" +"db\\_sync: The image that will perform database sync (schema initialization " +"and migration) for the OpenStack service." +msgstr "" +"db\\_sync: OpenStack 서비스를 위한 데이터베이스 동기화(초기화와 이동 스키마)" +"를 수행할 이미지" + +msgid "" +"dep\\_check: The image that will perform dependency checking in an init-" +"container." +msgstr "dep\\_check: init-container에서 의존성 검사를 수행할 이미지." + +msgid "" +"image: This is the OpenStack service that the endpoint is being built for. " +"This will be mapped to ``glance`` which is the image service for OpenStack." +msgstr "" +"image : 이것은 endpoint가 구축하고 있는 OpenStack 서비스입니다. 이것은 " +"OpenStack을 위한 이미지 서비스인 ``glance``로 매핑될 것입니다." + +msgid "" +"internal: This is the OpenStack endpoint type we are looking for - valid " +"values would be ``internal``, ``admin``, and ``public``" +msgstr "" +"internal : 이것은 우리가 찾고 있는 OpenStack endpoint 타입입니다. - 유효한 값" +"은 ``internal``, ``admin``, 그리고 ``public`` 입니다." + +msgid "" +"ks\\_endpoints: The image that will perform keystone endpoint registration " +"for the service." +msgstr "ks\\_endpoints: 서비스를 위한 키스톤 endpoint 등록을 수행할 이미지" + +msgid "" +"ks\\_service: The image that will perform keystone service registration for " +"the service." +msgstr "ks\\_service: 서비스를 위한 키스톤 사용자 등록을 수행할 이미지" + +msgid "" +"ks\\_user: The image that will perform keystone user creation for the " +"service." +msgstr "ks\\_user : 서비스를 위한 키스톤 사용자 생성을 수행할 이미지" + +msgid "network" +msgstr "네트워크" + +msgid "neutron-dhcp-agent" +msgstr "neutron-dhcp-agent" + +msgid "" +"neutron-dhcp-agent service is scheduled to run on nodes with the label " +"`openstack-control-plane=enabled`." +msgstr "" +"neutron-dhcp-agent 서비스는 `openstack-control-plane=enabled` 라벨로 노드에" +"서 실행되도록 스케줄됩니다." + +msgid "neutron-l3-agent" +msgstr "neutron-l3-agent" + +msgid "" +"neutron-l3-agent service is scheduled to run on nodes with the label " +"`openstack-control-plane=enabled`." 
+msgstr "" +"neutron-l3-agent 서비스는 `openstack-control-plane=enabled` 라벨로 노드에서 " +"실행되도록 스케줄됩니다." + +msgid "neutron-lb-agent" +msgstr "neutron-lb-agent" + +msgid "neutron-metadata-agent" +msgstr "neutron-metadata-agent" + +msgid "" +"neutron-metadata-agent service is scheduled to run on nodes with the label " +"`openstack-control-plane=enabled`." +msgstr "" +"neutron-metadata-agent 서비스는 `openstack-control-plane=enabled` 라벨로 노드" +"에서 실행되도록 스케줄됩니다." + +msgid "neutron-ovs-agent" +msgstr "neutron-ovs-agent" + +msgid "neutron-server" +msgstr "neutron-server" + +msgid "" +"neutron-server is serving the networking REST API for operator and other " +"OpenStack services usage. The internals of Neutron are highly flexible, " +"providing plugin mechanisms for all networking services exposed. The " +"consistent API is exposed to the user, but the internal implementation is up " +"to the chosen SDN." +msgstr "" +"neutron-server는 운영자와 다른 OpenStack 서비스 사용을 위한 네트워킹 REST API" +"를 제공합니다. Neutron 내부 구조는 유연성이 뛰어나, 노출된 모든 네트워킹 서비" +"스를 위한 플러그인 매커니즘을 제공합니다. 일관된 API는 사용자에게 노출되지" +"만, 내부 구현은 선택된 SDN가 결정합니다." + +msgid "openvswitch-db and openvswitch-vswitchd" +msgstr "openvswitch-db와 openvswitch-vswitchd" + +msgid "port" +msgstr "포트" + +msgid "" +"pull\\_policy: The image pull policy, one of \"Always\", \"IfNotPresent\", " +"and \"Never\" which will be used by all containers in the chart." +msgstr "" +"pull\\_policy: 차트 내 모든 컨테이너가 사용할 \"Always\", \"IfNotPresent\", " +"그리고 \"Never\" 중 하나의 이미지 pull 정책" + +msgid "subnet" +msgstr "서브넷" diff --git a/doc/source/locale/ko_KR/LC_MESSAGES/doc-troubleshooting.po b/doc/source/locale/ko_KR/LC_MESSAGES/doc-troubleshooting.po new file mode 100644 index 0000000000..c03452e77b --- /dev/null +++ b/doc/source/locale/ko_KR/LC_MESSAGES/doc-troubleshooting.po @@ -0,0 +1,209 @@ +# Soonyeul Park , 2018. 
#zanata +msgid "" +msgstr "" +"Project-Id-Version: openstack-helm\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2018-09-29 05:49+0000\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=UTF-8\n" +"Content-Transfer-Encoding: 8bit\n" +"PO-Revision-Date: 2018-09-29 01:07+0000\n" +"Last-Translator: Soonyeul Park \n" +"Language-Team: Korean (South Korea)\n" +"Language: ko_KR\n" +"X-Generator: Zanata 4.3.3\n" +"Plural-Forms: nplurals=1; plural=0\n" + +msgid "Backing up a PVC" +msgstr "PVC 백업" + +msgid "" +"Backing up a PVC stored in Ceph, is fairly straigthforward, in this example " +"we use the PVC ``mysql-data-mariadb-server-0`` as an example, but this will " +"also apply to any other services using PVCs eg. RabbitMQ, Postgres." +msgstr "" +"Ceph에 저장된 PVC를 백업하는 것은 아주 간단합니다. 본 예시에서 PVC의 ``mysql-" +"data-mariadb-server-0`` 을 예로 들겠지만, 이는 또한 RabbitMQ, Postgres 와 같" +"은 다른 서비스에도 적용 가능할 것입니다." + +msgid "" +"Before proceeding, it is important to ensure that you have deployed a client " +"key in the namespace you wish to fulfill ``PersistentVolumeClaims``. To " +"verify that your deployment namespace has a client key:" +msgstr "" +"진행하기 전에, ``PersistentVolumeClaims`` 를 만족하기를 원하는 네임스페이스" +"에 클라이언트 키를 배포했는지 보장하는 것이 중요합니다. 네임스페이스가 클라이" +"언트 키를 가지고 있는지에 대한 검증은 다음과 같습니다:" + +msgid "Bugs and Feature requests" +msgstr "버그와 기능 요청" + +msgid "Ceph" +msgstr "Ceph" + +msgid "Ceph Deployment Status" +msgstr "Ceph 배포 상태" + +msgid "Ceph Validating PVC Operation" +msgstr "Ceph의 PVC 작업 검증" + +msgid "Ceph Validating StorageClass" +msgstr "Ceph의 StorageClass 검증" + +msgid "Channels" +msgstr "채널" + +msgid "Database Deployments" +msgstr "데이터베이스 배포" + +msgid "" +"First, we want to validate that Ceph is working correctly. This can be done " +"with the following Ceph command:" +msgstr "" +"먼저, Ceph가 정확하게 작동하고 있는지 확인하고 싶습니다. 
이는 다음과 같은 " +"Ceph 명령을 통해 수행될 수 있습니다:" + +msgid "Galera Cluster" +msgstr "Galera 클러스터" + +msgid "Getting help" +msgstr "도움말" + +msgid "Installation" +msgstr "설치" + +msgid "" +"Join us on `IRC `_: #openstack-" +"helm on freenode" +msgstr "" +"`IRC `_: freenode 내 " +"#openstack-helm 에 가입합니다." + +msgid "Join us on `Slack `_ - #openstack-helm" +msgstr "`Slack `_ - #openstack-helm 에 가입합니다." + +msgid "" +"Next we can look at the storage class, to make sure that it was created " +"correctly:" +msgstr "" +"다음으로 올바르게 생성되었는지를 확인하기 위해, 저장소 클래스를 살펴볼 수 있" +"습니다:" + +msgid "" +"Note: This step is not relevant for PVCs within the same namespace Ceph was " +"deployed." +msgstr "" +"참고: 이 단계는 Ceph가 배포된 동일한 네임스페이스 내의 PVC와는 관련이 없습니" +"다." + +msgid "Once this has been done the workload can be restarted." +msgstr "이 작업이 완료되면 워크로드를 재시작할 수 있습니다." + +msgid "PVC Preliminary Validation" +msgstr "PVC 사전 검증" + +msgid "Persistent Storage" +msgstr "Persistent 스토리지" + +msgid "" +"Restoring is just as straightforward. Once the workload consuming the device " +"has been stopped, and the raw RBD device removed the following will import " +"the back up and create a device:" +msgstr "" +"복구도 마찬가지로 간단합니다. 장치를 소비하는 워크로드가 멈추고 raw RBD 장치" +"가 제거되면, 다음과 같이 백업을 가져와 장치를 생성할 것입니다:" + +msgid "" +"Sometimes things go wrong. These guides will help you solve many common " +"issues with the following:" +msgstr "" +"가끔 이상이 발생할 때, 이 안내서가 다음과 같은 여러가지 일반적인 문제들을 해" +"결하는 데 도움이 될 것입니다." + +msgid "" +"The parameters are what we're looking for here. If we see parameters passed " +"to the StorageClass correctly, we will see the ``ceph-mon.ceph.svc.cluster." +"local:6789`` hostname/port, things like ``userid``, and appropriate secrets " +"used for volume claims." +msgstr "" +"여기서 찾고있는 것은 매개 변수입니다. 매개 변수가 StorageClass를 올바르게 통" +"과했는지 확인했다면, ``userid``, 그리고 볼륨 클레임을 위해 사용되는 적절한 비" +"밀과 같은 ``ceph-mon.ceph.svc.cluster.local:6789`` hostname/port 를 확인할 " +"수 있습니다." 
+ +msgid "" +"This guide is to help users debug any general storage issues when deploying " +"Charts in this repository." +msgstr "" +"이 안내서는 사용자가 저장소에 차트를 배포할 때의 일반적인 저장소 문제들을 디" +"버그하는 것을 돕기 위한 것입니다." + +msgid "" +"This guide is to help users debug any general storage issues when deploying " +"charts in this repository." +msgstr "" +"이 안내서는 사용자가 저장소에 차트를 배포할 때의 일반적인 저장소 문제들을 디" +"버그하는 것을 돕기 위한 것입니다." + +msgid "" +"To deploy the HWE kernel, prior to deploying Kubernetes and OpenStack-Helm " +"the following commands should be run on each node:" +msgstr "" +"HWE 커널을 배포하려면, Kubernetes와 OpenStack-Helm을 배포하기 전에 각 노드에" +"서 다음과 같은 명령을 실행해야 합니다." + +msgid "" +"To make use of CephFS in Ubuntu the HWE Kernel is required, until the issue " +"described `here `_ is fixed." +msgstr "" +"Ubuntu에서 CephFS를 사용하려면, `여기 `_ 에 기술된 문제가 해결될 때까지 HWE " +"커널이 필요합니다." + +msgid "To test MariaDB, do the following:" +msgstr "MariaDB를 테스트하려면, 다음과 같이 수행하십시오:" + +msgid "" +"To validate persistent volume claim (PVC) creation, we've placed a test " +"manifest `here `_. Deploy this manifest and verify the job " +"completes successfully." +msgstr "" +"persistent volume claim (PVC) 생성을 검증하기 위해서, `여기 `_ " +"에 테스트 매니페스트를 배치하였습니다. 이 매니페스트를 배포하고 작업이 성공적" +"으로 완료되었는지 확인합니다." + +msgid "Troubleshooting" +msgstr "트러블슈팅" + +msgid "Ubuntu HWE Kernel" +msgstr "Ubuntu HWE 커널" + +msgid "" +"Use one of your Ceph Monitors to check the status of the cluster. A couple " +"of things to note above; our health is `HEALTH\\_OK`, we have 3 mons, we've " +"established a quorum, and we can see that all of our OSDs are up and in the " +"OSD map." +msgstr "" +"Ceph 모니터 중 하나를 사용하여 클러스터의 상태를 점검하십시오. 위의 몇 가지 " +"사항을 참고했을 때; health는 `HEALTH\\_OK`, mon은 3개로 quorum을 설정하였고, " +"모든 OSD가 up되어 OSD 맵 안에 있음을 확인할 수 있습니다." + +msgid "" +"When discovering a new bug, please report a bug in `Storyboard `_." +msgstr "" +"새로운 버그 발견 시, `Storyboard `_ 로 보고 부탁드립니다." 
+ +msgid "" +"Without this, your RBD-backed PVCs will never reach the ``Bound`` state. " +"For more information, see how to `activate namespace for ceph <../install/" +"multinode.html#activating-control-plane-namespace-for-ceph>`_." +msgstr "" +"이것 없이는, RBD-backed PVC 는 절대로 ``Bound`` 상태가 될 수 없을 것입니다. " +"더 많은 정보를 위해서는, 어떻게 `ceph를 위한 네임스페이스를 활성화 <../" +"install/multinode.html#activating-control-plane-namespace-for-ceph>`_ 하는지 " +"보십시오."